-
@ 04c915da:3dfbecc9
2025-05-20 15:53:48
This piece is the first in a series that will focus on things I think are a priority if your focus is similar to mine: building a strong family and safeguarding their future.
Choosing the ideal place to raise a family is one of the most significant decisions you will ever make. For simplicity's sake, I will break my thought process down into key factors: strong property rights, the ability to grow your own food, access to fresh water, the freedom to own and train with guns, and a dependable community.
A Jurisdiction with Strong Property Rights
Strong property rights are essential; they let you build on a solid foundation that is less likely to break underneath you. Regions with a history of limited government and clear legal protections for landowners are ideal. Personally, I think the US is the single best option globally, but within the US there are wide differences between states, so choose carefully and thoughtfully, and think long term. If you are not American this is obviously not a realistic option for you, but there are other solid jurisdictions available, especially if your family has mobility. I understand many do not have the ability to move easily; if that describes you, consider making movement and jurisdiction choice possible your first priority.
Abundant Access to Fresh Water
Water is life. I cannot overstate the importance of living somewhere with reliable, clean, and abundant freshwater. Some regions face water scarcity or heavy regulations on usage, so prioritizing a place where water is plentiful and your rights to it are protected is critical. Ideally you should have well access so you are not tied to municipal water supplies. In times of crisis or chaos, well water cannot be easily shut off or disrupted. If you live in an area that is drought prone, you are one drought away from societal chaos. Not enough people appreciate this simple fact.
Grow Your Own Food
A location with fertile soil, a favorable climate, and enough space for a small homestead or at the very least a garden is key. In stable times, a small homestead provides good food and important education for your family. In times of chaos your family being able to grow and raise healthy food provides a level of self sufficiency that many others will lack. Look for areas with minimal restrictions, good weather, and a culture that supports local farming.
Guns
The ability to defend your family is fundamental. A location where you can legally and easily own guns is a must. Look for places with a strong gun culture and a political history of protecting those rights. Owning one or two guns is not enough and without proper training they will be a liability rather than a benefit. Get comfortable and proficient. Never stop improving your skills. If the time comes that you must use a gun to defend your family, the skills must be instinct. Practice. Practice. Practice.
A Strong Community You Can Depend On
No one thrives alone. A ride or die community that rallies together in tough times is invaluable. Seek out a place where people know their neighbors, share similar values, and are quick to lend a hand. Lead by example and become a good neighbor, people will naturally respond in kind. Small towns are ideal, if possible, but living outside of a major city can be a solid balance in terms of work opportunities and family security.
Let me know if you found this helpful. My plan is to break down how I think about these five key subjects in future posts.
-
@ a19caaa8:88985eaf
2025-05-05 02:55:57
↓ A tweet by Jack (Twitter's founder) nostr:nevent1qvzqqqqqqypzpq35r7yzkm4te5460u00jz4djcw0qa90zku7739qn7wj4ralhe4zqy28wumn8ghj7un9d3shjtnyv9kh2uewd9hsqg9cdxf7s7kg8kj70a4v5j94urz8kmel03d5a47tr4v6lx9umu3c95072732
↓ Tya chiming in on it nostr:note1hr4m0d2k2cvv0yg5xtmpuma0hsxfpgcs2lxe7vlyhz30mfq8hf8qp8xmau
↓ Tya's musings nostr:nevent1qqsdt9p9un2lhsa8n27y7gnr640qdjl5n2sg0dh4kmxpqget9qsufngsvfsln nostr:note14p9prp46utd3j6mpqwv46m3r7u7cz6tah2v7tffjgledg5m4uy9qzfc2zf
↓ Advice from a knowledgeable user nostr:nevent1qvzqqqqqqypzpujqe8p9zrpuv0f4ykk3rmgnqa6p6r0lan0t8ewd0ksj89kqcz5xqqst8w0773wxnkl8sn94tvmd3razcvms0kxjwe00rvgazp9ljjlv0wq0krtvt nostr:nevent1qvzqqqqqqypzpujqe8p9zrpuv0f4ykk3rmgnqa6p6r0lan0t8ewd0ksj89kqcz5xqqsxchzm7s7vn8a82q40yss3a84583chvd9szl9qc3w5ud7pr9ugengcgt9qx
↓ Tya nostr:nevent1qqsp2rxvpax6ks45tuzhzlq94hq6qtm47w69z8p5wepgq9u4txaw88s554jkd
-
@ f11e91c5:59a0b04a
2025-04-30 07:52:21
(Originally posted 2022-07-07)
I sell bento lunches for cryptocurrency, roughly 11:30 a.m.–2:00 p.m.
◆ Address: Thursdays – 41 Udagawa-cho, Shibuya-ku, Tokyo (around the base of Abema Tower)
◆ Prices
Monacoin: 3.9 MONA
Bitzeny: 390 ZNY
Bitcoin: 3,900 sats (Lightning Network)
Ethereum: 0.0039 ETH (zkSync)
"39" sounds like "thank you" in Japanese. Prices aren't pegged to the yen.
These days I'm open only once a week. On the other days I'm out scouting spots for the kitchen car and doing various other things. If you know weekday-lunch locations inside Tokyo where I could set up, please get in touch!
The photo shows an NFC tag. If your phone has a Lightning wallet, just tap it and pay 3,900 sats. I admit this hand-written NFC tag looks shady – any self-respecting Bitcoiner probably wouldn't want to tap it – but the message is: even this works!
I used to run a BTCPay POS but switched to this setup for speed. Sometimes the tap payment fails; if that happens I reluctantly pull out the old POS, so please bear with me.
It's been one year since I spun up a node and started accepting Lightning payments. More and more people seem to be running their own nodes lately – Umbrel really is amazing. While most operators bravely take on routing, I'm a tiny one scraping together costs while begging my wife for forgiveness, so all I can do is go all-in on payment convenience. Your support means a lot!
If I may add just one thing: routing is fun, but true instant settlement and the thrill of choosing whom to open channels with are just as exciting! You only really understand this once you start accepting payments – settlement that needs no confirmations is revolutionary.
QR codes, NFC taps, fixed amounts, manual entry… it can work almost any way you like, and the pace of innovation is frighteningly fast. When I think about the merchant-side fees of PayPay, Japan's most-used payment service, I want to shout: "Bitcoin is incredible!" That said, taxes, price volatility (my shop is BTC-denominated, though), wallet UX, and adoption are still pain points.
Even so, lots of people keep working on all kinds of things, so if there's anything I can do, I'll keep at it – still on my knees to my wife, but moving forward!
-
@ 866e0139:6a9334e5
2025-05-23 17:57:24
Author: Caitlin Johnstone. This piece was written with the Pareto client. You can find all Friedenstaube texts, and further texts on the subject of peace, here. The latest Pareto articles can be found in our Telegram channel.
The latest Friedenstaube articles are now also available in the Friedenstaube's own Telegram channel.
I was listening to a young writer describe an idea he was so excited about that he couldn't sleep the night before. And I remembered how I used to be able to feel that kind of joy about writing – before Gaza. I haven't felt it since 2023.
I am not complaining, and I am not feeling sorry for myself; I am simply noting how unbelievably dark and bleak the world has become in this terrible time. It would be strange and unhealthy if I had enjoyed my work over the past year and a half. These things are not supposed to feel good. Not if you are really looking, and are honest with yourself about what you see.
It has been this ugly and this disturbing the whole time. There is really no way to reframe all this horror or make it somehow bearable. All you can do is work on yourself to create enough inner space to let the bad feelings in and feel them all the way through until they have expressed themselves. Let the despair in. The grief. The rage. The pain. Let them flow all the way through your body without resistance, and then get up and write the next piece.
That is what writing is for me now. It is never something I am excited to share or filled with inspiration about. If anything, it feels more like: "Okay, here you go, I'm terribly sorry I have to show you this, folks." It is staring into the darkness, into the blood, into the carnage, into the anguished faces – and writing down what I see, day after day.
Nothing about it is pleasant or satisfying. It is simply what you do when a genocide is unfolding in real time before your eyes, with the support of your own society. Everything about it is appalling, and there is no way to dress that up – but you do what needs to be done. Just as you would if it were your own family lying out there in the rubble.
This genocide has changed me forever. It has changed many people forever. We will never be the same again. The world will never be the same again. No matter what happens or how this nightmare ends, things will never go back to the way they were.
Nor should they. The Gaza holocaust is the product of the world as it was before it. Our society produced it – and now it is staring us all in the face. This is us. This is the fruit of the tree that Western civilization has cultivated up to this point.
Now the only thing left is to do everything we can to end the genocide – and to make sure the world draws the right lessons from it. That is one of the worthiest causes a person can devote themselves to in this life.
I still have hope that we can have a healthy world. I still have hope that writing about what is happening can one day bring joy again. But those things lie on the other side of a long, painful, confronting road that stretches through the years ahead. There is no way around it.
The world cannot find peace and happiness until we have fully faced what we have done to Gaza.
This text is the German translation of this Substack article by Caitlin Johnstone.
LET THE FRIEDENSTAUBE GROW ITS WINGS!
Here you can subscribe to the Friedenstaube and have the articles sent to you.
You can already support us:
- For 50 CHF/EUR you get a one-year subscription to the Friedenstaube.
- For 120 CHF/EUR you get a one-year subscription plus a T-shirt/hoodie with the Friedenstaube.
- For 500 CHF/EUR you become a supporter and get a lifetime subscription plus a T-shirt/hoodie with the Friedenstaube.
- From 1000 CHF you become a cooperative member of the Friedenstaube with voting rights (and get a lifetime subscription and a T-shirt/hoodie).
For deposits in CHF (reference: Friedenstaube):
For deposits in euros:
Milosz Matuschek
IBAN DE 53710520500000814137
BYLADEM1TST
Sparkasse Traunstein-Trostberg
Reference: Friedenstaube
If you would like to contribute in some other way, write to the Friedenstaube at: friedenstaube@pareto.space
Not on Nostr yet and want the full experience (liking, commenting, etc.)? You can zap the author even without a Nostr profile! Create an account on Start. Further onboarding guides can be found in the Pareto wiki.
-
@ b0a838f2:34ed3f19
2025-05-23 17:57:18
- Apaxy - Theme built to enhance the experience of browsing web directories, using the mod_autoindex Apache module and some CSS to override the default style of a directory listing. (Source Code)
GPL-3.0
Javascript
- copyparty - Portable file server with accelerated resumable uploads, deduplication, WebDAV, FTP, zeroconf, media indexer, video thumbnails, audio transcoding, and write-only folders, in a single file with no mandatory dependencies. (Demo)
MIT
Python
- DirectoryLister - Simple PHP based directory lister that lists a directory and all its sub-directories and allows you to navigate there within. (Source Code)
MIT
PHP
- filebrowser - Web File Browser with a Material Design web interface. (Source Code)
Apache-2.0
Go
- FileGator - FileGator is a powerful multi-user file manager with a single page front-end. (Demo, Source Code)
MIT
PHP/Docker
- Filestash - Web file manager that lets you manage your data anywhere it is located: FTP, SFTP, WebDAV, Git, S3, Minio, Dropbox, or Google Drive. (Demo, Source Code)
AGPL-3.0
Docker
- Gossa - Light and simple webserver for your files.
MIT
Go
- IFM - Single script file manager.
MIT
PHP
- mikochi - Browse remote folders, upload files, delete, rename, download and stream files to VLC/mpv.
MIT
Go/Docker/K8S
- miniserve - CLI tool to serve files and dirs over HTTP.
MIT
Rust
- ResourceSpace - ResourceSpace open source digital asset management software is the simple, fast, and free way to organise your digital assets. (Demo, Source Code)
BSD-4-Clause
PHP
- Surfer - Simple static file server with webui to manage files.
MIT
Nodejs
- TagSpaces - TagSpaces is an offline, cross-platform file manager and organiser that also can function as a note taking app. The WebDAV version of the application can be installed on top of a WebDAV servers such as Nextcloud or ownCloud. (Demo, Source Code)
AGPL-3.0
Nodejs
- Tiny File Manager - Web based File Manager in PHP, simple, fast and small file manager with a single file. (Demo, Source Code)
GPL-3.0
PHP
-
@ 66675158:1b644430
2025-03-23 11:39:41
I don't believe in "vibe coding" – it's just the newest Silicon Valley fad trying to give meaning to their latest favorite technology, LLMs. We've seen this pattern before with blockchain, when suddenly Non-Fungible Tokens appeared, followed by Web3 startups promising to revolutionize everything from social media to supply chains. VCs couldn't throw money fast enough at anything with "decentralized" (in name only) in the pitch deck. Andreessen Horowitz launched billion-dollar crypto funds, while Y Combinator batches filled with blockchain startups promising to be "Uber for X, but on the blockchain."
The metaverse mania followed, with Meta betting its future on digital worlds where we'd supposedly hang out as legless avatars. Decentralized (in name only) autonomous organizations emerged as the next big thing – supposedly democratic internet communities that ended up being the next scam for quick money.
Then came the inevitable collapse. The FTX implosion in late 2022 revealed fraud, Luna/Terra's death spiral wiped out billions (including my ten thousand dollars), while Celsius and BlockFi froze customer assets before bankruptcy.
By 2023, crypto winter had fully set in. The SEC started aggressive enforcement actions, while users realized that blockchain technology had delivered almost no practical value despite a decade of promises.
Blockchain's promises tapped into fundamental human desires – decentralization resonated with a generation disillusioned by traditional institutions. Evangelists presented a utopian vision of freedom from centralized control. Perhaps most significantly, crypto offered a sense of meaning in an increasingly abstract world, making the clear signs of scams harder to notice.
The technology itself had failed to solve any real-world problems at scale. By 2024, the once-mighty crypto ecosystem had become a cautionary tale. Venture firms quietly scrubbed blockchain references from their websites while founders pivoted to AI and large language models.
Most reading this are likely fellow bitcoiners and nostr users who understand that Bitcoin is blockchain's only valid use case. But I shared that painful history because I believe the AI-hype cycle will follow the same trajectory.
Just like with blockchain, we're now seeing VCs who once couldn't stop talking about "Web3" falling over themselves to fund anything with "AI" in the pitch deck. The buzzwords have simply changed from "decentralized" to "intelligent."
"Vibe coding" is the perfect example – a trendy name for what is essentially just fuzzy instructions to LLMs. Developers who've spent years honing programming skills are now supposed to believe that "vibing" with an AI is somehow a legitimate methodology.
This might be controversial to some, but obvious to others:
Formal, context-free grammar will always remain essential for building precise systems, regardless of how advanced natural language technology becomes
The mathematical precision of programming languages provides a foundation that human language's ambiguity can never replace. Programming requires precision – languages, compilers, and processors operate on explicit instructions, not vibes. What "vibe coding" advocates miss is that beneath every AI-generated snippet lies the same deterministic rules that have always governed computation.
LLMs don't understand code in any meaningful sense—they've just ingested enormous datasets of human-written code and can predict patterns. When they "work," it's because they've seen similar patterns before, not because they comprehend the underlying logic.
This creates a dangerous dependency. Junior developers "vibing" with LLMs might get working code without understanding the fundamental principles. When something breaks in production, they'll lack the knowledge to fix it.
Even experienced developers can find themselves in treacherous territory when relying too heavily on LLM-generated code. What starts as a productivity boost can transform into a dependency crutch.
The real danger isn't just technical limitations, but the false confidence it instills. Developers begin to believe they understand systems they've merely instructed an AI to generate – fundamentally different from understanding code you've written yourself.
We're already seeing the warning signs: projects cobbled together with LLM-generated code that work initially but become maintenance nightmares when requirements change or edge cases emerge.
The venture capital money is flowing exactly as it did with blockchain. Anthropic raised billions, OpenAI is valued astronomically despite minimal revenue, and countless others are competing to build ever-larger models with vague promises. Every startup now claims to be "AI-powered" regardless of whether it makes sense.
Don't get me wrong—there's genuine innovation happening in AI research. But "vibe coding" isn't it. It's a marketing term designed to make fuzzy prompting sound revolutionary.
Cursor perfectly embodies this AI hype cycle. It's an AI-enhanced code editor built on VS Code that promises to revolutionize programming by letting you "chat with your codebase." Just like blockchain startups promised to "revolutionize" industries, Cursor promises to transform development by adding LLM capabilities.
Yes, Cursor can be genuinely helpful. It can explain unfamiliar code, suggest completions, and help debug simple issues. After trying it for just an hour, I found the autocomplete to be MAGICAL for simple refactoring and basic functionality.
But the marketing goes far beyond reality. The suggestion that you can simply describe what you want and get production-ready code is dangerously misleading. What you get are approximations with:
- Security vulnerabilities the model doesn't understand
- Edge cases it hasn't considered
- Performance implications it can't reason about
- Dependency conflicts it has no way to foresee
The most concerning aspect is how such tools are marketed to beginners as shortcuts around learning fundamentals. "Why spend years learning to code when you can just tell AI what you want?" This is reminiscent of how crypto was sold as a get-rich-quick scheme requiring no actual understanding.
When you "vibe code" with an AI, you're not eliminating complexity—you're outsourcing understanding to a black box. This creates developers who can prompt but not program, who can generate but not comprehend.
The real utility of LLMs in development is in augmenting existing workflows:
- Explaining unfamiliar codebases
- Generating boilerplate for well-understood patterns
- Suggesting implementations that a developer evaluates critically
- Assisting with documentation and testing
These uses involve the model as a subordinate assistant to a knowledgeable developer, not as a replacement for expertise. This is where the technology adds value—as a sophisticated tool in skilled hands.
Cursor is just a better hammer, not a replacement for understanding what you're building. The actual value emerges when used by developers who understand what happens beneath the abstractions. They can recognize when AI suggestions make sense and when they don't because they have the fundamental knowledge to evaluate output critically.
This is precisely where the "vibe coding" narrative falls apart.
-
@ 91bea5cd:1df4451c
2025-04-15 06:27:28
Basics
```bash
lsblk  # Lists all block devices and their mount points.
```
To create the filesystem:
```bash
mkfs.btrfs -L "ThePool" -f /dev/sdx
```
Creating a subvolume:
```bash
btrfs subvolume create SubVol
```
Mounting the filesystem:
```bash
mount -o compress=zlib,subvol=SubVol,autodefrag /dev/sdx /mnt
```
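To make this mount survive a reboot, the same options can go into /etc/fstab. This is a minimal sketch assuming the device, subvolume, and mount point used above; using the filesystem UUID reported by `blkid` is more robust than the device name, and the UUID shown here is a placeholder:
```bash
# /etc/fstab entry (illustrative; replace the UUID with the one blkid reports for /dev/sdx)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt  btrfs  compress=zlib,subvol=SubVol,autodefrag  0  0
```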
List the disks that make up the filesystem mounted at the directory:
```bash
btrfs filesystem show /mnt
```
Add a new disk to the filesystem:
```bash
btrfs device add -f /dev/sdy /mnt
```
List the filesystem's disks again:
```bash
btrfs filesystem show /mnt
```
Show usage of the filesystem's disks:
```bash
btrfs filesystem df /mnt
```
Balance the data across the disks as raid1:
```bash
btrfs filesystem balance start -dconvert=raid1 -mconvert=raid1 /mnt
```
Scrub is a pass over all of the filesystem's data and metadata that verifies the checksums. If a valid copy is available (replicated block group profiles), the damaged one is repaired. All copies of replicated profiles are validated.
To start the scrub process:
```bash
btrfs scrub start /mnt
```
To see the status of a running Btrfs scrub:
```bash
btrfs scrub status /mnt
```
To see the Btrfs scrub status for each device, or to cancel it:
```bash
btrfs scrub status -d /data
btrfs scrub cancel /data
```
To resume a Btrfs scrub that you cancelled or paused:
```bash
btrfs scrub resume /data
```
Listing the subvolumes:
```bash
btrfs subvolume list /Reports
```
Creating a snapshot of a subvolume:
Here we are creating a read-write snapshot called marketing-snap of the marketing subvolume.
```bash
btrfs subvolume snapshot /Reports/marketing /Reports/marketing-snap
```
You can also create a read-only snapshot using the -r flag as shown. marketing-rosnap is a read-only snapshot of the marketing subvolume.
```bash
btrfs subvolume snapshot -r /Reports/marketing /Reports/marketing-rosnap
```
Force filesystem synchronization using the 'sync' utility
To force the filesystem to synchronize, invoke the sync option as shown. Note that the filesystem must already be mounted for the sync to complete successfully.
```bash
btrfs filesystem sync /Reports
```
To remove a device from the filesystem, use the device delete command as shown.
```bash
btrfs device delete /dev/sdc /Reports
```
To poll the status of a scrub, use the scrub status command with the -dR option.
```bash
btrfs scrub status -dR /Reports
```
To cancel a running scrub, use the scrub cancel command.
```bash
sudo btrfs scrub cancel /Reports
```
To resume or continue a previously interrupted scrub, run the scrub resume command:
```bash
sudo btrfs scrub resume /Reports
```
Show storage device usage:
```bash
btrfs filesystem usage /data
```
To spread the data, metadata, and system data across all of the RAID's storage devices (including a newly added storage device) mounted at the /data directory, run the following command:
```bash
sudo btrfs balance start --full-balance /data
```
It may take a while to spread the data, metadata, and system data across all of the RAID's storage devices if the filesystem contains a lot of data.
Important Btrfs mount options
In this section, I will explain some of the important Btrfs mount options. So let's get started.
The most important Btrfs mount options are:
**1. acl and noacl**
ACL manages user and group permissions for the files/directories of the Btrfs filesystem.
The acl Btrfs mount option enables ACL. To disable ACL, you can use the noacl mount option.
ACL is enabled by default, so the Btrfs filesystem uses the acl mount option by default.
**2. autodefrag and noautodefrag**
Defragmenting a Btrfs filesystem improves filesystem performance by reducing data fragmentation.
The autodefrag mount option enables automatic defragmentation of the Btrfs filesystem.
The noautodefrag mount option disables automatic defragmentation of the Btrfs filesystem.
Automatic defragmentation is disabled by default, so the Btrfs filesystem uses the noautodefrag mount option by default.
**3. compress and compress-force**
These control data compression at the filesystem level of the Btrfs filesystem.
The compress option compresses only the files that are worth compressing (if compressing the file saves disk space).
The compress-force option compresses every file on the Btrfs filesystem, even if compressing the file increases its size.
The Btrfs filesystem supports several compression algorithms, and each algorithm has different compression levels.
The algorithms supported by Btrfs are: lzo, zlib (levels 1 to 9), and zstd (levels 1 to 15).
You can specify which compression algorithm to use for the Btrfs filesystem with one of the following mount options:
- compress=algorithm:level
- compress-force=algorithm:level
For more information, see my article on how to enable Btrfs filesystem compression.
**4. subvol and subvolid**
These mount options are used to mount a specific subvolume of a Btrfs filesystem separately.
The subvol mount option mounts a subvolume of a Btrfs filesystem using its relative path.
The subvolid mount option mounts a subvolume of a Btrfs filesystem using the subvolume ID.
For more information, see my article on how to create and mount Btrfs subvolumes.
**5. device**
The device mount option is used with multi-device Btrfs filesystems or Btrfs RAID.
In some cases the operating system may fail to detect the storage devices used in a multi-device Btrfs filesystem or Btrfs RAID. In those cases you can use the device mount option to specify the devices you want to use for the Btrfs multi-device filesystem or RAID.
You can use the device mount option multiple times to load different storage devices for the Btrfs multi-device filesystem or RAID.
You can use the device name (i.e., sdb, sdc) or the UUID, UUID_SUB, or PARTUUID of the storage device with the device mount option to identify the storage device.
For example:
- device=/dev/sdb
- device=/dev/sdb,device=/dev/sdc
- device=UUID_SUB=490a263d-eb9a-4558-931e-998d4d080c5d
- device=UUID_SUB=490a263d-eb9a-4558-931e-998d4d080c5d,device=UUID_SUB=f7ce4875-0874-436a-b47d-3edef66d3424
**6. degraded**
The degraded mount option allows a Btrfs RAID to be mounted with fewer storage devices than the RAID profile requires.
For example, the raid1 profile requires 2 storage devices to be present. If one of the storage devices is unavailable for any reason, you can use the degraded mount option to mount the RAID even though only 1 of the 2 storage devices is available.
**7. commit**
The commit mount option sets the interval (in seconds) at which data is written to the storage device.
The default is 30 seconds.
To set the commit interval to 15 seconds, you can use the mount option commit=15 (say).
**8. ssd and nossd**
The ssd mount option tells the Btrfs filesystem that it is running on an SSD storage device, and the filesystem performs the necessary SSD optimizations.
The nossd mount option disables SSD optimization.
The Btrfs filesystem automatically detects whether an SSD is used for the filesystem. If an SSD is used, the ssd mount option is enabled; otherwise, the nossd mount option is enabled.
**9. ssd_spread and nossd_spread**
The ssd_spread mount option tries to allocate large contiguous chunks of unused space on the SSD. This feature improves the performance of low-end (cheap) SSDs.
The nossd_spread mount option disables the ssd_spread feature.
The Btrfs filesystem automatically detects whether an SSD is used for the filesystem. If an SSD is used, the ssd_spread mount option is enabled; otherwise, the nossd_spread mount option is enabled.
**10. discard and nodiscard**
If you are using an SSD that supports asynchronous queued TRIM (SATA rev 3.1), the discard mount option enables discarding of freed file blocks. This improves SSD performance.
If the SSD does not support asynchronous queued TRIM, the discard mount option will hurt SSD performance. In that case, the nodiscard mount option should be used.
By default, the nodiscard mount option is used.
**11. norecovery**
If the norecovery mount option is used, the Btrfs filesystem will not attempt data recovery operations at mount time.
**12. usebackuproot and nousebackuproot**
If the usebackuproot mount option is used, the Btrfs filesystem will try to recover any bad/corrupted tree root at mount time. The Btrfs filesystem can store several tree roots on the filesystem; the usebackuproot mount option will look for a good tree root and use the first good one it finds.
The nousebackuproot mount option will not check for or recover bad/corrupted tree roots at mount time. This is the default behavior of the Btrfs filesystem.
**13. space_cache, space_cache=version, nospace_cache, and clear_cache**
The space_cache mount option controls the free-space cache. The free-space cache is used to improve the performance of reading a block group's free space from the Btrfs filesystem into memory (RAM).
The Btrfs filesystem supports 2 versions of the free-space cache: v1 (the default) and v2.
The v2 free-space cache mechanism improves the performance of large filesystems (multiple terabytes in size).
You can use the mount option space_cache=v1 to select v1 of the free-space cache, and space_cache=v2 to select v2.
The clear_cache mount option is used to clear the free-space cache.
Once the v2 free-space cache has been created, the cache must be cleared in order to create a v1 free-space cache.
So, to use the v1 free-space cache after the v2 cache has been created, the clear_cache and space_cache=v1 mount options must be combined: clear_cache,space_cache=v1
The nospace_cache mount option is used to disable the free-space cache.
To disable the free-space cache after the v1 or v2 cache has been created, the nospace_cache and clear_cache mount options must be combined: clear_cache,nospace_cache
**14. skip_balance**
By default, an interrupted/paused balance operation on a multi-device Btrfs filesystem or Btrfs RAID is resumed automatically as soon as the Btrfs filesystem is mounted. To disable automatic resumption of an interrupted/paused balance operation on a multi-device Btrfs filesystem or Btrfs RAID, you can use the skip_balance mount option.
**15. datacow and nodatacow**
The datacow mount option enables the Copy-on-Write (CoW) feature of the Btrfs filesystem. This is the default behavior.
If you want to disable Copy-on-Write (CoW) for newly created files, mount the Btrfs filesystem with the nodatacow mount option.
**16. datasum and nodatasum**
The datasum mount option enables data checksumming for newly created files on the Btrfs filesystem. This is the default behavior.
If you do not want the Btrfs filesystem to checksum the data of newly created files, mount the filesystem with the nodatasum mount option.
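As an illustration of how these options combine in practice, here is a hypothetical mount line for a Btrfs filesystem on an SSD. The particular combination is only a sketch, not a recommendation from the text above; pick the options that match your own workload:
```bash
# zstd level 3 compression, SSD optimizations, v2 free-space cache, 30-second commit interval
mount -o compress=zstd:3,ssd,space_cache=v2,commit=30 /dev/sdx /mnt
```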
Btrfs profiles
A Btrfs profile tells the Btrfs filesystem how many copies of the data/metadata to keep and which RAID levels to use for the data/metadata. The Btrfs filesystem includes many profiles; understanding them will help you configure a Btrfs RAID exactly the way you want.
The available Btrfs profiles are as follows:
single: If the single profile is used for the data/metadata, only one copy of the data/metadata is stored on the filesystem, even if you add multiple storage devices to it. So, 100% of the disk space of each storage device added to the filesystem can be used.
dup: If the dup profile is used for the data/metadata, each storage device added to the filesystem keeps two copies of the data/metadata. So, 50% of the disk space of each storage device added to the filesystem can be used.
raid0: In the raid0 profile, the data/metadata is striped equally across all storage devices added to the filesystem. In this setup there is no redundant (duplicate) data/metadata, so 100% of the disk space of each storage device can be used. If any one of the storage devices fails, the entire filesystem is corrupted. You need at least two storage devices to set up a Btrfs filesystem with the raid0 profile.
raid1: In the raid1 profile, two copies of the data/metadata are stored across the storage devices added to the filesystem. In this setup the RAID array can survive one drive failure, but you can use only 50% of the total disk space. You need at least two storage devices to set up a Btrfs filesystem with the raid1 profile.
raid1c3: In the raid1c3 profile, three copies of the data/metadata are stored across the storage devices added to the filesystem. In this setup the RAID array can survive two drive failures, but you can use only 33% of the total disk space. You need at least three storage devices to set up a Btrfs filesystem with the raid1c3 profile.
raid1c4: In the raid1c4 profile, four copies of the data/metadata are stored across the storage devices added to the filesystem. In this setup the RAID array can survive three drive failures, but you can use only 25% of the total disk space. You need at least four storage devices to set up a Btrfs filesystem with the raid1c4 profile.
raid10: In the raid10 profile, two copies of the data/metadata are stored across the storage devices added to the filesystem, as in the raid1 profile. In addition, the data/metadata is striped across the storage devices, as in the raid0 profile.
The raid10 profile is a hybrid of the raid1 and raid0 profiles. Some of the storage devices form raid1 arrays, and some of those raid1 arrays are used to form a raid0 array. In a raid10 setup the filesystem can survive a single drive failure in each of the raid1 arrays.
You can use 50% of the total disk space in a raid10 configuration. You need at least four storage devices to set up a Btrfs filesystem with the raid10 profile.
raid5: In the raid5 profile, one copy of the data/metadata is striped across the storage devices. A single parity is calculated and distributed across the storage devices of the RAID array.
In a raid5 setup the filesystem can survive a single drive failure. If a drive fails, you can add a new drive to the filesystem and the lost data will be reconstructed from the distributed parity on the running drives.
You can use 100×(N-1)/N % of the total disk space in a raid5 configuration, where N is the number of storage devices added to the filesystem. You need at least three storage devices to set up a Btrfs filesystem with the raid5 profile.
raid6: In the raid6 profile, one copy of the data/metadata is striped across the storage devices. Two parities are calculated and distributed across the storage devices of the RAID array.
In a raid6 setup the filesystem can survive two drive failures at the same time. If a drive fails, you can add a new drive to the filesystem and the lost data will be reconstructed from the two distributed parities on the running drives.
You can use 100×(N-2)/N % of the total disk space in a raid6 configuration, where N is the number of storage devices added to the filesystem. You need at least four storage devices to set up a Btrfs filesystem with the raid6 profile.
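A short sketch of how profiles are chosen in practice: the -d flag sets the data profile and -m the metadata profile at creation time, and an existing filesystem can be converted later with a balance. The device names below are placeholders:
```bash
# Two-device mirror for both data and metadata
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

# Four devices: striped+mirrored data, three metadata copies
mkfs.btrfs -d raid10 -m raid1c3 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Convert an existing, mounted filesystem to raid1 afterwards
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
```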
-
@ ec42c765:328c0600
2025-02-05 23:45:09
test
test
-
@ 16f1a010:31b1074b
2025-03-20 14:32:25
Introduction
grain is a nostr relay built using Go, currently utilizing MongoDB as its database. Binaries are provided for AMD64 Windows and Linux. grain stands for Go Relay Architecture for Implementing Nostr.
Prerequisites
- Grain requires a running MongoDB instance. Please refer to this separate guide for instructions on setting up MongoDB: nostr:naddr1qvzqqqr4gupzq9h35qgq6n8ll0xyyv8gurjzjrx9sjwp4hry6ejnlks8cqcmzp6tqqxnzde5xg6rwwp5xsuryd3knfdr7g
Download Grain
Download the latest release for your system from the GitHub releases page
amd64 binaries provided for Windows and Linux, if you have a different CPU architecture, you can download and install go to build grain from source
Installation and Execution
- Create a new folder on your system where you want to run Grain.
- The downloaded binary comes bundled with a ZIP file containing a folder named "app," which holds the frontend HTML files. Unzip the "app" folder into the same directory as the Grain executable.
Run Grain
- Open your terminal or command prompt and navigate to the Grain directory.
- Execute the Grain binary.
on Linux you will first have to make the program executable:
```bash
chmod +x grain_linux_amd64
```
Then you can run the program:
```bash
./grain_linux_amd64
```
(alternatively on windows, you can just double click the grain_windows_amd64.exe to start the relay)
You should see a terminal window displaying the port on which your relay and frontend are running.
If you get
Failed to copy app/static/examples/config.example.yml to config.yml: open app/static/examples/config.example.yml: no such file or directory
Then you probably forgot to put the app folder in the same directory as your executable or you did not unzip the folder.
Congrats! You're running grain 🌾!
You may want to change your NIP-11 relay information document (relay_metadata.json). This informs clients of the relay's capabilities, administrative contacts, and various server attributes. It's located in the same directory as your executable.
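For reference, a NIP-11 relay information document is plain JSON. The following is only an illustrative sketch using the standard NIP-11 field names; every value is a placeholder to replace with your own details, and the file grain ships may contain additional fields:
```json
{
  "name": "My grain relay",
  "description": "A personal nostr relay running grain",
  "pubkey": "your-admin-pubkey-in-hex",
  "contact": "mailto:admin@example.com",
  "supported_nips": [1, 11],
  "software": "grain",
  "version": "0.1.0"
}
```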
Configuration Files
Once Grain has been executed for the first time, it will generate the default configuration files inside the directory where the executable is located. These files are:
```bash
config.yml
whitelist.yml
blacklist.yml
```
Configuration Documentation
You can always find the latest example configs on my site or in the github repo here: config.yml
Config.yml
This config.yml file is where you customize how your Grain relay operates. Each section controls different aspects of the relay's behavior.
1. `mongodb` (Database Settings)
- `uri: mongodb://localhost:27017/`:
  - This is the connection string for your MongoDB database. `mongodb://localhost:27017/` indicates that your MongoDB server is running on the same computer as your Grain relay (localhost) and listening on port 27017 (the default MongoDB port).
  - If your MongoDB server is on a different machine, you'll need to change `localhost` to the server's IP address or hostname.
  - The trailing `/` indicates the root of the mongodb server. You will define the database in the next line.
- `database: grain`:
  - This specifies the name of the MongoDB database that Grain will use to store Nostr events. Grain will create this database if it doesn't already exist.
  - You can name the database whatever you want. If you want to run multiple grain relays, you can, and they can have different databases running on the same mongo server.
2. `server` (Relay Server Settings)
- `port: :8181`:
  - This sets the port on which your Grain relay will listen for incoming nostr websocket connections and what port the frontend will be available at.
- `read_timeout: 10 # in seconds`:
  - This is the maximum time (in seconds) that the relay will wait for a client to send data before closing the connection.
- `write_timeout: 10 # in seconds`:
  - This is the maximum time (in seconds) that the relay will wait for a client to receive data before closing the connection.
- `idle_timeout: 120 # in seconds`:
  - This is the maximum time (in seconds) that the relay will keep a connection open if there's no activity.
- `max_connections: 100`:
  - This sets the maximum number of simultaneous client connections that the relay will allow.
- `max_subscriptions_per_client: 10`:
  - This sets the maximum amount of subscriptions a single client can request from the relay.
3. `resource_limits` (System Resource Limits)
- `cpu_cores: 2 # Limit the number of CPU cores the application can use`:
  - This restricts the number of CPU cores that Grain can use. Useful for controlling resource usage on your server.
- `memory_mb: 1024 # Cap the maximum amount of RAM in MB the application can use`:
  - This limits the maximum amount of RAM (in megabytes) that Grain can use.
- `heap_size_mb: 512 # Set a limit on the Go garbage collector's heap size in MB`:
  - This sets a limit on the amount of memory that the Go programming language's garbage collector can use.
4. `auth` (Authentication Settings)
- `enabled: false # Enable or disable AUTH handling`:
  - If set to `true`, this enables authentication handling, requiring clients to authenticate before using the relay.
  - If set to `false`, clients can use the relay without authenticating.
- `relay_url: "wss://relay.example.com/" # Specify the relay URL`:
  - If authentication is enabled, this is the url that clients will use to authenticate.
5. `UserSync` (User Synchronization)
- `user_sync: false`:
  - If set to true, the relay will attempt to sync user data from other relays.
- `disable_at_startup: true`:
  - If user sync is enabled, this will prevent the sync from starting when the relay starts.
- `initial_sync_relays: [...]`:
  - A list of other relays to pull user data from.
- `kinds: []`:
  - A list of event kinds to pull from the other relays. Leaving this empty will pull all event kinds.
- `limit: 100`:
  - The limit of events to pull from the other relays.
- `exclude_non_whitelisted: true`:
  - If set to true, only users on the whitelist will have their data synced.
- `interval: 360`:
  - The interval in minutes that the relay will resync user data.
6. `backup_relay` (Backup Relay)
- `enabled: false`:
  - If set to true, the relay will send copies of received events to the backup relay.
- `url: "wss://some-relay.com"`:
  - The url of the backup relay.
7. `event_purge` (Event Purging)
- `enabled: false`:
  - If set to `true`, the relay will automatically delete old events.
  - If set to `false`, old events are not automatically deleted.
- `keep_interval_hours: 24`:
  - The number of hours to keep events before purging them.
- `purge_interval_minutes: 240`:
  - How often (in minutes) the purging process runs.
- `purge_by_category: ...`:
  - Allows you to specify which categories of events (regular, replaceable, addressable, deprecated) to purge.
- `purge_by_kind_enabled: false`:
  - If set to true, events will be purged based on the kinds listed below.
- `kinds_to_purge: ...`:
  - A list of event kinds to purge.
- `exclude_whitelisted: true`:
  - If set to true, events from whitelisted users will not be purged.
8. `event_time_constraints` (Event Time Constraints)
- `min_created_at: 1577836800`:
  - The minimum `created_at` timestamp (Unix timestamp) that events must have to be accepted by the relay.
- `max_created_at_string: now+5m`:
  - The maximum created at time that an event can have. This example shows that the max created at time is 5 minutes in the future from the time the event is received.
  - `min_created_at_string` and `max_created_at` work the same way.
9. `rate_limit` (Rate Limiting)
- `ws_limit: 100`:
  - The maximum number of WebSocket messages per second that the relay will accept.
- `ws_burst: 200`:
  - Allows a temporary burst of WebSocket messages.
- `event_limit: 50`:
  - The maximum number of Nostr events per second that the relay will accept.
- `event_burst: 100`:
  - Allows a temporary burst of Nostr events.
- `req_limit: 50`:
  - The limit of http requests per second.
- `req_burst: 100`:
  - The allowed burst of http requests.
- `max_event_size: 51200`:
  - The maximum size (in bytes) of a Nostr event that the relay will accept.
- `kind_size_limits: ...`:
  - Allows you to set size limits for specific event kinds.
- `category_limits: ...`:
  - Allows you to set rate limits for different event categories (ephemeral, addressable, regular, replaceable).
- `kind_limits: ...`:
  - Allows you to set rate limits for specific event kinds.
By understanding these settings, you can tailor your Grain Nostr relay to meet your specific needs and resource constraints.
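To tie the sections above together, here is a trimmed, illustrative sketch of what a config.yml along these lines could look like. The key names follow the documentation above, but the exact nesting and defaults are assumptions; treat the config.yml that grain generates on first run as the source of truth:
```yaml
mongodb:
  uri: mongodb://localhost:27017/
  database: grain

server:
  port: ":8181"
  read_timeout: 10      # in seconds
  write_timeout: 10     # in seconds
  idle_timeout: 120     # in seconds
  max_connections: 100
  max_subscriptions_per_client: 10

resource_limits:
  cpu_cores: 2
  memory_mb: 1024
  heap_size_mb: 512

auth:
  enabled: false
  relay_url: "wss://relay.example.com/"

backup_relay:
  enabled: false
  url: "wss://some-relay.com"
```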
whitelist.yml
The whitelist.yml file is used to control which users, event kinds, and domains are allowed to interact with your Grain relay. Here's a breakdown of the settings:
1. `pubkey_whitelist` (Public Key Whitelist)
- `enabled: false`:
  - If set to `true`, this enables the public key whitelist. Only users whose public keys are listed will be allowed to publish events to your relay.
  - If set to `false`, the public key whitelist is not enforced.
- `pubkeys:`:
  - A list of hexadecimal public keys that are allowed to publish events. `pubkey1` and `pubkey2` are placeholders; you will replace these with actual hexadecimal public keys.
- `npubs:`:
  - A list of npubs that are allowed to publish events. `npub18ls2km9aklhzw9yzqgjfu0anhz2z83hkeknw7sl22ptu8kfs3rjq54am44` and `npub2` are placeholders; replace them with actual npubs.
  - npubs are bech32 encoded public keys.
2. `kind_whitelist` (Event Kind Whitelist)
- `enabled: false`:
  - If set to `true`, this enables the event kind whitelist. Only events with the specified kinds will be allowed.
  - If set to `false`, events are not filtered by kind.
- `kinds:`:
  - A list of event kinds (as strings) that are allowed. `"1"` and `"2"` are example kinds; replace these with the kinds you want to allow.
  - Example kinds are 0 for metadata, 1 for short text notes, and 2 for recommend server.
3. `domain_whitelist` (Domain Whitelist)
- `enabled: false`:
  - If set to `true`, this enables the domain whitelist. This checks the domain's .well-known folder for its nostr.json. This file contains a list of pubkeys; they will be considered whitelisted if they are on this list.
  - If set to `false`, the domain whitelist is not used.
- `domains:`:
  - A list of domains that are allowed. `"example.com"` and `"anotherdomain.com"` are example domains. Replace these with the domains you want to allow.
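Putting the three whitelist sections together, an illustrative whitelist.yml could look roughly like this. The key names are taken from the documentation above, but the exact layout is an assumption; check the whitelist.yml that grain generates for the authoritative structure:
```yaml
pubkey_whitelist:
  enabled: false
  pubkeys: []   # hexadecimal public keys
  npubs: []     # bech32-encoded public keys

kind_whitelist:
  enabled: false
  kinds: ["0", "1"]

domain_whitelist:
  enabled: false
  domains: ["example.com"]
```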
blacklist.yml
The blacklist.yml file allows you to block specific content, users, and words from your Grain relay. Here's a breakdown of the settings:
1. `enabled: true`
- This setting enables the blacklist functionality. If set to `true`, the relay will actively block content and users based on the rules defined in this file.
2. `permanent_ban_words:`
- This section lists words that, if found in an event, will result in a permanent ban for the event's author. `really bad word` is a placeholder; replace it with any words you want to permanently block.
3. `temp_ban_words:`
- This section lists words that, if found in an event, will result in a temporary ban for the event's author. `crypto`, `web3`, and `airdrop` are examples; replace them with the words you want to temporarily block.
4. `max_temp_bans: 3`
- This sets the maximum number of temporary bans a user can receive before they are permanently banned.
5. `temp_ban_duration: 3600`
- This sets the duration of a temporary ban in seconds. `3600` seconds equals one hour.
6. `permanent_blacklist_pubkeys:`
- This section lists hexadecimal public keys that are permanently blocked from using the relay. `db0c9b8acd6101adb9b281c5321f98f6eebb33c5719d230ed1870997538a9765` is an example; replace it with the public keys you want to block.
7. `permanent_blacklist_npubs:`
- This section lists npubs that are permanently blocked from using the relay. `npub1x0r5gflnk2mn6h3c70nvnywpy2j46gzqwg6k7uw6fxswyz0md9qqnhshtn` is an example; replace it with the npubs you want to block.
- npubs are the human readable version of public keys.
8. `mutelist_authors:`
- This section lists the hexadecimal public keys of authors of kind 10000 mutelists. Pubkeys on those mutelists will be treated as part of the permanent blacklist. This provides a nostr-native way to manage the blacklist of your relay. `3fe0ab6cbdb7ee27148202249e3fb3b89423c6f6cda6ef43ea5057c3d93088e4` is an example; replace it with the public keys of authors whose mutelists you would like to use as a blacklist. Consider using your own.
- Important Note: The mutelist event MUST be stored in this relay for it to be retrieved. This means your relay must have a copy of the author's kind 10000 mutelist to consider it for the blacklist.
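For orientation, an illustrative blacklist.yml combining the settings above might look like this. As with the other files, the layout is an assumption based on the key names documented here; treat the generated blacklist.yml as the source of truth:
```yaml
enabled: true
permanent_ban_words: []
temp_ban_words:
  - crypto
  - web3
  - airdrop
max_temp_bans: 3
temp_ban_duration: 3600   # seconds (one hour)
permanent_blacklist_pubkeys: []
permanent_blacklist_npubs: []
mutelist_authors: []      # hex pubkeys whose kind 10000 mutelists act as a blacklist
```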
Running Grain as a Service:
Windows Service:
To run Grain as a Windows service, you can use tools like NSSM (Non-Sucking Service Manager). NSSM allows you to easily install and manage any application as a Windows service.
* For instructions on how to install NSSM, please refer to this article: [Link to NSSM install guide coming soon].
-
Open Command Prompt as Administrator:
- Open the Windows Start menu, type "cmd," right-click on "Command Prompt," and select "Run as administrator."
-
Navigate to NSSM Directory:
- Use the
cd
command to navigate to the directory where you extracted NSSM. For example, if you extracted it toC:\nssm
, you would typecd C:\nssm
and press Enter.
- Use the
-
Install the Grain Service:
- Run the command
nssm install grain
. - A GUI will appear, allowing you to configure the service.
- Run the command
-
Configure Service Details:
- In the "Path" field, enter the full path to your Grain executable (e.g.,
C:\grain\grain_windows_amd64.exe
). - In the "Startup directory" field, enter the directory where your Grain executable is located (e.g.,
C:\grain
).
- In the "Path" field, enter the full path to your Grain executable (e.g.,
-
Install the Service:
- Click the "Install service" button.
-
Manage the Service:
- You can now manage the Grain service using the Windows Services manager. Open the Start menu, type "services.msc," and press Enter. You can start, stop, pause, or restart the Grain service from there.
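If you prefer to skip the GUI, NSSM can also be driven from the command line. The following is a sketch based on NSSM's usual command syntax, reusing the paths assumed in the steps above; double-check against `nssm help` on your machine before relying on it:
```
nssm install grain "C:\grain\grain_windows_amd64.exe"
nssm set grain AppDirectory "C:\grain"
nssm start grain
```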
Linux Service (systemd):
To run Grain as a Linux service, you can use systemd, the standard service manager for most modern Linux distributions.
-
Create a Systemd Service File:
- Open a text editor with root privileges (e.g.,
sudo nano /etc/systemd/system/grain.service
).
- Open a text editor with root privileges (e.g.,
-
Add Service Configuration:
- Add the following content to the
grain.service
file, replacing the placeholders with your actual paths and user information:
```toml [Unit] Description=Grain Nostr Relay After=network.target
[Service] ExecStart=/path/to/grain_linux_amd64 WorkingDirectory=/path/to/grain/directory Restart=always User=your_user #replace your_user Group=your_group #replace your_group
[Install] WantedBy=multi-user.target ```
- Replace
/path/to/grain/executable
with the full path to your Grain executable. - Replace
/path/to/grain/directory
with the directory containing your Grain executable. - Replace
your_user
andyour_group
with the username and group that will run the Grain service.
- Add the following content to the
-
Reload Systemd:
- Run the command
sudo systemctl daemon-reload
to reload the systemd configuration.
- Run the command
-
Enable the Service:
- Run the command
sudo systemctl enable grain.service
to enable the service to start automatically on boot.
- Run the command
-
Start the Service:
- Run the command
sudo systemctl start grain.service
to start the service immediately.
- Run the command
-
Check Service Status:
- Run the command
sudo systemctl status grain.service
to check the status of the Grain service. This will show you if the service is running and any recent logs. - You can run
sudo journalctl -f -u grain.service
to watch the logs
- Run the command
More guides are in the works for setting up tailscale to access your relay from anywhere over a private network and for setting up a cloudflare tunnel to your domain to deploy a grain relay accessible on a subdomain of your site eg wss://relay.yourdomain.com
-
@ 460c25e6:ef85065c
2025-02-25 15:20:39If you don't know where your posts are, you might as well just stay in the centralized Twitter. You either take control of your relay lists, or they will control you. Amethyst offers several lists of relays for our users. We are going to go one by one to help clarify what they are and which options are best for each one.
Public Home/Outbox Relays
Home relays store all YOUR content: all your posts, likes, replies, lists, etc. It's your home. Amethyst will send your posts here first. Your followers will use these relays to get new posts from you. So, if you don't have anything there, they will not receive your updates.
Home relays must allow queries from anyone, ideally without the need to authenticate. They can limit writes to paid users without affecting anyone's experience.
This list should have a maximum of 3 relays. More than that will only make your followers waste their mobile data getting your posts. Keep it simple. Out of the 3 relays, I recommend:
- 1 large public, international relay: nos.lol, nostr.mom, relay.damus.io, etc.
- 1 personal relay to store a copy of all your content in a place no one can delete. Go to relay.tools and never be censored again.
- 1 really fast relay located in your country: paid options like http://nostr.wine are great
Do not include relays that block users from seeing posts in this list. If you do, no one will see your posts.
Public Inbox Relays
This relay type receives all replies, comments, likes, and zaps to your posts. If you are not getting notifications or you don't see replies from your friends, it is likely because you don't have the right setup here. If you are getting too much spam in your replies, it's probably because your inbox relays are not protecting you enough. Paid relays can filter inbox spam out.
Inbox relays must allow anyone to write into them. It's the opposite of the outbox relay. They can limit who can download the posts to their paid subscribers without affecting anyone's experience.
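Under the hood, clients that follow the outbox model, Amethyst included, advertise both of these lists in a single kind 10002 relay list event (NIP-65): each relay gets an r tag marked write for outbox, read for inbox, or no marker for both. A rough sketch, with placeholder relay URLs:

```json
{
  "kind": 10002,
  "tags": [
    ["r", "wss://nos.lol"],
    ["r", "wss://myrelay.example.com", "write"],
    ["r", "wss://inbox.example.com", "read"]
  ],
  "content": ""
}
```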
This list should have a maximum of 3 relays as well. Again, keep it small. More than that will just make you spend more of your data plan downloading the same notifications from all these different servers. Out of the 3 relays, I recommend:
- 1 large public, international relay: nos.lol, nostr.mom, relay.damus.io, etc.
- 1 personal relay to store a copy of your notifications, invites, cashu tokens and zaps.
- 1 really fast relay located in your country: go to nostr.watch and find relays in your country
Terrible options include:
- nostr.wine should not be here.
- filter.nostr.wine should not be here.
- inbox.nostr.wine should not be here.
DM Inbox Relays
These are the relays used to receive DMs and private content. Others will use these relays to send DMs to you. If you don't have this set up, you will miss DMs. DM Inbox relays should accept any message from anyone, but only allow you to download them.
Generally speaking, you only need 3 for reliability. One of them should be a personal relay to make sure you have a copy of all your messages. The others can be open if you want push notifications or closed if you want full privacy.
Good options are:
- inbox.nostr.wine and auth.nostr1.com: anyone can send messages and only you can download. Not even our push notification server has access to them to notify you.
- a personal relay to make sure no one can censor you. Advanced settings on personal relays can also store your DMs privately. Talk to your relay operator for more details.
- a public relay if you want DM notifications from our servers.
Make sure to add at least one public relay if you want to see DM notifications.
Private Home Relays
Private Relays are for things no one should see, like your drafts, lists, app settings, bookmarks etc. Ideally, these relays are either local or require authentication before posting AND downloading each user's content. There are no dedicated relays for this category yet, so I would use a local relay like Citrine on Android and a personal relay on relay.tools.
Keep in mind that if you choose a local relay only, a client on the desktop might not be able to see the drafts from clients on mobile and vice versa.
Search relays:
This is the list of relays used for Amethyst's search and for user tagging with @. Tagging and searching will not work if there is nothing here. This option requires NIP-50 compliance from each relay. Hit the Default button to use all options available today:
- nostr.wine
- relay.nostr.band
- relay.noswhere.com
Local Relays:
This is your local storage. Everything will load faster if it comes from this relay. You should install Citrine on Android and write ws://localhost:4869 in this option.
General Relays:
This section contains the default relays used to download content from your follows. Notice how you can activate and deactivate the Home, Messages (old-style DMs), Chat (public chats), and Global options in each.
Keep 5-6 large relays on this list and activate them for as many categories (Home, Messages (old-style DMs), Chat, and Global) as possible.
Amethyst will provide additional recommendations to this list from your follows with information on which of your follows might need the additional relay in your list. Add them if you feel like you are missing their posts or if it is just taking too long to load them.
My setup
Here's what I use:
1. Go to relay.tools and create a relay for yourself.
2. Go to nostr.wine and pay for their subscription.
3. Go to inbox.nostr.wine and pay for their subscription.
4. Go to nostr.watch and find a good relay in your country.
5. Download Citrine to your phone.
Then, on your relay lists, put:
Public Home/Outbox Relays:
- nostr.wine
- nos.lol or an in-country relay
- .nostr1.com

Public Inbox Relays:
- nos.lol or an in-country relay
- .nostr1.com

DM Inbox Relays:
- inbox.nostr.wine
- .nostr1.com

Private Home Relays:
- ws://localhost:4869 (Citrine)
- .nostr1.com (if you want)

Search Relays:
- nostr.wine
- relay.nostr.band
- relay.noswhere.com

Local Relays:
- ws://localhost:4869 (Citrine)

General Relays:
- nos.lol
- relay.damus.io
- relay.primal.net
- nostr.mom
And a few of the recommended relays from Amethyst.
Final Considerations
Remember, relays can see what your Nostr client is requesting and downloading at all times. They can track what you see and see what you like. They can sell that information to the highest bidder, they can delete your content or content that a sponsor asked them to delete (like a negative review for instance) and they can censor you in any way they see fit. Before using any random free relay out there, make sure you trust its operator and you know its terms of service and privacy policies.
-
@ ec42c765:328c0600
2025-02-05 23:43:35test
-
@ ec42c765:328c0600
2025-02-05 23:38:12 What are custom emoji?
Custom emoji are a feature that lets you insert arbitrary original images into your text the way you would insert an emoji.
You can also use custom emoji in reactions (the feature similar to Twitter's "like").
Custom emoji support status (2025/02/06)
To use custom emoji you need a client that supports them.
* The table is only an example. There are many other clients.
If the client you are using does not support them, you can switch clients, wait until support is added, or send a request to the developer (or implement it yourself).
Supported clients
This guide uses nostter for the walkthrough.
Preparation
These are the steps needed before you can use custom emoji.
- Install a Nostr extension (NIP-07)
- Register the custom emoji you want to use to your list
Install a Nostr extension (NIP-07)
A Nostr extension is needed when you register the custom emoji you want to use.
The installation method differs depending on your environment (PC, iPhone, Android, etc.).
The device you install the Nostr extension on does not have to be the device you actually browse Nostr on (for example, register the list on a PC and browse Nostr on an iPhone).
See the following page for how to install a Nostr extension (NIP-07).
Try using a login extension (NIP-07) | Welcome to Nostr! ~ Let's get started with Nostr! ~
It is a bit of a hassle, but once installed it is useful in many situations on Nostr and makes the whole experience more comfortable.
Register the custom emoji you want to use to your list
You do this on the emojito site.
Log in with your Nostr extension from "Get started" in the top right.
As an example, let's add the following custom emoji set.
Fewer emoji than actually exist may be shown if stale data was fetched; in that case, press your browser's refresh button.
- Select Bookmark from Options on the right side
The set is now registered to the list of custom emoji you can use.
Using custom emoji
As an example, we will use them from nostter, a client that runs in the browser.
Log in to nostter with your Nostr extension, or by entering your secret key.
Using them in a post
- Press the post button to open the post window
- Press the face 😀 button to open the emoji window
- Press the * tab to show the list of custom emoji
- Select a custom emoji
- It is inserted as an alphabetic shortcode surrounded by : characters
If you publish the post in this state, it will be displayed as a custom emoji.
It will also be displayed as a custom emoji to other users on clients that support custom emoji.
On clients that do not support them, it will be displayed as the plain shortcode.
You can also type a shortcode directly; matching custom emoji will be suggested and you can pick one from the candidates.
Using them in reactions
- Press the face 😀 button on any post to open the emoji window
- Press the * tab to show the list of custom emoji
- Select a custom emoji
This sends a reaction with the custom emoji.
Finding custom emoji
You can find custom emoji on emojito, mentioned above.
For example, you can browse a specific user's page (emojito ロクヨウ) or use emojito Browse all to see emoji recently created or updated across all of Nostr.
The following link is a list of custom emoji made by Japanese-speaking users (2025/02/06)
* It may not be complete.
From the Open in emojito link on each emoji set you can jump to emojito and add the set to your list.
That's all.
Next: How to make custom emoji on Nostr
Yakihonne link: How to make custom emoji on Nostr
Nostr link nostr:naddr1qqxnzdesxuunzv358ycrgveeqgswcsk8v4qck0deepdtluag3a9rh0jh2d0wh0w9g53qg8a9x2xqvqqrqsqqqa28r5psx3
Specification
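The underlying spec is NIP-30 (custom emoji). As a rough illustration, a note that uses a custom emoji carries the :shortcode: in its content and declares the image in an emoji tag; the shortcode and URL below are made-up placeholders:

```json
{
  "kind": 1,
  "content": "Hello :party_blob:",
  "tags": [
    ["emoji", "party_blob", "https://example.com/party_blob.gif"]
  ]
}
```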
-
@ 97c70a44:ad98e322
2025-02-18 20:30:32 For the last couple of weeks, I've been dealing with the fallout of upgrading a web application to Svelte 5. Complaints about framework churn and migration annoyances aside, I've run into some interesting issues with the migration. So far, I haven't seen many other people register the same issues, so I thought it might be constructive for me to articulate them myself.

I'll try not to complain too much in this post, since I'm grateful for the many years of Svelte 3/4 I've enjoyed. But I don't think I'll be choosing Svelte for any new projects going forward. I hope my reflections here will be useful to others as well.

If you're interested in reproductions for the issues I mention here, you can find them below.

The Need for Speed

To start with, let me just quickly acknowledge what the Svelte team is trying to do. It seems like most of the substantial changes in version 5 are built around "deep reactivity", which allows for more granular reactivity, leading to better performance. Performance is good, and the Svelte team has always excelled at reconciling performance with DX.

In previous versions of Svelte, the main way this was achieved was with the Svelte compiler. There were many ancillary techniques involved in improving performance, but having a framework compile step gave the Svelte team a lot of leeway for rearranging things under the hood without making developers learn new concepts. This is what made Svelte so original in the beginning.

At the same time, it resulted in an even more opaque framework than usual, making it harder for developers to debug more complex issues. To make matters worse, the compiler had bugs, resulting in errors which could only be fixed by blindly refactoring the problem component. This happened to me personally at least half a dozen times, and is what ultimately pushed me to migrate to Svelte 5.

Nevertheless, I always felt it was an acceptable trade-off for speed and productivity. Sure, sometimes I had to delete my project and port it to a fresh repository every so often, but the framework was truly a pleasure to use.

Svelte is not Javascript

Svelte 5 doubled down on this tradeoff — which makes sense, because it's what sets the framework apart. The difference this time is that the abstraction/performance tradeoff did not stay in compiler land, but intruded into runtime in two important ways:

- The use of proxies to support deep reactivity
- Implicit component lifecycle state

Both of these changes improved performance and made the API for developers look slicker. What's not to like? Unfortunately, both of these features are classic examples of a leaky abstraction, and ultimately make things more complex for developers, not less.

Proxies are not objects

The use of proxies seems to have allowed the Svelte team to squeeze a little more performance out of the framework, without asking developers to do any extra work. Threading state through multiple levels of components without provoking unnecessary re-renders in frameworks like React is an infamously difficult chore.

Svelte's compiler avoided some of the pitfalls associated with virtual DOM diffing solutions, but evidently there was still enough of a performance gain to be had to justify the introduction of proxies. The Svelte team also seems to argue that their introduction represents an improvement in developer experience:

we... can maximise both efficiency and ergonomics.

Here's the problem: Svelte 5 looks simpler, but actually introduces more abstractions.

Using proxies to monitor array methods (for example) is appealing because it allows developers to forget all the goofy heuristics involved with making sure state was reactive and just `push` to the array. I can't count how many times I've written `value = value` to trigger reactivity in svelte 4.

In Svelte 4, developers had to understand how the Svelte compiler worked. The compiler, being a leaky abstraction, forced its users to know that assignment was how you signaled reactivity. In svelte 5, developers can just "forget" about the compiler!

Except they can't. All the introduction of new abstractions really accomplishes is the introduction of more complex heuristics that developers have to keep in their heads in order to get the compiler to act the way they want it to.

In fact, this is why after years of using Svelte, I found myself using Svelte stores more and more often, and reactive declarations less. The reason being that Svelte stores are just javascript. Calling `update` on a store is simple, and being able to reference them with a `$` was just a nice bonus — nothing to remember, and if I mess up the compiler yells at me.

Proxies introduce a similar problem to reactive declarations, which is that they look like one thing but act like another on the edges.

When I started using Svelte 5, everything worked great — until I tried to save a proxy to indexeddb, at which point I got a `DataCloneError`. To make matters worse, it's impossible to reliably tell if something is a `Proxy` without `try/catch`ing a structured clone, which is a performance-intensive operation.

This forces the developer to remember what is and what isn't a Proxy, calling `$state.snapshot` every time they pass a proxy to a context that doesn't expect or know about them. This obviates all the nice abstractions they gave us in the first place.

Components are not functions

The reason virtual DOM took off way back in 2013 was the ability to model your application as composed functions, each of which takes data and spits out HTML. Svelte retained this paradigm, using a compiler to sidestep the inefficiencies of virtual DOM and the complexities of lifecycle methods.

In Svelte 5, component lifecycles are back, react-hooks style.

In React, hooks are an abstraction that allows developers to avoid writing all the stateful code associated with component lifecycle methods. Modern React tutorials universally recommend using hooks instead, which rely on the framework invisibly synchronizing state with the render tree.

While this does result in cleaner code, it also requires developers to tread carefully to avoid breaking the assumptions surrounding hooks. Just try accessing state in a `setTimeout` and you'll see what I mean.

Svelte 4 had a few gotchas like this — for example, async code that interacts with a component's DOM elements has to keep track of whether the component is unmounted. This is pretty similar to the kind of pattern you'd see in old React components that relied on lifecycle methods.

It seems to me that Svelte 5 has gone the React 16 route by adding implicit state related to component lifecycles in order to coordinate state changes and effects.

For example, here is an excerpt from the documentation for $effect:

You can place $effect anywhere, not just at the top level of a component, as long as it is called during component initialization (or while a parent effect is active). It is then tied to the lifecycle of the component (or parent effect) and will therefore destroy itself when the component unmounts (or the parent effect is destroyed).

That's very complex! In order to use `$effect`... effectively (sorry), developers have to understand how state changes are tracked. The documentation for component lifecycles claims:

In Svelte 5, the component lifecycle consists of only two parts: Its creation and its destruction. Everything in-between — when certain state is updated — is not related to the component as a whole; only the parts that need to react to the state change are notified. This is because under the hood the smallest unit of change is actually not a component, it's the (render) effects that the component sets up upon component initialization. Consequently, there's no such thing as a "before update"/"after update" hook.

But then goes on to introduce the idea of `tick` in conjunction with `$effect.pre`. This section explains that "`tick` returns a promise that resolves once any pending state changes have been applied, or in the next microtask if there are none."

I'm sure there's some mental model that justifies this, but I don't think the claim that a component's lifecycle is only comprised of mount/unmount is really helpful when an addendum about state changes has to come right afterward.

The place where this really bit me, and which is the motivation for this blog post, is when state gets coupled to a component's lifecycle, even when the state is passed to another function that doesn't know anything about svelte.

In my application, I manage modal dialogs by storing the component I want to render alongside its props in a store and rendering it in the `layout.svelte` of my application. This store is also synchronized with browser history so that the back button works to close them. Sometimes, it's useful to pass a callback to one of these modals, binding caller-specific functionality to the child component:

```javascript
const {value} = $props()
const callback = () => console.log(value)
const openModal = () => pushModal(MyModal, {callback})
```

This is a fundamental pattern in javascript. Passing a callback is just one of those things you do.

Unfortunately, if the above code lives in a modal dialog itself, the caller component gets unmounted before the callback gets called. In Svelte 4, this worked fine, but in Svelte 5 `value` gets updated to `undefined` when the component gets unmounted. Here's a minimal reproduction.

This is only one example, but it seems clear to me that any prop that is closed over by a callback function that lives longer than its component will be undefined when I want to use it — with no reassignment existing in lexical scope. It seems that the reason this happens is that the props "belong" to the parent component, and are accessed via getters so that the parent can revoke access when it unmounts.

I don't know why this is necessary, but I assume there's a good engineering reason for it. The problem is, this just isn't how javascript works. Svelte is essentially attempting to re-invent garbage collection around component lifecycles, which breaks the assumption every javascript developer has that variables don't simply disappear without an explicit reassignment. It should be safe to pass stuff around and let the garbage collector do its job.

Conclusion

Easy things are nice, but as Rich Hickey says, easy things are not always simple. And like Joel Spolsky, I don't like being surprised. Svelte has always been full of magic, but with the latest release I think the cognitive overhead of reciting incantations has finally outweighed the power it confers.

My point in this post is not to dunk on the Svelte team. I know lots of people like Svelte 5 (and react hooks). The point I'm trying to make is that there is a tradeoff between doing things on the user's behalf, and giving the user agency. Good software is built on understanding, not cleverness.

I also think this is an important lesson to remember as AI-assisted coding becomes increasingly popular. Don't choose tools that alienate you from your work. Choose tools that leverage the wisdom you've already accumulated, and which help you to cultivate a deeper understanding of the discipline.

Thank you to Rich Harris and team for many years of pleasant development. I hope that (if you read this) it's not so full of inaccuracies as to be unhelpful as user feedback.
-
@ 21335073:a244b1ad
2025-03-18 20:47:50Warning: This piece contains a conversation about difficult topics. Please proceed with caution.
TL;DR please educate your children about online safety.
Julian Assange wrote in his 2012 book Cypherpunks, “This book is not a manifesto. There isn’t time for that. This book is a warning.” I read it a few times over the past summer. Those opening lines definitely stood out to me. I wish we had listened back then. He saw something about the internet that few had the ability to see. There are some individuals who are so close to a topic that when they speak, it’s difficult for others who aren’t steeped in it to visualize what they’re talking about. I didn’t read the book until more recently. If I had read it when it came out, it probably would have sounded like an unknown foreign language to me. Today it makes more sense.
This isn’t a manifesto. This isn’t a book. There is no time for that. It’s a warning and a possible solution from a desperate and determined survivor advocate who has been pulling and unraveling a thread for a few years. At times, I feel too close to this topic to make any sense trying to convey my pathway to my conclusions or thoughts to the general public. My hope is that if nothing else, I can convey my sense of urgency while writing this. This piece is a watchman’s warning.
When a child steps online, they are walking into a new world. A new reality. When you hand a child the internet, you are handing them possibilities—good, bad, and ugly. This is a conversation about lowering the potential of negative outcomes of stepping into that new world and how I came to these conclusions. I constantly compare the internet to the road. You wouldn’t let a young child run out into the road with no guidance or safety precautions. When you hand a child the internet without any type of guidance or safety measures, you are allowing them to play in rush hour, oncoming traffic. “Look left, look right for cars before crossing.” We almost all have been taught that as children. What are we taught as humans about safety before stepping into a completely different reality like the internet? Very little.
I could never really figure out why many folks in tech, privacy rights activists, and hackers seemed so cold to me while talking about online child sexual exploitation. I always figured that as a survivor advocate for those affected by these crimes, that specific, skilled group of individuals would be very welcoming and easy to talk to about such serious topics. I actually had one hacker laugh in my face when I brought it up while I was looking for answers. I thought maybe this individual thought I was accusing them of something I wasn’t, so I felt bad for asking. I was constantly extremely disappointed and would ask myself, “Why don’t they care? What could I say to make them care more? What could I say to make them understand the crisis and the level of suffering that happens as a result of the problem?”
I have been serving minor survivors of online child sexual exploitation for years. My first case serving a survivor of this specific crime was in 2018—a 13-year-old girl sexually exploited by a serial predator on Snapchat. That was my first glimpse into this side of the internet. I won a national award for serving the minor survivors of Twitter in 2023, but I had been working on that specific project for a few years. I was nominated by a lawyer representing two survivors in a legal battle against the platform. I’ve never really spoken about this before, but at the time it was a choice for me between fighting Snapchat or Twitter. I chose Twitter—or rather, Twitter chose me. I heard about the story of John Doe #1 and John Doe #2, and I was so unbelievably broken over it that I went to war for multiple years. I was and still am royally pissed about that case. As far as I was concerned, the John Doe #1 case proved that whatever was going on with corporate tech social media was so out of control that I didn’t have time to wait, so I got to work. It was reading the messages that John Doe #1 sent to Twitter begging them to remove his sexual exploitation that broke me. He was a child begging adults to do something. A passion for justice and protecting kids makes you do wild things. I was desperate to find answers about what happened and searched for solutions. In the end, the platform Twitter was purchased. During the acquisition, I just asked Mr. Musk nicely to prioritize the issue of detection and removal of child sexual exploitation without violating digital privacy rights or eroding end-to-end encryption. Elon thanked me multiple times during the acquisition, made some changes, and I was thanked by others on the survivors’ side as well.
I still feel that even with the progress made, I really just scratched the surface with Twitter, now X. I left that passion project when I did for a few reasons. I wanted to give new leadership time to tackle the issue. Elon Musk made big promises that I knew would take a while to fulfill, but mostly I had been watching global legislation transpire around the issue, and frankly, the governments are willing to go much further with X and the rest of corporate tech than I ever would. My work begging Twitter to make changes with easier reporting of content, detection, and removal of child sexual exploitation material—without violating privacy rights or eroding end-to-end encryption—and advocating for the minor survivors of the platform went as far as my principles would have allowed. I’m grateful for that experience. I was still left with a nagging question: “How did things get so bad with Twitter where the John Doe #1 and John Doe #2 case was able to happen in the first place?” I decided to keep looking for answers. I decided to keep pulling the thread.
I never worked for Twitter. This is often confusing for folks. I will say that despite being disappointed in the platform’s leadership at times, I loved Twitter. I saw and still see its value. I definitely love the survivors of the platform, but I also loved the platform. I was a champion of the platform’s ability to give folks from virtually around the globe an opportunity to speak and be heard.
I want to be clear that John Doe #1 really is my why. He is the inspiration. I am writing this because of him. He represents so many globally, and I’m still inspired by his bravery. One child’s voice begging adults to do something—I’m an adult, I heard him. I’d go to war a thousand more lifetimes for that young man, and I don’t even know his name. Fighting has been personally dark at times; I’m not even going to try to sugarcoat it, but it has been worth it.
The data surrounding the very real crime of online child sexual exploitation is available to the public online at any time for anyone to see. I’d encourage you to go look at the data for yourself. I believe in encouraging folks to check multiple sources so that you understand the full picture. If you are uncomfortable just searching around the internet for information about this topic, use the terms “CSAM,” “CSEM,” “SG-CSEM,” or “AI Generated CSAM.” The numbers don’t lie—it’s a nightmare that’s out of control. It’s a big business. The demand is high, and unfortunately, business is booming. Organizations collect the data, tech companies often post their data, governments report frequently, and the corporate press has covered a decent portion of the conversation, so I’m sure you can find a source that you trust.
Technology is changing rapidly, which is great for innovation as a whole but horrible for the crime of online child sexual exploitation. Those wishing to exploit the vulnerable seem to be adapting to each technological change with ease. The governments are so far behind with tackling these issues that as I’m typing this, it’s borderline irrelevant to even include them while speaking about the crime or potential solutions. Technology is changing too rapidly, and their old, broken systems can’t even dare to keep up. Think of it like the governments’ “War on Drugs.” Drugs won. In this case as well, the governments are not winning. The governments are talking about maybe having a meeting on potentially maybe having legislation around the crimes. The time to have that meeting would have been many years ago. I’m not advocating for governments to legislate our way out of this. I’m on the side of educating and innovating our way out of this.
I have been clear while advocating for the minor survivors of corporate tech platforms that I would not advocate for any solution to the crime that would violate digital privacy rights or erode end-to-end encryption. That has been a personal moral position that I was unwilling to budge on. This is an extremely unpopular and borderline nonexistent position in the anti-human trafficking movement and online child protection space. I’m often fearful that I’m wrong about this. I have always thought that a better pathway forward would have been to incentivize innovation for detection and removal of content. I had no previous exposure to privacy rights activists or Cypherpunks—actually, I came to that conclusion by listening to the voices of MENA region political dissidents and human rights activists. After developing relationships with human rights activists from around the globe, I realized how important privacy rights and encryption are for those who need it most globally. I was simply unwilling to give more power, control, and opportunities for mass surveillance to big abusers like governments wishing to enslave entire nations and untrustworthy corporate tech companies to potentially end some portion of abuses online. On top of all of it, it has been clear to me for years that all potential solutions outside of violating digital privacy rights to detect and remove child sexual exploitation online have not yet been explored aggressively. I’ve been disappointed that there hasn’t been more of a conversation around preventing the crime from happening in the first place.
What has been tried is mass surveillance. In China, they are currently under mass surveillance both online and offline, and their behaviors are attached to a social credit score. Unfortunately, even on state-run and controlled social media platforms, they still have child sexual exploitation and abuse imagery pop up along with other crimes and human rights violations. They also have a thriving black market online due to the oppression from the state. In other words, even an entire loss of freedom and privacy cannot end the sexual exploitation of children online. It’s been tried. There is no reason to repeat this method.
It took me an embarrassingly long time to figure out why I always felt a slight coldness from those in tech and privacy-minded individuals about the topic of child sexual exploitation online. I didn’t have any clue about the “Four Horsemen of the Infocalypse.” This is a term coined by Timothy C. May in 1988. I would have been a child myself when he first said it. I actually laughed at myself when I heard the phrase for the first time. I finally got it. The Cypherpunks weren’t wrong about that topic. They were so spot on that it is borderline uncomfortable. I was mad at first that they knew that early during the birth of the internet that this issue would arise and didn’t address it. Then I got over it because I realized that it wasn’t their job. Their job was—is—to write code. Their job wasn’t to be involved and loving parents or survivor advocates. Their job wasn’t to educate children on internet safety or raise awareness; their job was to write code.
They knew that child sexual abuse material would be shared on the internet. They said what would happen—not in a gleeful way, but a prediction. Then it happened.
I equate it now to a concrete company laying down a road. As you’re pouring the concrete, you can say to yourself, “A terrorist might travel down this road to go kill many, and on the flip side, a beautiful child can be born in an ambulance on this road.” Who or what travels down the road is not their responsibility—they are just supposed to lay the concrete. I’d never go to a concrete pourer and ask them to solve terrorism that travels down roads. Under the current system, law enforcement should stop terrorists before they even make it to the road. The solution to this specific problem is not to treat everyone on the road like a terrorist or to not build the road.
So I understand the perceived coldness from those in tech. Not only was it not their job, but bringing up the topic was seen as the equivalent of asking a free person if they wanted to discuss one of the four topics—child abusers, terrorists, drug dealers, intellectual property pirates, etc.—that would usher in digital authoritarianism for all who are online globally.
Privacy rights advocates and groups have put up a good fight. They stood by their principles. Unfortunately, when it comes to corporate tech, I believe that the issue of privacy is almost a complete lost cause at this point. It’s still worth pushing back, but ultimately, it is a losing battle—a ticking time bomb.
I do think that corporate tech providers could have slowed down the inevitable loss of privacy at the hands of the state by prioritizing the detection and removal of CSAM when they all started online. I believe it would have bought some time, fewer would have been traumatized by that specific crime, and I do believe that it could have slowed down the demand for content. If I think too much about that, I’ll go insane, so I try to push the “if maybes” aside, but never knowing if it could have been handled differently will forever haunt me. At night when it’s quiet, I wonder what I would have done differently if given the opportunity. I’ll probably never know how much corporate tech knew and ignored in the hopes that it would go away while the problem continued to get worse. They had different priorities. The most voiceless and vulnerable exploited on corporate tech never had much of a voice, so corporate tech providers didn’t receive very much pushback.
Now I’m about to say something really wild, and you can call me whatever you want to call me, but I’m going to say what I believe to be true. I believe that the governments are either so incompetent that they allowed the proliferation of CSAM online, or they knowingly allowed the problem to fester long enough to have an excuse to violate privacy rights and erode end-to-end encryption. The US government could have seized the corporate tech providers over CSAM, but I believe that they were so useful as a propaganda arm for the regimes that they allowed them to continue virtually unscathed.
That season is done now, and the governments are making the issue a priority. It will come at a high cost. Privacy on corporate tech providers is virtually done as I’m typing this. It feels like a death rattle. I’m not particularly sure that we had much digital privacy to begin with, but the illusion of a veil of privacy feels gone.
To make matters slightly more complex, it would be hard to convince me that once AI really gets going, digital privacy will exist at all.
I believe that there should be a conversation shift to preserving freedoms and human rights in a post-privacy society.
I don’t want to get locked up because AI predicted a nasty post online from me about the government. I’m not a doomer about AI—I’m just going to roll with it personally. I’m looking forward to the positive changes that will be brought forth by AI. I see it as inevitable. A bit of privacy was helpful while it lasted. Please keep fighting to preserve what is left of privacy either way because I could be wrong about all of this.
On the topic of AI, the addition of AI to the horrific crime of child sexual abuse material and child sexual exploitation in multiple ways so far has been devastating. It’s currently out of control. The genie is out of the bottle. I am hopeful that innovation will get us humans out of this, but I’m not sure how or how long it will take. We must be extremely cautious around AI legislation. It should not be illegal to innovate even if some bad comes with the good. I don’t trust that the governments are equipped to decide the best pathway forward for AI. Source: the entire history of the government.
I have been personally negatively impacted by AI-generated content. Every few days, I get another alert that I’m featured again in what’s called “deep fake pornography” without my consent. I’m not happy about it, but what pains me the most is the thought that for a period of time down the road, many globally will experience what myself and others are experiencing now by being digitally sexually abused in this way. If you have ever had your picture taken and posted online, you are also at risk of being exploited in this way. Your child’s image can be used as well, unfortunately, and this is just the beginning of this particular nightmare. It will move to more realistic interpretations of sexual behaviors as technology improves. I have no brave words of wisdom about how to deal with that emotionally. I do have hope that innovation will save the day around this specific issue. I’m nervous that everyone online will have to ID verify due to this issue. I see that as one possible outcome that could help to prevent one problem but inadvertently cause more problems, especially for those living under authoritarian regimes or anyone who needs to remain anonymous online. A zero-knowledge proof (ZKP) would probably be the best solution to these issues. There are some survivors of violence and/or sexual trauma who need to remain anonymous online for various reasons. There are survivor stories available online of those who have been abused in this way. I’d encourage you seek out and listen to their stories.
There have been periods of time recently where I hesitate to say anything at all because more than likely AI will cover most of my concerns about education, awareness, prevention, detection, and removal of child sexual exploitation online, etc.
Unfortunately, some of the most pressing issues we’ve seen online over the last few years come in the form of “sextortion.” Self-generated child sexual exploitation (SG-CSEM) numbers are continuing to be terrifying. I’d strongly encourage that you look into sextortion data. AI + sextortion is also a huge concern. The perpetrators are using the non-sexually explicit images of children and putting their likeness on AI-generated child sexual exploitation content and extorting money, more imagery, or both from minors online. It’s like a million nightmares wrapped into one. The wild part is that these issues will only get more pervasive because technology is harnessed to perpetuate horror at a scale unimaginable to a human mind.
Even if you banned phones and the internet or tried to prevent children from accessing the internet, it wouldn’t solve it. Child sexual exploitation will still be with us until as a society we start to prevent the crime before it happens. That is the only human way out right now.
There is no reset button on the internet, but if I could go back, I’d tell survivor advocates to heed the warnings of the early internet builders and to start education and awareness campaigns designed to prevent as much online child sexual exploitation as possible. The internet and technology moved quickly, and I don’t believe that society ever really caught up. We live in a world where a child can be groomed by a predator in their own home while sitting on a couch next to their parents watching TV. We weren’t ready as a species to tackle the fast-paced algorithms and dangers online. It happened too quickly for parents to catch up. How can you parent for the ever-changing digital world unless you are constantly aware of the dangers?
I don’t think that the internet is inherently bad. I believe that it can be a powerful tool for freedom and resistance. I’ve spoken a lot about the bad online, but there is beauty as well. We often discuss how victims and survivors are abused online; we rarely discuss the fact that countless survivors around the globe have been able to share their experiences, strength, hope, as well as provide resources to the vulnerable. I do question if giving any government or tech company access to censorship, surveillance, etc., online in the name of serving survivors might not actually impact a portion of survivors negatively. There are a fair amount of survivors with powerful abusers protected by governments and the corporate press. If a survivor cannot speak to the press about their abuse, the only place they can go is online, directly or indirectly through an independent journalist who also risks being censored. This scenario isn’t hard to imagine—it already happened in China. During #MeToo, a survivor in China wanted to post their story. The government censored the post, so the survivor put their story on the blockchain. I’m excited that the survivor was creative and brave, but it’s terrifying to think that we live in a world where that situation is a necessity.
I believe that the future for many survivors sharing their stories globally will be on completely censorship-resistant and decentralized protocols. This thought in particular gives me hope. When we listen to the experiences of a diverse group of survivors, we can start to understand potential solutions to preventing the crimes from happening in the first place.
My heart is broken over the gut-wrenching stories of survivors sexually exploited online. Every time I hear the story of a survivor, I do think to myself quietly, “What could have prevented this from happening in the first place?” My heart is with survivors.
My head, on the other hand, is full of the understanding that the internet should remain free. The free flow of information should not be stopped. My mind is with the innocent citizens around the globe that deserve freedom both online and offline.
The problem is that governments don’t only want to censor illegal content that violates human rights—they create legislation that is so broad that it can impact speech and privacy of all. “Don’t you care about the kids?” Yes, I do. I do so much that I’m invested in finding solutions. I also care about all citizens around the globe that deserve an opportunity to live free from a mass surveillance society. If terrorism happens online, I should not be punished by losing my freedom. If drugs are sold online, I should not be punished. I’m not an abuser, I’m not a terrorist, and I don’t engage in illegal behaviors. I refuse to lose freedom because of others’ bad behaviors online.
I want to be clear that on a long enough timeline, the governments will decide that they can be better parents/caregivers than you can if something isn’t done to stop minors from being sexually exploited online. The price will be a complete loss of anonymity, privacy, free speech, and freedom of religion online. I find it rather insulting that governments think they’re better equipped to raise children than parents and caretakers.
So we can’t go backwards—all that we can do is go forward. Those who want to have freedom will find technology to facilitate their liberation. This will lead many over time to decentralized and open protocols. So as far as I’m concerned, this does solve a few of my worries—those who need, want, and deserve to speak freely online will have the opportunity in most countries—but what about online child sexual exploitation?
When I popped up around the decentralized space, I was met with the fear of censorship. I’m not here to censor you. I don’t write code. I couldn’t censor anyone or any piece of content even if I wanted to across the internet, no matter how depraved. I don’t have the skills to do that.
I’m here to start a conversation. Freedom comes at a cost. You must always fight for and protect your freedom. I can’t speak about protecting yourself from all of the Four Horsemen because I simply don’t know the topics well enough, but I can speak about this one topic.
If there was a shortcut to ending online child sexual exploitation, I would have found it by now. There isn’t one right now. I believe that education is the only pathway forward to preventing the crime of online child sexual exploitation for future generations.
I propose a yearly education course for every child of all school ages, taught as a standard part of the curriculum. Ideally, parents/caregivers would be involved in the education/learning process.
Course:
- The creation of the internet and computers
- The fight for cryptography
- The tech supply chain from the ground up (example: human rights violations in the supply chain)
- Corporate tech
- Freedom tech
- Data privacy
- Digital privacy rights
- AI (history-current)
- Online safety (predators, scams, catfishing, extortion)
- Bitcoin
- Laws
- How to deal with online hate and harassment
- Information on who to contact if you are being abused online or offline
- Algorithms
- How to seek out the truth about news, etc., online
The parents/caregivers, homeschoolers, unschoolers, and those working to create decentralized parallel societies have been an inspiration while writing this, but my hope is that all children would learn this course, even in government ran schools. Ideally, parents would teach this to their own children.
The decentralized space doesn’t want child sexual exploitation to thrive. Here’s the deal: there has to be a strong prevention effort in order to protect the next generation. The internet isn’t going anywhere, predators aren’t going anywhere, and I’m not down to let anyone have the opportunity to prove that there is a need for more government. I don’t believe that the government should act as parents. The governments have had a chance to attempt to stop online child sexual exploitation, and they didn’t do it. Can we try a different pathway forward?
I’d like to put myself out of a job. I don’t want to ever hear another story like John Doe #1 ever again. This will require work. I’ve often called online child sexual exploitation the lynchpin for the internet. It’s time to arm generations of children with knowledge and tools. I can’t do this alone.
Individuals have fought so that I could have freedom online. I want to fight to protect it. I don’t want child predators to give the government any opportunity to take away freedom. Decentralized spaces are as close to a reset as we’ll get with the opportunity to do it right from the start. Start the youth off correctly by preventing potential hazards to the best of your ability.
The good news is anyone can work on this! I’d encourage you to take it and run with it. I added the additional education about the history of the internet to make the course more educational and fun. Instead of cleaning up generations of destroyed lives due to online sexual exploitation, perhaps this could inspire generations of those who will build our futures. Perhaps if the youth is armed with knowledge, they can create more tools to prevent the crime.
This one solution that I’m suggesting can be done on an individual level or on a larger scale. It should be adjusted depending on age, learning style, etc. It should be fun and playful.
This solution does not address abuse in the home or some of the root causes of offline child sexual exploitation. My hope is that it could lead to some survivors experiencing abuse in the home an opportunity to disclose with a trusted adult. The purpose for this solution is to prevent the crime of online child sexual exploitation before it occurs and to arm the youth with the tools to contact safe adults if and when it happens.
In closing, I went to hell a few times so that you didn’t have to. I spoke to the mothers of survivors of minors sexually exploited online—their tears could fill rivers. I’ve spoken with political dissidents who yearned to be free from authoritarian surveillance states. The only balance that I’ve found is freedom online for citizens around the globe and prevention from the dangers of that for the youth. Don’t slow down innovation and freedom. Educate, prepare, adapt, and look for solutions.
I’m not perfect and I’m sure that there are errors in this piece. I hope that you find them and it starts a conversation.
-
@ ec42c765:328c0600
2025-02-05 23:16:35 test
nostr:nevent1qqst3uqlls4yr9vys4dza2sgjle3ly37trck7jgdmtr23uuz52usjrqqqnjgr
nostr:nevent1qqsdvchy5d27zt3z05rr3q6vvmzgslslxwu0p4dfkvxwhmvxldn9djguvagp2
test
tes
-
@ 9e69e420:d12360c2
2025-02-17 17:12:01President Trump has intensified immigration enforcement, likening it to a wartime effort. Despite pouring resources into the U.S. Immigration and Customs Enforcement (ICE), arrest numbers are declining and falling short of goals. ICE fell from about 800 daily arrests in late January to fewer than 600 in early February.
Critics argue the administration is putting on a show of enforcement while accomplishing little, even as Trump seeks billions more in funding to support his deportation agenda. Increased involvement from various federal agencies is intended to assist ICE, but many of those agencies lack specific immigration training.
Challenges persist, as fewer immigrants are available for quick deportation due to a decline in illegal crossings. Local sheriffs are also pressured by rising demands to accommodate immigrants, which may strain resources further.
-
@ fd208ee8:0fd927c1
2025-02-15 07:02:08 E-cash tokens are coupons or tokens for Bitcoin, or Bitcoin debt notes that the mint issues. A piece of e-cash states, essentially, "IOU 2900 sats".
They're redeemable for Bitcoin on Lightning (hard money), and therefore can be used as cash (softer money), so long as the mint has a good reputation. That means that they're less fungible than Lightning because the e-cash from one mint can be more or less valuable than the e-cash from another. If a mint is buggy, offline, or disappears, then the e-cash is unreedemable.
It also means that e-cash is more anonymous than Lightning, and that the sender and receiver's wallets don't need to be online, to transact. Nutzaps now add the possibility of parking transactions one level farther out, on a relay. The same relays that cannot keep npub profiles and follow lists consistent will now do monetary transactions.
What we then have is:
* a transaction on a relay that triggers
* a transaction on a mint that triggers
* a transaction on Lightning that triggers
* a transaction on Bitcoin.
This means that every relay that stores the nuts is part of a wildcat banking system. Which is fine, but relay operators should consider whether they wish to carry the associated risks and liabilities. They should also implement the appropriate features in their relay, such as expiration tags (nuts rot after 2 weeks), and make sure that only expired nuts are deleted.
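The expiration mechanism referenced here is NIP-40: an event carries an expiration tag holding a unix timestamp, and relays that support the NIP should stop serving the event, and may delete it, after that time. A rough sketch of the tag; the kind and values are placeholders rather than the actual nutzap event kinds:

```json
{
  "kind": 1,
  "content": "this event can be dropped after the timestamp below",
  "tags": [
    ["expiration", "1740000000"]
  ]
}
```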
There will be plenty of specialized relays for this, so don't feel pressured to join in, and research the topic carefully, for yourself.
https://github.com/nostr-protocol/nips/blob/master/60.md
-
@ 59cb0748:9602464b
2025-01-01 06:15:09 Hello to everyone on Nostr, whether I've interacted with you before or not!
This is タコ頭大吉 (Tako Atama Daikichi)!
This is my first post using NIP-23.
This time it's an introduction to the three kinds of Bitaxe cases I have designed over the past few months!!
I bought a Bitaxe but couldn't find many cases whose look and specs matched my taste, so these are cases I spent a fair amount of time thinking through, building several prototypes along the way.
For these three series, I insisted on designs that print as a single part, taking into account the precision of FDM 3D printers, durability, and how easy the print is to work with afterwards.
If you print them at a sufficiently high infill rate, they should be reasonably sturdy.
They are also designed, as far as possible, to balance heat dissipation and protection.
I will briefly introduce each model below; if you are interested, please read the README included in each repository, print your own, and send me feedback.
So, here is a quick introduction to each model.
AirLiftFrame
This is the first model I made! Its weak point is that it is a little large, but by basing it on a thick frame and deliberately not enclosing the area around the board, it aims to provide protection without getting in the way of heat dissipation.
TwinAirLiftFrame
After buying another Bitaxe, I wanted a cool way to run multiple units, so I started this case from the very simple idea of just joining two AirLiftFrames together. However, I quickly realized that simply putting them side by side not only makes the exhaust heat interfere, it also blocks access to the DC jack and the USB port. So, taking advantage of the fact that the display can be flipped upside down in the WebUI, I solved both problems by mounting the two units upside down relative to each other!
VoronoiShell
This project started from shrinking the AirLiftFrame series. I wanted to reduce not just the width and height but also the thickness. However, simply making it thinner raised two concerns: your hand could touch the hot rear parts while holding it, and the rear parts could break if the unit is dropped or bumped. So I needed to design a protective grill for the back (which I originally did not want to add). At first it was polygonal, but it looked terrible; while researching I came across the organic Voronoi pattern and adopted it immediately. The result is such a distinctive, stylish design that it is almost a shame the pattern is hidden once the Bitaxe is mounted.
I would eventually like to cover customization, insert nuts, and how to choose add-on fans, but since part of the point this time is getting used to NIP-23, I will keep this to an introduction! I also hope to design cases for other related open-source hardware projects!
Thank you for your continued support of Tako Atama.
-
@ daa41bed:88f54153
2025-02-09 16:50:04There has been a good bit of discussion on Nostr over the past few days about the merits of zaps as a method of engaging with notes, so after writing a rather lengthy article on the pros of a strategic Bitcoin reserve, I wanted to take some time to chime in on the much more fun topic of digital engagement.
Let's begin by defining a couple of things:
Nostr is a decentralized, censorship-resistant protocol whose current biggest use case is social media (think Twitter/X). Instead of relying on company servers, it relies on relays that anyone can spin up, so users own their own content. Its use cases are much bigger, though; as an example, this article is hosted on my own Nostr relay.
A zap is a tip or donation denominated in sats (small units of Bitcoin) sent from one user to another. This is generally done directly over the Lightning Network but is increasingly done using Cashu tokens. For the sake of this discussion, how you transmit/receive zaps will be irrelevant, so don't worry if you don't know what Lightning or Cashu are.
If we look at how users engage with posts and follows/followers on platforms like Twitter, Facebook, etc., it becomes evident that traditional social media thrives on engagement farming. The more outrageous a post, the more likely it will get a reaction. We see a version of this on more visual social platforms like YouTube and TikTok that use carefully crafted thumbnail images to grab the user's attention to click the video. If you'd like to dive deep into the psychology and science behind social media engagement, let me know, and I'd be happy to follow up with another article.
In this user engagement model, a user is given the option to comment or like the original post, or share it among their followers to increase its signal. They receive no value from engaging with the content aside from the dopamine hit of the original experience or having their comment liked back by whatever influencer they provide value to. Ad revenue flows to the content creator. Clout flows to the content creator. Sales revenue from merch and content placement flows to the content creator. We call this a linear economy -- the idea that resources get created, used up, then thrown away. Users create content and farm as much engagement as possible, then the content is forgotten within a few hours as they move on to the next piece of content to be farmed.
What if there were a simple way to give value back to those who engage with your content? By implementing some value-for-value model -- a circular economy. Enter zaps.
Unlike traditional social media platforms, Nostr does not actively use algorithms to determine what content is popular, nor does it push content created for active user engagement to the top of a user's timeline. Yes, there are "trending" and "most zapped" timelines that users can choose to use as their default, but these use relatively straightforward engagement metrics to rank posts for these timelines.
That is not to say that we may not see clients actively seeking to refine timeline algorithms for specific metrics. Still, the beauty of having an open protocol with media that is controlled solely by its users is that users who begin to see their timeline gamed towards specific algorithms can choose to move to another client, and for those who are more tech-savvy, they can opt to run their own relays or create their own clients with personalized algorithms and web of trust scoring systems.
Zaps make it possible to create a new type of social media economy in which creators can earn for creating content and users can earn by actively engaging with it. Liking and reposting content is relatively frictionless and costs nothing but a simple button tap. Zaps provide active engagement because they signal to your followers and those of the content creator that this post has genuine value, quite literally in the form of money—sats.
I have seen some comments on Nostr claiming that removing likes and reactions is for wealthy people who can afford to send zaps and that the majority of people in the US and around the world do not have the time or money to zap because they have better things to spend their money like feeding their families and paying their bills. While at face value, these may seem like valid arguments, they, unfortunately, represent the brainwashed, defeatist attitude that our current economic (and, by extension, social media) systems aim to instill in all of us to continue extracting value from our lives.
Imagine now, if those people dedicating their own time (time = money) to mine pity points on social media would instead spend that time with genuine value creation by posting content that is meaningful to cultural discussions. Imagine if, instead of complaining that their posts get no zaps and going on a tirade about how much of a victim they are, they would empower themselves to take control of their content and give value back to the world; where would that leave us? How much value could be created on a nascent platform such as Nostr, and how quickly could it overtake other platforms?
Other users argue about user experience and that additional friction (i.e., zaps) leads to lower engagement, as proven by decades of studies on user interaction. While the added friction may turn some users away, does that necessarily provide less value? I argue quite the opposite. You haven't made a few sats from zaps with your content? Can't afford to send some sats to a wallet for zapping? How about using the most excellent available resource and spending 10 seconds of your time to leave a comment? Likes and reactions are valueless transactions. Social media's real value derives from providing monetary compensation and actively engaging in a conversation with posts you find interesting or thought-provoking. Remember when humans thrived on conversation and discussion for entertainment instead of simply being an onlooker of someone else's life?
If you've made it this far, my only request is this: try only zapping and commenting as a method of engagement for two weeks. Sure, you may end up liking a post here and there, but be more mindful of how you interact with the world and break yourself from blind instinct. You'll thank me later.
-
@ 97c70a44:ad98e322
2025-02-03 22:25:35Last week, in a bid to understand the LLM hype, I decided to write a trivial nostr-related program in rust via a combination of codebuff (yes, that is a referral link, pls click), aider, and goose.
The result of the experiment was inconclusive, but as a side effect it produced a great case study in converting a NINO into a Real Nostr App.
Introducing Roz
Roz, a friendly notary for nostr events.
To use it, simply publish an event to `relay.damus.io` or `nos.lol`, and roz will make note of it. To find out when roz first saw a given event, just ask:

```bash
curl https://roz.coracle.social/notary/cb429632ae22557d677a11149b2d0ccd72a1cf66ac55da30e3534ed1a492765d
```
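A response might look something like the sketch below; only the `seen` key is described here, and the value format shown is a guess for illustration:

```json
{
  "seen": "2025-02-03T22:25:35Z"
}
```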
This will return a JSON payload with a `seen` key indicating when roz first saw the event. How (and whether) you use this is up to you!
De-NINO-fying roz
Roz is just a proof of concept, so don't rely on it being there forever. And anyway, roz is a NINO, since it provides value to nostr (potentially), but doesn't really do things in a nostr-native way. It also hard-codes its relays, and certainly doesn't use the outbox model or sign events. But that's ok, it's a proof of concept.
A much better way to do this would be to modify roz to properly leverage nostr's capabilities, namely:
- Use nostr-native data formats (i.e., draft a new kind)
- Use relays instead of proprietary servers for data storage
- Leverage nostr identities and signatures to decouple trust from storage, and allow trusted attestations to be discovered
Luckily, this is not hard at all. In fact, I've gone ahead and drafted a PR to the NIPs repo that adds timestamp annotations to NIP 03, as an alternative to OpenTimestamps. The trade-off is that while user attestations are far less reliable than OTS proofs, they're much easier to verify, and can reach a pretty high level of reliability by combining multiple attestation sources with other forms of reputation.
In other words, instead of going nuclear and embedding your attestations into The Time Chain, you can simply ask 5-10 relays or people you trust for their attestations for a given event.
This PR isn't terribly important on its own, but it does remove one small barrier between us and trusted key rotation events (or other types of event that require establishing a verifiable chain of causality).
-
@ ec42c765:328c0600
2024-12-22 19:16:31This article is written for people who already know the material in the previous one (in particular, how to set up a Nostr extension (NIP-07)).
Steps
- Prepare the image to register
- Upload the image to the web
- Register it in an emoji set
1. Prepare the image to register
Prepare it in one of the following ways.
- Create it yourself with image editing software
- Use an emoji creation site (絵文字ジェネレーター, MEGAMOJI, etc.)
- Use free images (いらすとや, etc.)
Reducing file size
Many Nostr clients display images as-is, so using a large image file puts a burden on people browsing over mobile connections and the like.
To keep the file size down, I recommend adjusting the image dimensions and file format.
Here are my recommendations:
- Size: square, 128×128 pixels; or rectangular, any width × 128 pixels high
- File format: webp (recommended conversion site: toimg)
- For single-color or very simple images: png (converting these to webp can actually make them larger)
Other tips
The following are also recommended:
- Images with a transparent background
- Colors that are easy to see in both dark mode and light mode
2. Upload the image to the web
If you're not sure, uploading from emojito is perfectly fine.
If you already have a place you usually upload images to, that works too.
If you care about the details, choose your upload destination accordingly. For custom emoji images that have already been posted, the options differ in what you can do later:
- Can be neither deleted nor replaced → emojito, etc.
- Can be deleted but not replaced → Gyazo, nostrcheck.me, etc.
- Can be both deleted and replaced → GitHub, self-hosting, etc.
This refers to whether you can change the image of a custom emoji after it has already been posted to Nostr.
With any of these methods you can still change which image a custom emoji uses from now on:
you do that by registering a different image under the same shortcode in the same custom emoji set.
3. Register it in an emoji set
Register it from emojito.
From the icon at the top right → + New emoji set, you can create a new emoji set.
① Enter a name for the emoji set
Custom emoji are basically registered as a group, by creating a custom emoji set.
You can also add more emoji to an existing set later.
② Upload the image or enter an image URL
When uploading an image from emojito, the upload seems to fail if the file name contains multibyte characters such as Japanese.
In that case, rename the file to something using plain alphanumeric characters.
③ Enter a shortcode for the emoji
The shortcode is sometimes used to call up the emoji.
It's fine if it overlaps with other custom emoji, but several candidates may then appear when selecting, which can get in the way.
A shortcode that is unlikely to collide with others and isn't too long is probably best.
Only half-width alphanumeric characters and underscores can be used in shortcodes.
④ Add
Pressing Add does not complete the creation yet.
You can register multiple emoji at once.
Finally, press Save at the top right to finish creating the set.
The screen will change; select Bookmark from Options on the right to make the custom emoji set available for your own use.
To edit an existing emoji set, select Edit from Options.
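For reference, what emojito publishes when you save a set is (roughly) a NIP-51 emoji set event of kind 30030, and bookmarking adds a pointer to it to your own emoji list (kind 10030). A simplified sketch with placeholder values:

```json
{
  "kind": 30030,
  "content": "",
  "tags": [
    ["d", "my_emoji_set"],
    ["title", "My Emoji Set"],
    ["emoji", "my_emoji", "https://example.com/my_emoji.webp"]
  ]
}
```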
That's all.
Specifications
-
@ 0fa80bd3:ea7325de
2025-01-29 05:55:02The land that belongs to the indigenous peoples of Russia has been seized by a gang of killers who have unleashed a war of extermination. They wipe out anyone who refuses to conform to their rules. Those who disagree and stay behind are tortured and killed in prisons and labor camps. Those who flee lose their homeland, dissolve into foreign cultures, and fade away. And those who stand up to protect their people are attacked by the misled and deceived. The deceived die for the unchecked greed of a single dictator—thousands from both sides, people who just wanted to live, raise their kids, and build a future.
Now, they are forced to make an impossible choice: abandon their homeland or die. Some perish on the battlefield, others lose themselves in exile, stripped of their identity, scattered in a world that isn’t theirs.
There’s been endless debate about how to fix this, how to clear the field of the weeds that choke out every new sprout, every attempt at change. But the real problem? We can’t play by their rules. We can’t speak their language or use their weapons. We stand for humanity, and no matter how righteous our cause, we will not multiply suffering. Victory doesn’t come from matching the enemy—it comes from staying ahead, from using tools they haven’t mastered yet. That’s how wars are won.
Our only resource is the will of the people to rewrite the order of things. Historian Timothy Snyder once said that a nation cannot exist without a city. A city is where the most active part of a nation thrives. But the cities are occupied. The streets are watched. Gatherings are impossible. They control the money. They control the mail. They control the media. And any dissent is crushed before it can take root.
So I started asking myself: How do we stop this fragmentation? How do we create a space where people can rebuild their connections when they’re ready? How do we build a self-sustaining network, where everyone contributes and benefits proportionally, while keeping their freedom to leave intact? And more importantly—how do we make it spread, even in occupied territory?
In 2009, something historic happened: the internet got its own money. Thanks to Satoshi Nakamoto, the world took a massive leap forward. Bitcoin and decentralized ledgers shattered the idea that money must be controlled by the state. Now, to move or store value, all you need is an address and a key. A tiny string of text, easy to carry, impossible to seize.
That was the year money broke free. The state lost its grip. Its biggest weapon—physical currency—became irrelevant. Money became purely digital.
The internet was already a sanctuary for information, a place where people could connect and organize. But with Bitcoin, it evolved. Now, value itself could flow freely, beyond the reach of authorities.
Think about it: when seedlings are grown in controlled environments before being planted outside, they get stronger, survive longer, and bear fruit faster. That’s how we handle crops in harsh climates—nurture them until they’re ready for the wild.
Now, picture the internet as that controlled environment for ideas. Bitcoin? It’s the fertile soil that lets them grow. A testing ground for new models of interaction, where concepts can take root before they move into the real world. If nation-states are a battlefield, locked in a brutal war for territory, the internet is boundless. It can absorb any number of ideas, any number of people, and it doesn’t run out of space.
But for this ecosystem to thrive, people need safe ways to communicate, to share ideas, to build something real—without surveillance, without censorship, without the constant fear of being erased.
This is where Nostr comes in.
Nostr—"Notes and Other Stuff Transmitted by Relays"—is more than just a messaging protocol. It’s a new kind of city. One that no dictator can seize, no corporation can own, no government can shut down.
It’s built on decentralization, encryption, and individual control. Messages don’t pass through central servers—they are relayed through independent nodes, and users choose which ones to trust. There’s no master switch to shut it all down. Every person owns their identity, their data, their connections. And no one—no state, no tech giant, no algorithm—can silence them.
In a world where cities fall and governments fail, Nostr is a city that cannot be occupied. A place for ideas, for networks, for freedom. A city that grows stronger the more people build within it.
-
@ 21335073:a244b1ad
2025-03-18 14:43:08Warning: This piece contains a conversation about difficult topics. Please proceed with caution.
TL;DR please educate your children about online safety.
Julian Assange wrote in his 2012 book Cypherpunks, “This book is not a manifesto. There isn’t time for that. This book is a warning.” I read it a few times over the past summer. Those opening lines definitely stood out to me. I wish we had listened back then. He saw something about the internet that few had the ability to see. There are some individuals who are so close to a topic that when they speak, it’s difficult for others who aren’t steeped in it to visualize what they’re talking about. I didn’t read the book until more recently. If I had read it when it came out, it probably would have sounded like an unknown foreign language to me. Today it makes more sense.
This isn’t a manifesto. This isn’t a book. There is no time for that. It’s a warning and a possible solution from a desperate and determined survivor advocate who has been pulling and unraveling a thread for a few years. At times, I feel too close to this topic to make any sense trying to convey my pathway to my conclusions or thoughts to the general public. My hope is that if nothing else, I can convey my sense of urgency while writing this. This piece is a watchman’s warning.
When a child steps online, they are walking into a new world. A new reality. When you hand a child the internet, you are handing them possibilities—good, bad, and ugly. This is a conversation about lowering the potential of negative outcomes of stepping into that new world and how I came to these conclusions. I constantly compare the internet to the road. You wouldn’t let a young child run out into the road with no guidance or safety precautions. When you hand a child the internet without any type of guidance or safety measures, you are allowing them to play in rush hour, oncoming traffic. “Look left, look right for cars before crossing.” We almost all have been taught that as children. What are we taught as humans about safety before stepping into a completely different reality like the internet? Very little.
I could never really figure out why many folks in tech, privacy rights activists, and hackers seemed so cold to me while talking about online child sexual exploitation. I always figured that as a survivor advocate for those affected by these crimes, that specific, skilled group of individuals would be very welcoming and easy to talk to about such serious topics. I actually had one hacker laugh in my face when I brought it up while I was looking for answers. I thought maybe this individual thought I was accusing them of something I wasn’t, so I felt bad for asking. I was constantly extremely disappointed and would ask myself, “Why don’t they care? What could I say to make them care more? What could I say to make them understand the crisis and the level of suffering that happens as a result of the problem?”
I have been serving minor survivors of online child sexual exploitation for years. My first case serving a survivor of this specific crime was in 2018—a 13-year-old girl sexually exploited by a serial predator on Snapchat. That was my first glimpse into this side of the internet. I won a national award for serving the minor survivors of Twitter in 2023, but I had been working on that specific project for a few years. I was nominated by a lawyer representing two survivors in a legal battle against the platform. I’ve never really spoken about this before, but at the time it was a choice for me between fighting Snapchat or Twitter. I chose Twitter—or rather, Twitter chose me. I heard about the story of John Doe #1 and John Doe #2, and I was so unbelievably broken over it that I went to war for multiple years. I was and still am royally pissed about that case. As far as I was concerned, the John Doe #1 case proved that whatever was going on with corporate tech social media was so out of control that I didn’t have time to wait, so I got to work. It was reading the messages that John Doe #1 sent to Twitter begging them to remove his sexual exploitation that broke me. He was a child begging adults to do something. A passion for justice and protecting kids makes you do wild things. I was desperate to find answers about what happened and searched for solutions. In the end, the platform Twitter was purchased. During the acquisition, I just asked Mr. Musk nicely to prioritize the issue of detection and removal of child sexual exploitation without violating digital privacy rights or eroding end-to-end encryption. Elon thanked me multiple times during the acquisition, made some changes, and I was thanked by others on the survivors’ side as well.
I still feel that even with the progress made, I really just scratched the surface with Twitter, now X. I left that passion project when I did for a few reasons. I wanted to give new leadership time to tackle the issue. Elon Musk made big promises that I knew would take a while to fulfill, but mostly I had been watching global legislation transpire around the issue, and frankly, the governments are willing to go much further with X and the rest of corporate tech than I ever would. My work begging Twitter to make changes with easier reporting of content, detection, and removal of child sexual exploitation material—without violating privacy rights or eroding end-to-end encryption—and advocating for the minor survivors of the platform went as far as my principles would have allowed. I’m grateful for that experience. I was still left with a nagging question: “How did things get so bad with Twitter where the John Doe #1 and John Doe #2 case was able to happen in the first place?” I decided to keep looking for answers. I decided to keep pulling the thread.
I never worked for Twitter. This is often confusing for folks. I will say that despite being disappointed in the platform’s leadership at times, I loved Twitter. I saw and still see its value. I definitely love the survivors of the platform, but I also loved the platform. I was a champion of the platform’s ability to give folks from virtually around the globe an opportunity to speak and be heard.
I want to be clear that John Doe #1 really is my why. He is the inspiration. I am writing this because of him. He represents so many globally, and I’m still inspired by his bravery. One child’s voice begging adults to do something—I’m an adult, I heard him. I’d go to war a thousand more lifetimes for that young man, and I don’t even know his name. Fighting has been personally dark at times; I’m not even going to try to sugarcoat it, but it has been worth it.
The data surrounding the very real crime of online child sexual exploitation is available to the public online at any time for anyone to see. I’d encourage you to go look at the data for yourself. I believe in encouraging folks to check multiple sources so that you understand the full picture. If you are uncomfortable just searching around the internet for information about this topic, use the terms “CSAM,” “CSEM,” “SG-CSEM,” or “AI Generated CSAM.” The numbers don’t lie—it’s a nightmare that’s out of control. It’s a big business. The demand is high, and unfortunately, business is booming. Organizations collect the data, tech companies often post their data, governments report frequently, and the corporate press has covered a decent portion of the conversation, so I’m sure you can find a source that you trust.
Technology is changing rapidly, which is great for innovation as a whole but horrible for the crime of online child sexual exploitation. Those wishing to exploit the vulnerable seem to be adapting to each technological change with ease. The governments are so far behind with tackling these issues that as I’m typing this, it’s borderline irrelevant to even include them while speaking about the crime or potential solutions. Technology is changing too rapidly, and their old, broken systems can’t even dare to keep up. Think of it like the governments’ “War on Drugs.” Drugs won. In this case as well, the governments are not winning. The governments are talking about maybe having a meeting on potentially maybe having legislation around the crimes. The time to have that meeting would have been many years ago. I’m not advocating for governments to legislate our way out of this. I’m on the side of educating and innovating our way out of this.
I have been clear while advocating for the minor survivors of corporate tech platforms that I would not advocate for any solution to the crime that would violate digital privacy rights or erode end-to-end encryption. That has been a personal moral position that I was unwilling to budge on. This is an extremely unpopular and borderline nonexistent position in the anti-human trafficking movement and online child protection space. I’m often fearful that I’m wrong about this. I have always thought that a better pathway forward would have been to incentivize innovation for detection and removal of content. I had no previous exposure to privacy rights activists or Cypherpunks—actually, I came to that conclusion by listening to the voices of MENA region political dissidents and human rights activists. After developing relationships with human rights activists from around the globe, I realized how important privacy rights and encryption are for those who need it most globally. I was simply unwilling to give more power, control, and opportunities for mass surveillance to big abusers like governments wishing to enslave entire nations and untrustworthy corporate tech companies to potentially end some portion of abuses online. On top of all of it, it has been clear to me for years that all potential solutions outside of violating digital privacy rights to detect and remove child sexual exploitation online have not yet been explored aggressively. I’ve been disappointed that there hasn’t been more of a conversation around preventing the crime from happening in the first place.
What has been tried is mass surveillance. In China, they are currently under mass surveillance both online and offline, and their behaviors are attached to a social credit score. Unfortunately, even on state-run and controlled social media platforms, they still have child sexual exploitation and abuse imagery pop up along with other crimes and human rights violations. They also have a thriving black market online due to the oppression from the state. In other words, even an entire loss of freedom and privacy cannot end the sexual exploitation of children online. It’s been tried. There is no reason to repeat this method.
It took me an embarrassingly long time to figure out why I always felt a slight coldness from those in tech and privacy-minded individuals about the topic of child sexual exploitation online. I didn’t have any clue about the “Four Horsemen of the Infocalypse.” This is a term coined by Timothy C. May in 1988. I would have been a child myself when he first said it. I actually laughed at myself when I heard the phrase for the first time. I finally got it. The Cypherpunks weren’t wrong about that topic. They were so spot on that it is borderline uncomfortable. I was mad at first that they knew that early during the birth of the internet that this issue would arise and didn’t address it. Then I got over it because I realized that it wasn’t their job. Their job was—is—to write code. Their job wasn’t to be involved and loving parents or survivor advocates. Their job wasn’t to educate children on internet safety or raise awareness; their job was to write code.
They knew that child sexual abuse material would be shared on the internet. They said what would happen—not in a gleeful way, but a prediction. Then it happened.
I equate it now to a concrete company laying down a road. As you’re pouring the concrete, you can say to yourself, “A terrorist might travel down this road to go kill many, and on the flip side, a beautiful child can be born in an ambulance on this road.” Who or what travels down the road is not their responsibility—they are just supposed to lay the concrete. I’d never go to a concrete pourer and ask them to solve terrorism that travels down roads. Under the current system, law enforcement should stop terrorists before they even make it to the road. The solution to this specific problem is not to treat everyone on the road like a terrorist or to not build the road.
So I understand the perceived coldness from those in tech. Not only was it not their job, but bringing up the topic was seen as the equivalent of asking a free person if they wanted to discuss one of the four topics—child abusers, terrorists, drug dealers, intellectual property pirates, etc.—that would usher in digital authoritarianism for all who are online globally.
Privacy rights advocates and groups have put up a good fight. They stood by their principles. Unfortunately, when it comes to corporate tech, I believe that the issue of privacy is almost a complete lost cause at this point. It’s still worth pushing back, but ultimately, it is a losing battle—a ticking time bomb.
I do think that corporate tech providers could have slowed down the inevitable loss of privacy at the hands of the state by prioritizing the detection and removal of CSAM when they all started online. I believe it would have bought some time, fewer would have been traumatized by that specific crime, and I do believe that it could have slowed down the demand for content. If I think too much about that, I’ll go insane, so I try to push the “if maybes” aside, but never knowing if it could have been handled differently will forever haunt me. At night when it’s quiet, I wonder what I would have done differently if given the opportunity. I’ll probably never know how much corporate tech knew and ignored in the hopes that it would go away while the problem continued to get worse. They had different priorities. The most voiceless and vulnerable exploited on corporate tech never had much of a voice, so corporate tech providers didn’t receive very much pushback.
Now I’m about to say something really wild, and you can call me whatever you want to call me, but I’m going to say what I believe to be true. I believe that the governments are either so incompetent that they allowed the proliferation of CSAM online, or they knowingly allowed the problem to fester long enough to have an excuse to violate privacy rights and erode end-to-end encryption. The US government could have seized the corporate tech providers over CSAM, but I believe that they were so useful as a propaganda arm for the regimes that they allowed them to continue virtually unscathed.
That season is done now, and the governments are making the issue a priority. It will come at a high cost. Privacy on corporate tech providers is virtually done as I’m typing this. It feels like a death rattle. I’m not particularly sure that we had much digital privacy to begin with, but the illusion of a veil of privacy feels gone.
To make matters slightly more complex, it would be hard to convince me that once AI really gets going, digital privacy will exist at all.
I believe that there should be a conversation shift to preserving freedoms and human rights in a post-privacy society.
I don’t want to get locked up because AI predicted a nasty post online from me about the government. I’m not a doomer about AI—I’m just going to roll with it personally. I’m looking forward to the positive changes that will be brought forth by AI. I see it as inevitable. A bit of privacy was helpful while it lasted. Please keep fighting to preserve what is left of privacy either way because I could be wrong about all of this.
On the topic of AI, the addition of AI to the horrific crime of child sexual abuse material and child sexual exploitation in multiple ways so far has been devastating. It’s currently out of control. The genie is out of the bottle. I am hopeful that innovation will get us humans out of this, but I’m not sure how or how long it will take. We must be extremely cautious around AI legislation. It should not be illegal to innovate even if some bad comes with the good. I don’t trust that the governments are equipped to decide the best pathway forward for AI. Source: the entire history of the government.
I have been personally negatively impacted by AI-generated content. Every few days, I get another alert that I’m featured again in what’s called “deep fake pornography” without my consent. I’m not happy about it, but what pains me the most is the thought that for a period of time down the road, many globally will experience what myself and others are experiencing now by being digitally sexually abused in this way. If you have ever had your picture taken and posted online, you are also at risk of being exploited in this way. Your child’s image can be used as well, unfortunately, and this is just the beginning of this particular nightmare. It will move to more realistic interpretations of sexual behaviors as technology improves. I have no brave words of wisdom about how to deal with that emotionally. I do have hope that innovation will save the day around this specific issue. I’m nervous that everyone online will have to ID verify due to this issue. I see that as one possible outcome that could help to prevent one problem but inadvertently cause more problems, especially for those living under authoritarian regimes or anyone who needs to remain anonymous online. A zero-knowledge proof (ZKP) would probably be the best solution to these issues. There are some survivors of violence and/or sexual trauma who need to remain anonymous online for various reasons. There are survivor stories available online of those who have been abused in this way. I’d encourage you seek out and listen to their stories.
There have been periods of time recently where I hesitate to say anything at all because more than likely AI will cover most of my concerns about education, awareness, prevention, detection, and removal of child sexual exploitation online, etc.
Unfortunately, some of the most pressing issues we’ve seen online over the last few years come in the form of “sextortion.” Self-generated child sexual exploitation (SG-CSEM) numbers are continuing to be terrifying. I’d strongly encourage that you look into sextortion data. AI + sextortion is also a huge concern. The perpetrators are using the non-sexually explicit images of children and putting their likeness on AI-generated child sexual exploitation content and extorting money, more imagery, or both from minors online. It’s like a million nightmares wrapped into one. The wild part is that these issues will only get more pervasive because technology is harnessed to perpetuate horror at a scale unimaginable to a human mind.
Even if you banned phones and the internet or tried to prevent children from accessing the internet, it wouldn’t solve it. Child sexual exploitation will still be with us until as a society we start to prevent the crime before it happens. That is the only human way out right now.
There is no reset button on the internet, but if I could go back, I’d tell survivor advocates to heed the warnings of the early internet builders and to start education and awareness campaigns designed to prevent as much online child sexual exploitation as possible. The internet and technology moved quickly, and I don’t believe that society ever really caught up. We live in a world where a child can be groomed by a predator in their own home while sitting on a couch next to their parents watching TV. We weren’t ready as a species to tackle the fast-paced algorithms and dangers online. It happened too quickly for parents to catch up. How can you parent for the ever-changing digital world unless you are constantly aware of the dangers?
I don’t think that the internet is inherently bad. I believe that it can be a powerful tool for freedom and resistance. I’ve spoken a lot about the bad online, but there is beauty as well. We often discuss how victims and survivors are abused online; we rarely discuss the fact that countless survivors around the globe have been able to share their experiences, strength, hope, as well as provide resources to the vulnerable. I do question if giving any government or tech company access to censorship, surveillance, etc., online in the name of serving survivors might not actually impact a portion of survivors negatively. There are a fair amount of survivors with powerful abusers protected by governments and the corporate press. If a survivor cannot speak to the press about their abuse, the only place they can go is online, directly or indirectly through an independent journalist who also risks being censored. This scenario isn’t hard to imagine—it already happened in China. During #MeToo, a survivor in China wanted to post their story. The government censored the post, so the survivor put their story on the blockchain. I’m excited that the survivor was creative and brave, but it’s terrifying to think that we live in a world where that situation is a necessity.
I believe that the future for many survivors sharing their stories globally will be on completely censorship-resistant and decentralized protocols. This thought in particular gives me hope. When we listen to the experiences of a diverse group of survivors, we can start to understand potential solutions to preventing the crimes from happening in the first place.
My heart is broken over the gut-wrenching stories of survivors sexually exploited online. Every time I hear the story of a survivor, I do think to myself quietly, “What could have prevented this from happening in the first place?” My heart is with survivors.
My head, on the other hand, is full of the understanding that the internet should remain free. The free flow of information should not be stopped. My mind is with the innocent citizens around the globe that deserve freedom both online and offline.
The problem is that governments don’t only want to censor illegal content that violates human rights—they create legislation that is so broad that it can impact speech and privacy of all. “Don’t you care about the kids?” Yes, I do. I do so much that I’m invested in finding solutions. I also care about all citizens around the globe that deserve an opportunity to live free from a mass surveillance society. If terrorism happens online, I should not be punished by losing my freedom. If drugs are sold online, I should not be punished. I’m not an abuser, I’m not a terrorist, and I don’t engage in illegal behaviors. I refuse to lose freedom because of others’ bad behaviors online.
I want to be clear that on a long enough timeline, the governments will decide that they can be better parents/caregivers than you can if something isn’t done to stop minors from being sexually exploited online. The price will be a complete loss of anonymity, privacy, free speech, and freedom of religion online. I find it rather insulting that governments think they’re better equipped to raise children than parents and caretakers.
So we can’t go backwards—all that we can do is go forward. Those who want to have freedom will find technology to facilitate their liberation. This will lead many over time to decentralized and open protocols. So as far as I’m concerned, this does solve a few of my worries—those who need, want, and deserve to speak freely online will have the opportunity in most countries—but what about online child sexual exploitation?
When I popped up around the decentralized space, I was met with the fear of censorship. I’m not here to censor you. I don’t write code. I couldn’t censor anyone or any piece of content even if I wanted to across the internet, no matter how depraved. I don’t have the skills to do that.
I’m here to start a conversation. Freedom comes at a cost. You must always fight for and protect your freedom. I can’t speak about protecting yourself from all of the Four Horsemen because I simply don’t know the topics well enough, but I can speak about this one topic.
If there was a shortcut to ending online child sexual exploitation, I would have found it by now. There isn’t one right now. I believe that education is the only pathway forward to preventing the crime of online child sexual exploitation for future generations.
I propose a yearly education course for every child of all school ages, taught as a standard part of the curriculum. Ideally, parents/caregivers would be involved in the education/learning process.
Course:
- The creation of the internet and computers
- The fight for cryptography
- The tech supply chain from the ground up (example: human rights violations in the supply chain)
- Corporate tech
- Freedom tech
- Data privacy
- Digital privacy rights
- AI (history-current)
- Online safety (predators, scams, catfishing, extortion)
- Bitcoin
- Laws
- How to deal with online hate and harassment
- Information on who to contact if you are being abused online or offline
- Algorithms
- How to seek out the truth about news, etc., online
The parents/caregivers, homeschoolers, unschoolers, and those working to create decentralized parallel societies have been an inspiration while writing this, but my hope is that all children would learn this course, even in government ran schools. Ideally, parents would teach this to their own children.
The decentralized space doesn’t want child sexual exploitation to thrive. Here’s the deal: there has to be a strong prevention effort in order to protect the next generation. The internet isn’t going anywhere, predators aren’t going anywhere, and I’m not down to let anyone have the opportunity to prove that there is a need for more government. I don’t believe that the government should act as parents. The governments have had a chance to attempt to stop online child sexual exploitation, and they didn’t do it. Can we try a different pathway forward?
I’d like to put myself out of a job. I don’t want to ever hear another story like John Doe #1 ever again. This will require work. I’ve often called online child sexual exploitation the lynchpin for the internet. It’s time to arm generations of children with knowledge and tools. I can’t do this alone.
Individuals have fought so that I could have freedom online. I want to fight to protect it. I don’t want child predators to give the government any opportunity to take away freedom. Decentralized spaces are as close to a reset as we’ll get with the opportunity to do it right from the start. Start the youth off correctly by preventing potential hazards to the best of your ability.
The good news is anyone can work on this! I’d encourage you to take it and run with it. I added the additional education about the history of the internet to make the course more educational and fun. Instead of cleaning up generations of destroyed lives due to online sexual exploitation, perhaps this could inspire generations of those who will build our futures. Perhaps if the youth is armed with knowledge, they can create more tools to prevent the crime.
This one solution that I’m suggesting can be done on an individual level or on a larger scale. It should be adjusted depending on age, learning style, etc. It should be fun and playful.
This solution does not address abuse in the home or some of the root causes of offline child sexual exploitation. My hope is that it could lead to some survivors experiencing abuse in the home an opportunity to disclose with a trusted adult. The purpose for this solution is to prevent the crime of online child sexual exploitation before it occurs and to arm the youth with the tools to contact safe adults if and when it happens.
In closing, I went to hell a few times so that you didn’t have to. I spoke to the mothers of survivors of minors sexually exploited online—their tears could fill rivers. I’ve spoken with political dissidents who yearned to be free from authoritarian surveillance states. The only balance that I’ve found is freedom online for citizens around the globe and prevention from the dangers of that for the youth. Don’t slow down innovation and freedom. Educate, prepare, adapt, and look for solutions.
I’m not perfect and I’m sure that there are errors in this piece. I hope that you find them and it starts a conversation.
-
@ b0a838f2:34ed3f19
- Chibisafe - File uploader service that aims to be easy to use and set up. It accepts files, photos, documents, anything you imagine and gives you back a shareable link for you to send to others. (Source Code)
MIT
Docker/Nodejs
- Digirecord - Record and share audio files (documentation in French). (Source Code)
AGPL-3.0
Nodejs/PHP
- elixire - Simple yet advanced screenshot uploading and link shortening service. (Clients)
AGPL-3.0
Python
- Enclosed - Minimalistic web application designed for sending private and secure notes. (Demo, Source Code)
Apache-2.0
Docker/Nodejs
- Files Sharing - File sharing application based on unique and temporary links.
GPL-3.0
PHP/Docker
- Gokapi - Lightweight server to share files, which expire after a set amount of downloads or days. Similar to the discontinued Firefox Send, with the difference that only the admin is allowed to upload files.
GPL-3.0
Go/Docker
- goploader - Easy file sharing with server-side encryption, curl/httpie/wget compliant. (Source Code)
MIT
Go
- GoSƐ - Modern file-uploader focusing on scalability and simplicity. It only depends on a S3 storage backend and hence scales horizontally without the need for additional databases or caches.
Apache-2.0
Go/Docker
- OnionShare - Securely and anonymously share a file of any size.
GPL-3.0
Python/deb
- Pairdrop - Local file sharing in your browser, inspired by Apple's AirDrop (fork of Snapdrop). (Source Code)
GPL-3.0
Docker
- PicoShare - Minimalist, easy-to-host service for sharing images and other files. (Demo, Source Code)
AGPL-3.0
Go/Docker
- Picsur - Simple image hosting platform that allows you to easily host, edit, and share images. (Demo)
AGPL-3.0
Docker
- PictShare - Multi lingual image hosting service with a simple resizing and upload API. (Source Code)
Apache-2.0
PHP/Docker
- Pingvin Share - File sharing platform that combines lightness and beauty, perfect for seamless and efficient file sharing. (Demo)
BSD-2-Clause
Docker/Nodejs
- Plik - Scalable and friendly temporary file upload system. (Demo)
MIT
Go/Docker
- ProjectSend - Upload files and assign them to specific clients you create. Give access to those files to your clients. (Source Code)
GPL-2.0
PHP
- PsiTransfer - Simple file sharing solution with robust up-/download-resume and password protection.
BSD-2-Clause
Nodejs
- QuickShare - Quick and simple file sharing between different devices. (Source Code)
LGPL-3.0
Docker/Go
- Sharry - Share files easily over the internet between authenticated and anonymous users (both ways) with resumable up- and downloads.
GPL-3.0
Scala/Java/deb/Docker
- Shifter - A simple, self-hosted file-sharing web app, powered by Django.
MIT
Docker
- Slink - Image sharing platform designed to give users complete control over their media sharing experience. (Source Code)
AGPL-3.0
Docker
- transfer.sh - Easy file sharing from the command line.
MIT
Go
- Uguu - Stores files and deletes after X amount of time.
MIT
PHP
- Uploady - Uploady is a simple file uploader script with multi file upload support.
MIT
PHP
- XBackBone - A simple, fast and lightweight file manager with instant sharing tools integration, like ShareX (a free and open-source screenshot utility for Windows). (Source Code)
AGPL-3.0
PHP/Docker
- Zipline - A lightweight, fast and reliable file sharing server that is commonly used with ShareX, offering a react-based Web UI and fast API.
MIT
Docker/Nodejs
-
@ b0a838f2:34ed3f19
2025-05-23 17:56:45- bittorrent-tracker - Simple, robust, BitTorrent tracker (client and server) implementation. (Source Code)
MIT
Nodejs
- Deluge - Lightweight, cross-platform BitTorrent client. (Source Code)
GPL-3.0
Python/deb
- qBittorrent - Free cross-platform bittorrent client with a feature rich Web UI for remote access. (Source Code)
GPL-2.0
C++
- Send - Simple, private, end to end encrypted temporary file sharing, originally built by Mozilla. (Clients)
MPL-2.0
Nodejs/Docker
- slskd ⚠ - A modern client-server application for the Soulseek file sharing network.
AGPL-3.0
Docker/C#
- Transmission - Fast, easy, free Bittorrent client. (Source Code)
GPL-3.0
C++/deb
- Webtor - Web-based torrent client with instant audio/video streaming. (Demo)
MIT
Docker
-
@ ec42c765:328c0600
2024-12-13 08:16:32This is the day-12 article of the Nostr Advent Calendar 2024.
Yesterday, 12/11, was きりの's article 2024年のNostrリレー運営を振り返る (Looking back on running Nostr relays in 2024).
I made nostr-zap-view
Repository: https://github.com/Lokuyow/nostr-zap-view/
Demo page: https://lokuyow.github.io/nostr-zap-view/
What is it?
It's a thing that shows a list of zaps (tips) addressed to a specific person or thing, and that you can embed on your own website.
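I won't go into the internals in this post, but conceptually a viewer like this just needs to subscribe to NIP-57 zap receipts (kind 9735) that tag the target person or note. Roughly, a relay query like the following (illustrative only, with placeholder values; not the library's actual code):

```json
["REQ", "zap-view", {"kinds": [9735], "#e": ["<event id of the zapped post>"], "limit": 30}]
```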
Examples where I've embedded it on my own sites:
- SNS link collection page (at the very bottom): https://lokuyow.github.io/
- おいくらサッツ (zap list button): https://osats.money/
- 今日からビットコ (at the very bottom): https://lokuyow.github.io/btc-dca-simulator/
Why did I make it?
It builds on my advent calendar article from last year,
【Nostr】Webサイトにビットコインの投げ銭ボタンを設置しよう【Zap】 (Let's put a Bitcoin tipping button on your website)
https://spotlight.soy/detail?article_id=ucd7cbrql/
but that one is long, so to summarize:
- There are tools for sending zaps, but nothing for viewing them, and I wanted one
- It's a waste to keep zaps confined to Nostr (that is, to typical kind:1 clients)
- Website ads are annoying, and I'd like them to be replaced by zaps (or something zap-like)
Who are you?
Not an engineer, not a programmer.
Just someone who has AI write the code.
Thoughts after making it
It's done.
Thoughts after making it, part 2
Only after finishing it did I realize what I had really wanted to make.
I want zaps to be displayed directly in a place like this.
The image is a personal blog or homepage (rather than a polished commercial blog) with a "Sponsored by" box in a corner where names are displayed.
At that point I feel like there's no need to even show the words Zap, Bitcoin, or Nostr, or to explain them.
What I have in mind is Nico Nico Ads (ニコニ広告) aimed at websites + Super Chat + the list of donors you see at festivals and shrines.
And what occurred to me was:
tips from individuals will only ever amount to fan-style support, but
if companies created Nostr accounts and zapped websites, that would become corporate advertising, wouldn't it!?
~~Companies making Nostr accounts!? The only ones I've ever seen are escort services!~~
What's next
Just ideas for now; nothing is decided.
- Switch from the button → dialog format to a banner / embed format so zaps are visible as soon as the page loads
  - This will probably put load on relays, so do something about that
- Don't show the word "Zap"; use something like "Sponsored by" instead
- Stop using a simple newest-first order
  - Trim out small zaps
  - Sort by amount over a fixed period (e.g., one month)
  - This will probably put load on relays, so do something about that
- Right now I'm just unilaterally treating zaps addressed to a post as zaps addressed to the website, so make them proper website-addressed zaps
  - This needs a NIP proposal
  - Wallets would also need to support it
- I'd also like zaps addressed to relays (wss://~) to be possible
The future
Let me tip everything on the internet.
The end
Tomorrow's article is by mono: Open Sats 申請編 (applying to Open Sats)!!
-
@ 97c70a44:ad98e322
2025-02-17 14:29:00Everyone knows that relays are central to how nostr works - they're even in the name: Notes and Other Stuff Transmitted by Relays. As time goes on though, there are three other letters which are becoming conspicuously absent from our beloved and ambiguously pronounceable acronym - "D", "V", and "M".
For the uninitiated, DVM stands for "data vending machines". They're actually sort of hard to describe — in technical terms they act more like clients, since they simply read events from and publish events to relays. In most cases though, these events are part of a request/response flow initiated by users elsewhere on the network. In practice, DVMs are bots, but there's also nothing to prevent the work they do from being powered by human interaction. They're an amazingly flexible tool for building anything from custom feeds, to transcription services, to chatbots, to protocol gateways.
The hype cycle for DVMs seems to have reached escape velocity in a way few other things have - zaps being the possible exception. But what exactly DVMs are remains something of a mystery to many nostr developers - and how to build one may as well be written on clay tablets.
This blog post is designed to address that - below is a soup to nuts (no nutzaps though) guide to building a DVM flow, both from the client and the server side.
Here's what we'll be covering:
- Discovering DVM metadata
- Basic request/response flow
- Implementing a minimal example
Let's get started!
DVM Metadata
First of all, it's helpful to know how DVMs are reified on the nostr network. While not strictly necessary, this can be useful for discovering DVMs and presenting them to users, and for targeting specific DVMs we want a response from.
NIP 89 goes into this in more detail, but the basic idea is that anyone can create a `kind 31990` "application handler" event and publish it to the network with their own (or a dedicated) public key. This handler was originally intended to advertise clients, but has been re-purposed for DVM listings as well.
Here's what the "Fluffy Frens" handler looks like:
json { "content": "{\"name\": \"Fluffy Frens\", \"picture\": \"https://image.nostr.build/f609311532c470f663e129510a76c9a1912ae9bc4aaaf058e5ba21cfb512c88e.jpg\", \"about\": \"I show recent notes about animals\", \"lud16\": \"discovery_content_fluffy@nostrdvm.com\", \"supportsEncryption\": true, \"acceptsNutZaps\": false, \"personalized\": false, \"amount\": \"free\", \"nip90Params\": {\"max_results\": {\"required\": false, \"values\": [], \"description\": \"The number of maximum results to return (default currently 100)\"}}}", "created_at": 1738874694, "id": "0aa8d1f19cfe17e00ce55ca86fea487c83be39a1813601f56f869abdfa776b3c", "kind": 31990, "pubkey": "7b7373dd58554ff4c0d28b401b9eae114bd92e30d872ae843b9a217375d66f9d", "sig": "22403a7996147da607cf215994ab3b893176e5302a44a245e9c0d91214e4c56fae40d2239dce58ea724114591e8f95caed2ba1a231d09a6cd06c9f0980e1abd5", "tags": [ ["k", "5300"], ["d", "198650843898570c"] ] }
This event is rendered in various clients using the kind-0-style metadata contained in the
content
field, allowing users to browse DVMs and pick one for their use case. If a user likes using a particular DVM, they might publish akind 31989
"application recommendation", which other users can use to find DVMs that are in use within their network.Note the
k
tag in the handler event - this allows DVMs to advertise support only for specific job types. It's also important to note that even though the spec doesn't cover relay selection, most clients use the publisher'skind 10002
event to find out where the DVM listens for events.If this looks messy to you, you're right. See this PR for a proposal to split DVMs out into their own handler kind, give them a dedicated pubkey along with dedicated metadata and relay selections, and clean up the data model a bit.
DVM Flow
Now that we know what a DVM looks like, we can start to address how they work. My explanation below will elide some of the detail involved in NIP 90 for simplicity, so I encourage you to read the complete spec.
The basic DVM flow can be a little (very) confusing to work with, because in essence it's a request/response paradigm, but it has some additional wrinkles.
First of all, the broker for the request isn't abstracted away as is usually the case with request/response flows. Regular HTTP requests involve all kinds of work in the background - from resolving domain names to traversing routers, VPNs, and ISP infrastructure. But developers don't generally have to care about all these intermediaries.
With DVMs, on the other hand, the essential complexity of relay selection can't simply be ignored. DVMs often advertise their own relay selections, which should be used rather than a hard-coded or randomly chosen relay to ensure messages are delivered. The benefit of this is that DVMs can avoid censorship, just as users can, by choosing relays that are willing to broker their activity. DVMs can even select multiple relays to broker requests, which means that clients might receive multiple copies of the same response.
Secondly, the DVM request/response model is far more fluid than is usually the case with request/response flows. There are a set of standard practices, but the flow is flexible enough to admit exceptions to these conventions for special use cases. Here are some examples:
- Normally, clients p-tag the DVM they wish to address. But if a client isn't picky about where a response comes from, they may choose to send an open request to the network and collect responses from multiple DVMs simultaneously.
- Normally, a client creates a request before collecting responses using a subscription with an e-tag filter matching the request event. But clients may choose to skip the request step entirely and collect responses from the network that have already been created. This can be useful for computationally intensive tasks or common queries, where a single result can be re-used multiple times.
- Sometimes, a DVM may respond with a
kind 7000
job status event to let clients know they're working on the request. This is particularly useful for longer-running tasks, where feedback is useful for building a responsive UX. - There are also some details in the spec regarding monetization, parameterization, error codes, encryption, etc.
Example DVM implementation
For the purposes of this blog post, I'll keep things simple by illustrating the most common kind of DVM flow: a
kind 5300
content discovery request, addressed to a particular DVM. If you're interested in other use cases, please visit data-vending-machines.org for additional documented kinds.The basic flow looks like this:
- The DVM starts by listening for
kind 5300
job requests on some relays it has selected and advertised via NIP 89 (more on that later) - A client creates a request event of
kind 5300
, p-tagged with the DVM's pubkey and sends it to the DVM's relay selections. - The DVM receives the event and processes it, issuing optional
kind 7000
job status events, and eventually issuing akind 6300
job result event (job result event kinds are always 1000 greater than the request's kind). - The client listens to the same relays for a response, and when it comes through does whatever it wants to with it.
Here's a swimlane diagram of that flow:
To avoid massive code samples, I'm going to implement our DVM entirely using nak (backed by the power of the human mind).
The first step is to start our DVM listening for requests that it wants to respond to. Nak's default pubkey is
79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798
, so we'll only listen for requests sent to nak.bash nak req -k 5300 -t p=79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798
This gives us the following filter:
json ["REQ","nak",{"kinds":[5300],"#p":["79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798"]}]
To open a subscription to
nos.lol
and stream job requests, add--stream wss://nos.lol
to the previous request and leave it running.Next, open a new terminal window for our "client" and create a job request. In this case, there's nothing we need to provide as
input
, but we'll include it just for illustration. It's also good practice to include anexpiration
tag so we're not asking relays to keep our ephemeral requests forever.bash nak event -k 5300 -t p=79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798 -t expiration=$(( $(date +%s) + 30 )) -t input=hello
Here's what comes out:
json { "kind": 5300, "id": "0e419d0b3c5d29f86d2132a38ca29cdfb81a246e1a649cb2fe1b9ed6144ebe30", "pubkey": "79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798", "created_at": 1739407684, "tags": [ ["p", "79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798"], ["expiration", "1739407683"], ["input", "hello"] ], "content": "", "sig": "560807548a75779a7a68c0ea73c6f097583e2807f4bb286c39931e99a4e377c0a64af664fa90f43e01ddd1de2e9405acd4e268f1bf3bc66f0ed5a866ea093966" }
Now go ahead and publish this event by adding nos.lol to the end of your nak command. If all goes well, you should see your event pop up in your "dvm" subscription. If so, great! That's half of the flow.
Next, we'll want our client to start listening for kind 6300 responses to the request. In your "client" terminal window, run:
bash nak req -k 6300 -t e=<your-eventid-here> --stream nos.lol
Note that if you only want to accept responses from the specified DVM (a good policy in general to avoid spam) you would include a
p
tag here. I've omitted it for brevity. Also notice thek
tag specifies the request kind plus1000
- this is just a convention for what kinds requests and responses use.Now, according to data-vending-machines.org,
kind 5300
responses are supposed to put a JSON-encoded list of e-tags in thecontent
field of the response. Weird, but ok. Stop the subscription in your "dvm" terminal and respond to your "client" with a recommendation to read my first note:bash nak event -k 6300 -t e=a65665a3a4ca2c0d7b7582f4f0d073cd1c83741c25a07e98d49a43e46d258caf -c '[["e","214f5898a7b75b7f95d9e990b706758ea525fe86db54c1a28a0f418c357f9b08","wss://nos.lol/"]]' nos.lol
Here's the response event we're sending:
json { "kind": 6300, "id": "bb5f38920cbca15d3c79021f7d0051e82337254a84c56e0f4182578e4025232e", "pubkey": "79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798", "created_at": 1739408411, "tags": [ ["e", "a65665a3a4ca2c0d7b7582f4f0d073cd1c83741c25a07e98d49a43e46d258caf"] ], "content": "[[\"e\",\"214f5898a7b75b7f95d9e990b706758ea525fe86db54c1a28a0f418c357f9b08\",\"wss://nos.lol/\"]]", "sig": "a0fe2c3419c5c54cf2a6d9a2a5726b2a5b766d3c9e55d55568140979354003aacb038e90bdead43becf5956faa54e3b60ff18c0ea4d8e7dfdf0c8dd97fb24ff9" }
Notice the e tag targets our original request.
This should result in the job result event showing up in our "client" terminal. Success!
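If you want to do something with that result programmatically, remember that the content field is just JSON, so you can pull the recommended event ids out with a little jq. A quick sketch, assuming you have jq installed and reusing the same subscription from above:

```bash
# Stream job results for our request and print the recommended event ids
# from the JSON-encoded list of e-tags in the content field.
nak req -k 6300 -t e=<your-eventid-here> --stream nos.lol \
  | jq -r '.content | fromjson | .[] | select(.[0] == "e") | .[1]'
```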
If something isn't working, I've also created a video of the full process with some commentary which you can find here.
Note that in practice, DVMs can be much more picky about the requests they will respond to, due to implementations failing to follow Postel's law. Hopefully that will improve over time. For now, the documented kinds and reference implementations at data-vending-machines.org are a useful starting point when working with or developing DVMs.
Conclusion
I started this post by hinting that DVMs might be as fundamental as relays are to making nostr work. But (apart from the fact that we'd end up with an acronym like DVMNOSTRZ+*, which would only exacerbate the pronunciation wars, if such a thing were possible), that's not exactly true.
DVMs have emerged as a central paradigm in the nostr world because they're a generalization of a design pattern unique to nostr's architecture - but which exists in many other places, including NIP 46 signer flows and NIP 47 wallet connect. Each of these sub-protocols works by using relays as neutral brokers for requests in order to avoid coupling services to web addresses.
This approach has all kinds of neat benefits, not least of which is allowing service providers to host their software without having to accept incoming TCP connections. But it's really an emergent property of relays, which not only are useful for brokering communication between users (aka storing events), but also brokering communication between machines.
The possibilities of this architecture have only started to emerge, so be on the lookout for new applications, and don't be afraid to experiment - just please, don't serialize json inside json 🤦♂️
-
@ f9cf4e94:96abc355
2025-01-18 06:09:50For this example we will use:

| Name | Description |
| --------------- | ------------------------------------------------------------ |
| Raspberry Pi B+ | Cortex-A53 (ARMv8) 64-bit at 1.4GHz and 1 GB of LPDDR2 SDRAM |
| Pen drive | 16 GB |

I recommend using Ubuntu Server for this installation. You can download Ubuntu for Raspberry Pi here. The step-by-step guide for installing Ubuntu on the Raspberry Pi is available here. Do not install a desktop (such as xubuntu, lubuntu, xfce, etc.).
Step 1: Update the System 🖥️
First, update your system and install Tor:
bash apt update apt install tor
Step 2: Create the nrs.service Service File 🔧
Create the service file that will manage the Nostr server. You can do this with the following content:
```unit [Unit] Description=Nostr Relay Server Service After=network.target
[Service] Type=simple WorkingDirectory=/opt/nrs ExecStart=/opt/nrs/nrs-arm64 Restart=on-failure
[Install] WantedBy=multi-user.target ```
Step 3: Download the Nostr Binary 🚀
Download the latest Nostr binary here on GitHub.
Step 4: Create the Necessary Folders 📂
Now, create the folders for the application and the pen drive:
bash mkdir -p /opt/nrs /mnt/edriver
Step 5: List the Connected Devices 🔌
To find out which device you are going to use, list all connected devices:
bash lsblk
Step 6: Format the Pen Drive 💾
Choose the correct pen drive (for example, /dev/sda) and format it:
bash mkfs.vfat /dev/sda
Step 7: Mount the Pen Drive 💻
Mount the pen drive on the /mnt/edriver folder:
bash mount /dev/sda /mnt/edriver
Step 8: Check the Devices' UUIDs 📋
To make sure the system mounts the pen drive automatically, list the UUIDs of the connected devices:
bash blkid
Step 9: Edit fstab to Mount the Pen Drive Automatically 📝
Open the /etc/fstab file and add a line for the pen drive, using the UUID you obtained in the previous step. The line should look like this:
fstab UUID=9c9008f8-f852 /mnt/edriver vfat defaults 0 0
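To test the new fstab entry without rebooting, you can unmount the pen drive and let the system mount everything listed in fstab again. A quick sanity check, assuming the pen drive is not currently in use:

```bash
umount /mnt/edriver
mount -a
```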
Step 10: Copy the Binary to the Correct Folder 📥
Now, copy the downloaded binary to the /opt/nrs folder:
bash cp nrs-arm64 /opt/nrs
Step 11: Create the Configuration File 🛠️
Create the configuration file with the following content and save it to /opt/nrs/config.yaml:
yaml app_env: production info: name: Nostr Relay Server description: Nostr Relay Server pub_key: "" contact: "" url: http://localhost:3334 icon: https://external-content.duckduckgo.com/iu/?u= https://public.bnbstatic.com/image/cms/crawler/COINCU_NEWS/image-495-1024x569.png base_path: /mnt/edriver negentropy: true
Step 12: Copy the Service to the Systemd Directory ⚙️
Now, copy the nrs.service file to the /etc/systemd/system/ directory:
bash cp nrs.service /etc/systemd/system/
Reload the services and start the nrs service:
bash systemctl daemon-reload systemctl enable --now nrs.service
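To confirm that the relay service actually came up, and to follow its logs, you can ask systemd directly (a quick check, nothing more):

```bash
systemctl status nrs.service
journalctl -u nrs.service -f
```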
Step 13: Configure Tor 🌐
Open the Tor configuration file /var/lib/tor/torrc and add the following lines:
torrc HiddenServiceDir /var/lib/tor/nostr_server/ HiddenServicePort 80 127.0.0.1:3334
Step 14: Enable and Start Tor 🧅
Now, enable and start the Tor service:
bash systemctl enable --now tor.service
Tor will generate a .onion address for your Nostr server. You can find it in the file /var/lib/tor/nostr_server/hostname.
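To print the generated address directly (assuming the paths used above):

```bash
cat /var/lib/tor/nostr_server/hostname
```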
Notes ⚠️
- With this configuration, the data will be saved on the pen drive, while the binary will live on the Raspberry Pi's SD card.
- The .onion address of your Nostr server will look something like: ws://y3t5t5wgwjif<exemplo>h42zy7ih6iwbyd.onion.

Your Nostr server should now be configured and running over Tor! 🥳
If this article and the information in it are useful to you, please consider a donation to the author as a way of recognizing and encouraging the production of new content.
-
@ b0a838f2:34ed3f19
2025-05-23 17:56:27- GarageHQ - Geo-distributed, S3‑compatible storage service that can fulfill many needs. (Source Code)
AGPL-3.0
Docker/Rust
- Minio - Object storage server compatible with Amazon S3 APIs. (Source Code)
AGPL-3.0
Go/Docker/K8S
- SeaweedFS - SeaweedFS is an open source distributed file system supporting WebDAV, S3 API, FUSE mount, HDFS, etc, optimized for lots of small files, and easy to add capacity.
Apache-2.0
Go
- SFTPGo - Flexible, fully featured and highly configurable SFTP server with optional FTP/S and WebDAV support.
AGPL-3.0
Go/deb/Docker
- Zenko CloudServer - Zenko CloudServer, an open-source implementation of a server handling the Amazon S3 protocol. (Source Code)
Apache-2.0
Docker/Nodejs
- ZOT OCI Registry - A production-ready vendor-neutral OCI-native container image registry. (Demo, Source Code)
Apache-2.0
Go/Docker
-
@ 6bcc27d2:b67d296e
2024-10-21 03:54:32This is yugo. This post is my entry for October 19th of the "Nostrasia2024 reverse advent calendar". On the day of Nostrasia I watched the livestream in real time. After hearing a talk arguing that we should use Nostr to reinvent applications, I thought about what I would want to build myself, did a bit of research and experimentation, and wrote up the results here. I also built what is probably the world's first visionOS-compatible Nostr client, a very simple one, which I introduce at the end.
The talk about reinventing applications was kaiji's presentation titled "What is Nostr Other Stuff?".
The gist was that by reinventing existing applications on top of the Nostr protocol, we can encourage gradual decentralization without hurting the user experience, and Nostr itself grows as a protocol along the way.
I had never built anything with Nostr before, so, starting with almost no knowledge of the specs needed for an implementation, I thought about what kind of application I would like to build.
The first thing that came to mind was a networked knowledge base like Scrapbox. I have recently been running a visionOS study group, and I was considering adopting Scrapbox as a way to share knowledge within it.
The Nostr community also has a volunteer-run Scrapbox, but my assumption was that a Nostr client with comparable practicality probably does not exist yet, since people would use such a client if it did.
Could it be built by combining long-form posts, public chat and other features? Just as I was wondering about that, I learned there is a spec called NIP-54, Wiki.
https://github.com/nostr-protocol/nips/blob/master/54.md
I have not read it properly yet, but since Scrapbox is also wiki software it looks like a useful reference. It does not seem to have been adopted into the formal spec yet, and the only client I could find that implements it is wikistr, fiatjaf's reference implementation (?).
If there were a Nostr client aiming to be a Scrapbox-like knowledge base, being able to reuse the same account alongside the visionOS client described below would be great, so I would like to try building one. If anyone knows of similar services, please let me know.
We are also currently building our own tool to support collaborative work such as study groups, workshops and hackathons. It runs on visionOS, the platform that powers the Apple Vision Pro.
https://image.nostr.build/14f0c1b8fbe5ce7754825c01b09280a4c22f87bbf3c2fa6d60dd724f98919c34.png
On this screen you pick the space you want to join and start the shared experience.
By synchronizing content such as slides and your own avatar, you can share the same space as if you were together in person, even when you are far apart.
https://image.nostr.build/cfb75d3db2a9b9cd39f502d6426d5ef4f264b3d5d693b6fc9762735d2922b85c.jpg
With all that in mind, I quickly put together a visionOS-compatible client. Searching turned up no prior examples at all, so it is probably the first such app in the world.
That said, although it calls itself a client, it does not do much yet: it is read-only and only fetches data from a relay.
https://image.nostr.build/96e088cc6a082528682989ccc12b4312f9cb6277656e491578e32a0851ce50fe.png
In the image, my own profile data is being fetched from the relay.
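For reference, the request a read-only client like this sends to the relay is just a NIP-01 REQ message over the WebSocket, roughly like the following (the subscription id and pubkey here are placeholders):

```json
["REQ", "profile-sub", {"kinds": [0], "authors": ["<hex-pubkey>"], "limit": 1}]
```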
None of the libraries supported visionOS yet, so I struggled a bit, but it was a good way to learn the specs.
However, visionOS apps, like iOS apps, cannot use NIP-7, so you have to store the private key yourself, and I am not yet sure how to handle that going forward. I plan to look into it bit by bit as I find time, but there do not seem to be many resources about private key handling in native apps. If anyone is familiar with that kind of implementation, I would love to hear from you.
I hope to publish the code once it is ready.
From here on I want to keep playing with Nostr while gradually implementing all kinds of features!
-
@ 1bda7e1f:bb97c4d9
2025-01-02 05:19:08Tldr
- Nostr is an open and interoperable protocol
- You can integrate it with workflow automation tools to augment your experience
- n8n is a great low/no-code workflow automation tool which you can host yourself
- Nostrobots allows you to integrate Nostr into n8n
- In this blog I create some workflow automations for Nostr
- A simple form to delegate posting notes
- Push notifications for mentions on multiple accounts
- Push notifications for your favourite accounts when they post a note
- All workflows are provided as open source with MIT license for you to use
Inter-op All The Things
Nostr is a new open social protocol for the internet. This open nature is exciting because of the opportunities for interoperability with other technologies. In Using NFC Cards with Nostr I explored the nostr: URI to launch Nostr clients from a card tap.
The interoperability of Nostr doesn't stop there. The internet has many super-powers, and Nostr is open to all of them. Simply, there's no one to stop it. There is no one in charge, there are no permissioned APIs, and there are no risks of being de-platformed. If you can imagine technologies that would work well with Nostr, then any and all of them can ride on or alongside Nostr rails.
My mental model for why this is special is Google Wave ~2010. Google Wave was to be the next big platform. Lars was running it and had a big track record from Maps. I was excited for it. Then, Google pulled the plug. And, immediately all the time and capital invested in understanding and building on the platform was wasted.
This cannot happen to Nostr, as there is no one to pull the plug, and maybe even no plug to pull.
So long as users demand Nostr, Nostr will exist, and that is a pretty strong guarantee. It makes it worthwhile to invest in bringing Nostr into our other applications.
All we need are simple ways to plug things together.
Nostr and Workflow Automation
Workflow automation is about helping people to streamline their work. As a user, the most common way I achieve this is by connecting disparate systems together. By setting up one system to trigger another or to move data between systems, I can solve for many different problems and become way more effective.
n8n for workflow automation
Many workflow automation tools exist. My favourite is n8n. n8n is a low/no-code workflow automation platform which allows you to build all kinds of workflows. You can use it for free, you can self-host it, it has a user-friendly UI and useful API. Vs Zapier it can be far more elaborate. Vs Make.com I find it to be more intuitive in how it abstracts away the right parts of the code, but still allows you to code when you need to.
Most importantly you can plug anything into n8n: You have built-in nodes for specific applications. HTTP nodes for any other API-based service. And community nodes built by individual community members for any other purpose you can imagine.
Eating my own dogfood
It's very clear to me that there is a big design space here just demanding to be explored. If you could integrate Nostr with anything, what would you do?
In my view the best way for anyone to start anything is by solving their own problem first (aka "scratching your own itch" and "eating your own dogfood"). As I get deeper into Nostr I find myself controlling multiple Npubs – to date I have a personal Npub, a brand Npub for a community I am helping, an AI assistant Npub, and various testing Npubs. I need ways to delegate access to those Npubs without handing over the keys, ways to know if they're mentioned, and ways to know if they're posting.
I can build workflows with n8n to solve these issues for myself to start with, and keep expanding from there as new needs come up.
Running n8n with Nostrobots
I am mostly non-technical with a very helpful AI. To set up n8n to work with Nostr and operate these workflows should be possible for anyone with basic technology skills.
- I have a cheap VPS which currently runs my HAVEN Nostr Relay and Albyhub Lightning Node in Docker containers,
- My objective was to set up n8n to run alongside these in a separate Docker container on the same server, install the required nodes, and then build and host my workflows.
Installing n8n
Self-hosting n8n could not be easier. I followed n8n's Docker-Compose installation docs–
- Install Docker and Docker-Compose if you haven't already,
- Create your
docker-compose.yml
and.env
files from the docs, - Create your data folder
sudo docker volume create n8n_data
, - Start your container with
sudo docker compose up -d
, - Your n8n instance should be online at port
5678
.
n8n is free to self-host but does require a license. Enter your credentials into n8n to get your free license key. You should now have access to the Workflow dashboard and can create and host any kind of workflows from there.
Installing Nostrobots
To integrate n8n nicely with Nostr, I used the Nostrobots community node by Ocknamo.
In n8n parlance a "node" enables certain functionality as a step in a workflow e.g. a "set" node sets a variable, a "send email" node sends an email. n8n comes with all kinds of "official" nodes installed by default, and Nostr is not amongst them. However, n8n also comes with a framework for community members to create their own "community" nodes, which is where Nostrobots comes in.
You can only use a community node in a self-hosted n8n instance (which is what you have if you are running in Docker on your own server, but this limitation does prevent you from using n8n's own hosted alternative).
To install a community node, see n8n community node docs. From your workflow dashboard–
- Click the "..." in the bottom left corner beside your username, and click "settings",
- Click "community nodes" in the left sidebar,
- Click "Install",
- Enter the "npm Package Name" which is
n8n-nodes-nostrobots
, - Accept the risks and click "Install",
- Nostrobots is now added to your n8n instance.
Using Nostrobots
Nostrobots gives you nodes to help you build Nostr-integrated workflows–
- Nostr Write – for posting Notes to the Nostr network,
- Nostr Read – for reading Notes from the Nostr network, and
- Nostr Utils – for performing certain conversions you may need (e.g. from bech32 to hex).
Nostrobots has good documentation on each node which focuses on simple use cases.
Each node has a "convenience mode" by default. For example, the "Read" Node by default will fetch Kind 1 notes by a simple filter, in Nostrobots parlance a "Strategy". For example, with Strategy set to "Mention" the node will accept a pubkey and fetch all Kind 1 notes that Mention the pubkey within a time period. This is very good for quick use.
What wasn't clear to me initially (until Ocknamo helped me out) is that advanced use cases are also possible.
Each node also has an advanced mode. For example, the "Read" Node can have "Strategy" set to "RawFilter(advanced)". Now the node will accept json (anything you like that complies with NIP-01). You can use this to query Notes (Kind 1) as above, and also Profiles (Kind 0), Follow Lists (Kind 3), Reactions (Kind 7), Zaps (Kind 9734/9735), and anything else you can think of.
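As a sketch of what that raw json can look like, here is a NIP-01 filter for "Kind 1 notes that mention a given pubkey, recent only" (the pubkey is a placeholder and the since timestamp would normally be computed at runtime):

```json
{
  "kinds": [1],
  "#p": ["<hex-pubkey>"],
  "since": 1735689600,
  "limit": 100
}
```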
Creating and adding workflows
With n8n and Nostrobots installed, you can now create or add any kind of Nostr Workflow Automation.
- Click "Add workflow" to go to the workflow builder screen,
- If you would like to build your own workflow, you can start with adding any node. Click "+" and see what is available. Type "Nostr" to explore the Nostrobots nodes you have added,
- If you would like to add workflows that someone else has built, click "..." in the top right. Then click "import from URL" and paste in the URL of any workflow you would like to use (including the ones I share later in this article).
Nostr Workflow Automations
It's time to build some things!
A simple form to post a note to Nostr
I started very simply. I needed to delegate the ability to post to Npubs that I own in order that a (future) team can test things for me. I don't want to worry about managing or training those people on how to use keys, and I want to revoke access easily.
I needed a basic form with credentials that posted a Note.
For this I can use a very simple workflow–
- A n8n Form node – Creates a form for users to enter the note they wish to post. Allows for the form to be protected by a username and password. This node is the workflow "trigger" so that the workflow runs each time the form is submitted.
- A Set node – Allows me to set some variables, in this case I set the relays that I intend to use. I typically add a Set node immediately following the trigger node, and put all the variables I need in this. It helps to make the workflows easier to update and maintain.
- A Nostr Write node (from Nostrobots) – Writes a Kind-1 note to the Nostr network. It accepts Nostr credentials, the output of the Form node, and the relays from the Set node, and posts the Note to those relays.
Once the workflow is built, you can test it with the testing form URL, and set it to "Active" to use the production form URL. That's it. You can now give posting access to anyone for any Npub. To revoke access, simply change the credentials or set to workflow to "Inactive".
It may also be the world's simplest Nostr client.
You can find the Nostr Form to Post a Note workflow here.
Push notifications on mentions and new notes
One of the things Nostr is not very good at is push notifications. Furthermore I have some unique itches to scratch. I want–
- To make sure I never miss a note addressed to any of my Npubs – For this I want a push notification any time any Nostr user mentions any of my Npubs,
- To make sure I always see all notes from key accounts – For this I need a push notification any time any of my Npubs post any Notes to the network,
- To get these notifications on all of my devices – Not just my phone where my Nostr regular client lives, but also on each of my laptops to suit wherever I am working that day.
I needed to build a Nostr push notifications solution.
To build this workflow I had to string a few ideas together–
- Triggering the node on a schedule – Nostrobots does not include a trigger node. As every workflow starts with a trigger we needed a different method. I elected to run the workflow on a schedule of every 10-minutes. Frequent enough to see Notes while they are hot, but infrequent enough to not burden public relays or get rate-limited,
- Storing a list of Npubs in a Nostr list – I needed a way to store the list of Npubs that trigger my notifications. I initially used an array defined in the workflow, this worked fine. Then I decided to try Nostr lists (NIP-51, kind 30000). By defining my list of Npubs as a list published to Nostr I can control my list from within a Nostr client (e.g. Listr.lol or Nostrudel.ninja). Not only does this "just work", but because it's based on Nostr lists automagically Amethyst client allows me to browse that list as a Feed, and everyone I add gets notified in their Mentions,
- Using specific relays – I needed to query the right relays, including my own HAVEN relay inbox for notes addressed to me, and wss://purplepag.es for Nostr profile metadata,
- Querying Nostr events (with Nostrobots) – I needed to make use of many different Nostr queries and use quite a wide range of what Nostrobots can do–
- I read the EventID of my Kind 30000 list, to return the desired pubkeys,
- For notifications on mentions, I read all Kind 1 notes that mention that pubkey,
- For notifications on new notes, I read all Kind 1 notes published by that pubkey,
- Where there are notes, I read the Kind 0 profile metadata event of that pubkey to get the displayName of the relevant Npub,
- I transform the EventID into a Nevent to help clients find it.
- Using the Nostr URI – As I did with my NFC card article, I created a link with the nostr: URI prefix so that my phone's native client opens the link by default,
- Push notifications solution – I needed a push notifications solution. I found many with n8n integrations and chose to go with Pushover which supports all my devices, has a free trial, and is unfairly cheap with a $5-per-device perpetual license.
Once the workflow was built, lists published, and Pushover installed on my phone, I was fully set up with push notifications on Nostr. I have used these workflows for several weeks now and made various tweaks as I went. They are feeling robust and I'd welcome you to give them a go.
You can find the Nostr Push Notification If Mentioned here and If Posts a Note here.
In speaking with other Nostr users while I was building this, there are all kind of other needs for push notifications too – like on replies to a certain bookmarked note, or when a followed Npub starts streaming on zap.stream. These are all possible.
Use my workflows
I have open sourced all my workflows at my Github with MIT license and tried to write complete docs, so that you can import them into your n8n and configure them for your own use.
To import any of my workflows–
- Click on the workflow of your choice, e.g. "Nostr_Push_Notify_If_Mentioned.json",
- Click on the "raw" button to view the raw JSON, ex any Github page layout,
- Copy that URL,
- Enter that URL in the "import from URL" dialog mentioned above.
To configure them–
- Prerequisites, credentials, and variables are all stated,
- In general any variables required are entered into a Set Node that follows the trigger node,
- Pushover has some extra setup but is very straightforward and documented in the workflow.
What next?
Over my first four blogs I explored creating a good Nostr setup with Vanity Npub, Lightning Payments, Nostr Addresses at Your Domain, and Personal Nostr Relay.
Then in my latest two blogs I explored different types of interoperability with NFC cards and now n8n Workflow Automation.
Thinking ahead n8n can power any kind of interoperability between Nostr and any other legacy technology solution. On my mind as I write this:
- Further enhancements to posting and delegating solutions and forms (enhanced UI or different note kinds),
- Automated or scheduled posting (such as auto-liking everything Lyn Alden posts),
- Further enhancements to push notifications, on new and different types of events (such as notifying me when I get a new follower, on replies to certain posts, or when a user starts streaming),
- All kinds of bridges, such as bridging notes to and from Telegram, Slack, or Campfire. Or bridging RSS or other event feeds to Nostr,
- All kinds of other automation (such as BlackCoffee controlling a coffee machine),
- All kinds of AI Assistants and Agents,
In fact I have already released an open source workflow for an AI Assistant, and will share more about that in my next blog.
Please be sure to let me know if you think there's another Nostr topic you'd like to see me tackle.
GM Nostr.
-
@ b0a838f2:34ed3f19
2025-05-23 17:56:09- bewCloud - File sharing + sync, notes, and photos (alternative to Nextcloud and ownCloud's RSS reader). (Source Code, Clients)
AGPL-3.0
Docker
- Git Annex - File synchronization between computers, servers, external drives. (Source Code)
GPL-3.0
Haskell
- Kinto - Minimalist JSON storage service with synchronisation and sharing abilities. (Source Code)
Apache-2.0
Python
- Nextcloud - Access and share your files, calendars, contacts, mail and more from any device, on your terms. (Demo, Source Code)
AGPL-3.0
PHP/deb
- OpenSSH SFTP server - Secure File Transfer Program. (Source Code)
BSD-2-Clause
C/deb
- ownCloud - All-in-one solution for saving, synchronizing, viewing, editing and sharing files, calendars, address books and more. (Source Code, Clients)
AGPL-3.0
PHP/Docker/deb
- Peergos - Secure and private space online where you can store, share and view your photos, videos, music and documents. Also includes a calendar, news feed, task lists, chat and email client. (Source Code)
AGPL-3.0
Java
- Puter - Web-based operating system designed to be feature-rich, exceptionally fast, and highly extensible. (Demo, Source Code)
AGPL-3.0
Nodejs/Docker
- Pydio - Turn any web server into a powerful file management system and an alternative to mainstream cloud storage providers. (Demo, Source Code)
AGPL-3.0
Go
- Samba - Samba is the standard Windows interoperability suite of programs for Linux and Unix. It provides secure, stable and fast file and print services for all clients using the SMB/CIFS protocol. (Source Code)
GPL-3.0
C
- Seafile - File hosting and sharing solution primary for teams and organizations. (Source Code)
GPL-2.0/GPL-3.0/AGPL-3.0/Apache-2.0
C
- Syncthing - Syncthing is an open source peer-to-peer file synchronisation tool. (Source Code)
MPL-2.0
Go/Docker/deb
- Unison - Unison is a file-synchronization tool for OSX, Unix, and Windows. (Source Code)
GPL-3.0
deb/OCaml
-
@ f0c7506b:9ead75b8
2024-12-08 09:05:13To live is the right of the strong alone.
The strong press onward and the ranks thin out. But a handful of great, strong, godlike people, with sunny and bright eyes, will reach that new, that promised land. Perhaps only after thousands of years. And with strong, muscular hands made to rule, they will build a kingdom upon the dead of the sick, the weak and the crippled. A kingdom!
What I seek is not people themselves, but their voices.
The crowd, its feelings dulled and stuck in its various notions, can never be the carrier of progress; only a single person, a great person, at whom the crowd stares with spite and hatred out of the musty instinct of its own smallness, can walk the road shown by his will, sparing no one, with a divine power and a smile of victory.
Our kind, too, is far from forming the peak of the endless pyramid of becoming. We have not reached perfection either. We are not yet mature.
Poets heap praise upon love; it is certainly true that love is a powerful thing. Some say love is a ray of the sun that illuminates and transfigures a person; others say it carries within it a poison that drags a person into intoxication. Indeed, its effects resemble those of the laughing gas a physician gives a patient trembling with fear before a heavy operation: it makes the patient forget the pain raging inside.
What matters is to live, at least once in life, through a sacred spring; a spring that fills the heart with enough light and radiance to gild all the days that follow.
This thing called life is the product of poor workmanship, something made for amateurs. What one endures for the sake of this wretched life!
There is only one thing to which he remains faithful, one thing that is his alone: his solitude.
If you walk among the high, silent dunes behind the canopied wicker chairs on the beach, you cannot see the people under the canopies; but you hear one calling to another, another chattering, yet another laughing, and you understand at once: this person is such and such. In every laugh you sense that they love life, that they carry a great longing or pain in their breast, and that this pain makes their voice tearful.
-
@ e31e84c4:77bbabc0
2024-12-02 10:44:07Bitcoin and Fixed Income was Written By Wyatt O’Rourke. If you enjoyed this article then support his writing, directly, by donating to his lightning wallet: ultrahusky3@primal.net
Fiduciary duty is the obligation to act in the client’s best interests at all times, prioritizing their needs above the advisor’s own, ensuring honesty, transparency, and avoiding conflicts of interest in all recommendations and actions.
This is something all advisors in the BFAN take very seriously; after all, we are legally required to do so. For the average advisor this is a fairly easy box to check. All you essentially have to do is have someone take a 5-minute risk assessment, fill out an investment policy statement, and then throw them in the proverbial 60/40 portfolio. You have thousands of investment options to choose from and you can reasonably explain how your client is theoretically insulated from any move in the \~markets\~. From the traditional financial advisor perspective, you could justify nearly anything by putting a client into this type of portfolio. All your bases were pretty much covered from return profile, regulatory, compliance, investment options, etc. It was just too easy. It became the household standard and now a meme.
As almost every real bitcoiner knows, the 60/40 portfolio is moving into psyop territory, and many financial advisors get clowned on for defending this relic on bitcoin twitter. I’m going to specifically poke fun at the ‘40’ part of this portfolio.
The ‘40’ represents fixed income, defined as…
An investment type that provides regular, set interest payments, such as bonds or treasury securities, and returns the principal at maturity. It’s generally considered a lower-risk asset class, used to generate stable income and preserve capital.
Historically, this part of the portfolio was meant to weather the volatility in the equity markets and represent the “safe” investments. Typically, some sort of bond.
First and foremost, the fixed income section is most commonly constructed with U.S. Debt. There are a couple main reasons for this. Most financial professionals believe the same fairy tale that U.S. Debt is “risk free” (lol). U.S. debt is also one of the largest and most liquid assets in the market which comes with a lot of benefits.
There are many brilliant bitcoiners in finance and economics that have sounded the alarm on the U.S. debt ticking time bomb. I highly recommend readers explore the work of Greg Foss, Lawrence Lepard, Lyn Alden, and Saifedean Ammous. My very high-level recap of their analysis:
- A bond is a contract in which Party A (the borrower) agrees to repay Party B (the lender) their principal plus interest over time.
- The U.S. government issues bonds (Treasury securities) to finance its operations after tax revenues have been exhausted.
- These are traditionally viewed as "risk-free" due to the government's historical reliability in repaying its debts and the strength of the U.S. economy.
- U.S. bonds are seen as safe because the government has control over the dollar (world reserve asset) and, until recently (20 some odd years), enjoyed broad confidence that it would always honor its debts.
- This perception has contributed to high global demand for U.S. debt but, that is quickly deteriorating.
- The current debt situation raises concerns about sustainability.
- The U.S. has substantial obligations, and without sufficient productivity growth, increasing debt may lead to a cycle where borrowing to cover interest leads to more debt.
- This could result in more reliance on money creation (printing), which can drive inflation and further debt burdens.
In the words of Lyn Alden “Nothing stops this train”
Those obligations are what make up the 40% of fixed income in most portfolios. So essentially you are giving money to one of the worst capital allocators in the world (U.S. Gov't) and getting paid back with printed money.
As someone who takes their fiduciary responsibility seriously and understands the debt situation we just reviewed, I think it's borderline negligent to put someone into a classic 60% (equities) / 40% (fixed income) portfolio without serious scrutiny of the client's financial situation and the options available to them. I certainly have my qualms with equities at times, but overall, they are more palatable than the fixed income portion of the portfolio. I don't like it either, but the money is broken and the unit of account for nearly every equity or fixed income instrument (USD) is fraudulent. It's a papier-mâché façade that is quite literally propped up by the money printer.
To be as charitable as I can for a moment – it wasn't always this way. The U.S. Dollar used to be sound money, we used to have government surpluses instead of mathematically certain deficits, the U.S. Federal Government didn't use to have a money printing addiction, and pre-bitcoin the 60/40 portfolio used to be a quality portfolio management strategy. Those times are gone.
Now the fun part. How does bitcoin fix this?
Bitcoin fixes this indirectly. Investment criteria still change with risk tolerance, age, goals, etc., and a client may still have a need for "fixed income" in the most literal definition – low-risk yield. Now you may be thinking that yield is a bad word in bitcoin land; you're not wrong, so stay with me. Perpetual-motion-machine crypto yield is fake and is largely where many crypto scams originate. However, that doesn't mean yield in the classic finance sense does not exist in bitcoin – it very literally does. Fortunately for us bitcoiners there are many other smart, driven, and enterprising bitcoiners who understand this problem and are doing something to address it. These individuals are pioneering new possibilities in bitcoin and finance, specifically when it comes to fixed income.
Here are some new developments –
Private Credit Funds – The Build Asset Management Secured Income Fund I is a private credit fund created by Build Asset Management. This fund primarily invests in bitcoin-backed, collateralized business loans originated by Unchained, with a secured structure involving a multi-signature, over-collateralized setup for risk management. Unchained originates loans and sells them to Build, which pools them into the fund, enabling investors to share in the interest income.
Dynamics
- Loan Terms: Unchained issues loans at interest rates around 14%, secured with a 2/3 multi-signature vault backed by a 40% loan-to-value (LTV) ratio.
- Fund Mechanics: Build buys these loans from Unchained, thus providing liquidity to Unchained for further loan originations, while Build manages interest payments to investors in the fund.
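To make the loan-to-value figure concrete (illustrative numbers only, not taken from the fund documents): at a 40% LTV, a $100,000 loan must be backed by $100,000 / 0.40 = $250,000 of bitcoin collateral, and at roughly 14% interest it accrues about $14,000 per year.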
Pros
- The fund offers a unique way to earn income via bitcoin-collateralized debt, with protection against rehypothecation and strong security measures, making it attractive for investors seeking exposure to fixed income with bitcoin.
Cons
- The fund is only available to accredited investors, which is a regulatory standard for private credit funds like this.
Corporate Bonds – MicroStrategy Inc. (MSTR), a business intelligence company, has leveraged its corporate structure to issue bonds specifically to acquire bitcoin as a reserve asset. This approach allows investors to indirectly gain exposure to bitcoin’s potential upside while receiving interest payments on their bond investments. Some other publicly traded companies have also adopted this strategy, but for the sake of this article we will focus on MSTR as they are the biggest and most vocal issuer.
Dynamics
- Issuance: MicroStrategy has issued senior secured notes in multiple offerings, with terms allowing the company to use the proceeds to purchase bitcoin.
- Interest Rates: The bonds typically carry high-yield interest rates, averaging around 6-8% APR, depending on the specific issuance and market conditions at the time of issuance.
- Maturity: The bonds have varying maturities, with most structured for multi-year terms, offering investors medium-term exposure to bitcoin's value trajectory through MicroStrategy's holdings.
Pros
- Indirect Bitcoin exposure with income provides a unique opportunity for investors seeking income from bitcoin-backed debt.
- Bonds issued by MicroStrategy offer relatively high interest rates, appealing for fixed-income investors attracted to the higher risk/reward scenarios.
Cons
- There are credit risks tied to MicroStrategy's financial health and bitcoin's performance. A significant drop in bitcoin prices could strain the company's ability to service debt, increasing credit risk.
- Availability: These bonds are primarily accessible to institutional investors and accredited investors, limiting availability for retail investors.
Interest Payable in Bitcoin – River has introduced an innovative product, bitcoin Interest on Cash, allowing clients to earn interest on their U.S. dollar deposits, with the interest paid in bitcoin.
Dynamics
- Interest Payment: Clients earn an annual interest rate of 3.8% on their cash deposits. The accrued interest is converted to Bitcoin daily and paid out monthly, enabling clients to accumulate Bitcoin over time.
- Security and Accessibility: Cash deposits are insured up to $250,000 through River's banking partner, Lead Bank, a member of the FDIC. All Bitcoin holdings are maintained in full reserve custody, ensuring that client assets are not lent or leveraged.
Pros
- There are no hidden fees or minimum balance requirements, and clients can withdraw their cash at any time.
- The 3.8% interest rate provides a predictable income stream, akin to traditional fixed-income investments.
Cons
- While the interest rate is fixed, the value of the Bitcoin received as interest can fluctuate, introducing potential variability in the investment's overall return.
- Interest rate payments are on the lower side.
Admittedly, this is a very small list, however, these types of investments are growing more numerous and meaningful. The reality is the existing options aren’t numerous enough to service every client that has a need for fixed income exposure. I challenge advisors to explore innovative options for fixed income exposure outside of sovereign debt, as that is most certainly a road to nowhere. It is my wholehearted belief and call to action that we need more options to help clients across the risk and capital allocation spectrum access a sound money standard.
Additional Resources
- River: The future of saving is here: Earn 3.8% on cash. Paid in Bitcoin.
- MicroStrategy: MicroStrategy Announces Pricing of Offering of Convertible Senior Notes
Bitcoin and Fixed Income was Written By Wyatt O’Rourke. If you enjoyed this article then support his writing, directly, by donating to his lightning wallet: ultrahusky3@primal.net
-
-
@ 3bf0c63f:aefa459d
2024-09-06 12:49:46Nostr: a quick introduction, attempt #2
Nostr doesn't subscribe to any ideals of "free speech" as these belong to the realm of politics and assume a big powerful government that enforces a common rule upon everybody else.
Nostr instead is much simpler, it simply says that servers are private property and establishes a generalized framework for people to connect to all these servers, creating a true free market in the process. In other words, Nostr is the public road that each market participant can use to build their own store or visit others and use their services.
(Of course a road is never truly public, in normal cases it's ran by the government, in this case it relies upon the previous existence of the internet with all its quirks and chaos plus a hand of government control, but none of that matters for this explanation).
More concretely speaking, Nostr is just a set of definitions of the formats of the data that can be passed between participants and their expected order, i.e. messages between clients (i.e. the program that runs on a user computer) and relays (i.e. the program that runs on a publicly accessible computer, a "server", generally with a domain-name associated) over a type of TCP connection (WebSocket) with cryptographic signatures. This is what is called a "protocol" in this context, and upon that simple base multiple kinds of sub-protocols can be added, like a protocol for "public-square style microblogging", "semi-closed group chat" or, I don't know, "recipe sharing and feedback".
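As a rough illustration of what those messages look like on the wire (abbreviated placeholders; the authoritative definitions live in NIP-01), a client publishes and subscribes like this, and the relay answers each REQ with matching EVENT messages followed by an EOSE:

```json
["EVENT", {"id": "<hash>", "pubkey": "<key>", "created_at": 1700000000, "kind": 1, "tags": [], "content": "hello relay", "sig": "<signature>"}]
["REQ", "sub1", {"kinds": [1], "limit": 10}]
["CLOSE", "sub1"]
```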
-
@ b0a838f2:34ed3f19
2025-05-23 17:55:49- Bubo Reader - Irrationally minimal RSS feed reader. (Demo)
MIT
Nodejs
- CommaFeed - Google Reader inspired self-hosted RSS reader. (Demo, Source Code)
Apache-2.0
Java/Docker
- FeedCord
⚠
- Simple, lightweight & customizable RSS News Feed for your Discord Server.MIT
Docker
- Feedpushr - Powerful RSS aggregator, able to transform and send articles to many outputs. Single binary, extensible with plugins.
GPL-3.0
Go/Docker
- Feeds Fun - News reader with tags, scoring, and AI. (Source Code)
BSD-3-Clause
Python
- FreshRSS - Self-hostable RSS feed aggregator. (Demo, Source Code)
AGPL-3.0
PHP/Docker
- Fusion - Lightweight RSS aggregator and reader.
MIT
Go/Docker
- JARR - JARR (Just Another RSS Reader) is a web-based news aggregator and reader (fork of Newspipe). (Demo, Source Code)
AGPL-3.0
Docker/Python
- Kriss Feed - Simple and smart (or stupid) feed reader.
CC0-1.0
PHP
- Leed - Leed (for Light Feed) is a Free and minimalist RSS aggregator.
AGPL-3.0
PHP
- Miniflux - Minimalist news reader. (Source Code)
Apache-2.0
Go/deb/Docker
- NewsBlur - Personal news reader that brings people together to talk about the world. A new sound of an old instrument. (Source Code)
MIT
Python
- Newspipe - Web news reader. (Demo)
AGPL-3.0
Python
- Precis - Extensibility-oriented RSS reader that can use LLMs (including local LLMs) to summarize RSS entries with built-in notification support.
MIT
Python/Docker
- reader - A Python feed reader web app and library (so you can use it to build your own), with only standard library and pure-Python dependencies.
BSD-3-Clause
Python
- Readflow - Lightweight news reader with modern interface and features: full-text search, automatic categorization, archiving, offline support, notifications... (Source Code)
MIT
Go/Docker
- RSS-Bridge - Generate RSS/ATOM feeds for websites which don't have one.
Unlicense
PHP/Docker
- RSS Monster - An easy to use web-based RSS aggregator and reader compatible with the Fever API (alternative to Google Reader).
MIT
PHP
- RSS2EMail - Fetches RSS/Atom-feeds and pushes new Content to any email-receiver, supports OPML.
GPL-2.0
Python/deb
- RSSHub - An easy to use, and extensible RSS feed aggregator, it's capable of generating RSS feeds from pretty much everything ranging from social media to university departments. (Demo, Source Code)
MIT
Nodejs/Docker
- Selfoss - New multipurpose rss reader, live stream, mashup, aggregation web application. (Source Code)
GPL-3.0
PHP
- Stringer - Work-in-progress self-hosted, anti-social RSS reader.
MIT
Ruby
- Tiny Tiny RSS - Open source web-based news feed (RSS/Atom) reader and aggregator. (Demo, Source Code)
GPL-3.0
Docker/PHP
- Yarr - Yarr (yet another rss reader) is a web-based feed aggregator which can be used both as a desktop application and a personal self-hosted server.
MIT
Go
-
@ 3bf0c63f:aefa459d
2024-05-21 12:38:08Bitcoin transactions explained
A transaction is a piece of data that takes inputs and produces outputs. Forget about the blockchain thing, Bitcoin is actually just a big tree of transactions. The blockchain is just a way to keep transactions ordered.
Imagine you have 10 satoshis. That means you have them in an unspent transaction output (UTXO). You want to spend them, so you create a transaction. The transaction should reference unspent outputs as its inputs. Every transaction has an immutable id, so you use that id plus the index of the output (because transactions can have multiple outputs). Then you specify a script that unlocks that transaction and related signatures, then you specify outputs along with a script that locks these outputs.
As you can see, there's this lock/unlocking thing and there are inputs and outputs. Inputs must be unlocked by fulfilling the conditions specified by the person who created the transaction they're in. And outputs must be locked so anyone wanting to spend those outputs will need to unlock them.
For most of the cases locking and unlocking means specifying a public key whose controller (the person who has the corresponding private key) will be able to spend. Other fancy things are possible too, but we can ignore them for now.
Back to the 10 satoshis you want to spend. Since you've successfully referenced 10 satoshis and unlocked them, now you can specify the outputs (this is all done in a single step). You can specify one output of 10 satoshis, two of 5, one of 3 and one of 7, three of 3 and so on. The sum of outputs can't be more than 10. And if the sum of outputs is less than 10 the difference goes to fees. In the first days of Bitcoin you didn't need any fees, but now you do, otherwise your transaction won't be included in any block.
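A tiny worked example of that accounting: if your only input is the 10-satoshi output you just unlocked and you create a 7-satoshi output for the payment plus a 2-satoshi output back to yourself as change, the remaining 10 - (7 + 2) = 1 satoshi is the fee collected by the miner who includes the transaction.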
If you're still interested in transactions maybe you could take a look at this small chapter of that Andreas Antonopoulos book.
If you hate Andreas Antonopoulos because he is a communist shitcoiner or don't want to read more than half a page, go here: https://en.bitcoin.it/wiki/Coin_analogy
-
@ b0a838f2:34ed3f19
2025-05-23 17:55:31- Aimeos - E-commerce framework for building custom online shops, market places and complex B2B applications scaling to billions of items with Laravel. (Demo, Source Code)
LGPL-3.0/MIT
PHP
- Bagisto - Leading Laravel open source e-commerce framework with multi-inventory sources, taxation, localization, dropshipping and more exciting features. (Demo, Source Code)
MIT
PHP
- CoreShop - E-commerce plugin for Pimcore. (Source Code)
GPL-3.0
PHP
- Drupal Commerce - Popular e-commerce module for Drupal CMS, with support for dozens of payment, shipping, and shopping related modules. (Source Code)
GPL-2.0
PHP
- EverShop
⚠
- E-commerce platform with essential commerce features. Modular architecture and fully customizable. (Demo, Source Code)GPL-3.0
Docker/Nodejs
- Litecart
⚠
- Shopping cart in 1 file (with support for payment by card or cryptocurrency).MIT
Go/Docker
- Magento Open Source - Leading provider of open omnichannel innovation. (Source Code)
OSL-3.0
PHP
- MedusaJs - Headless commerce engine that enables developers to create amazing digital commerce experiences. (Demo, Source Code)
MIT
Nodejs
- Microweber - Drag and Drop CMS and online shop. (Source Code)
MIT
PHP
- Open Source POS - Open Source Point of Sale is a web based point of sale system.
MIT
PHP
- OpenCart - Shopping cart solution. (Source Code)
GPL-3.0
PHP
- PrestaShop - Fully scalable e-commerce solution. (Demo, Source Code)
OSL-3.0
PHP
- Pretix - Ticket sales platform for events. (Source Code)
AGPL-3.0
Python/Docker
- s-cart - S-Cart is a free e-commerce website project for individuals and businesses, built on top of Laravel Framework. (Demo, Source Code)
MIT
PHP
- Saleor - Django based open-sourced e-commerce storefront. (Demo, Source Code)
BSD-3-Clause
Docker/Python
- Shopware Community Edition - PHP based open source e-commerce software made in Germany. (Demo, Source Code)
MIT
PHP
- Solidus - A free, open-source ecommerce platform that gives you complete control over your store. (Source Code)
BSD-3-Clause
Ruby/Docker
- Spree Commerce - Spree is a complete, modular & API-driven open source e-commerce solution for Ruby on Rails. (Demo, Source Code)
BSD-3-Clause
Ruby
- Sylius - Symfony2 powered open source full-stack platform for eCommerce. (Demo, Source Code)
MIT
PHP
- Thelia - Thelia is an open source and flexible e-commerce solution. (Demo, Source Code)
LGPL-3.0
PHP
- Vendure - A headless commerce framework. (Demo, Source Code)
MIT
Nodejs
- WooCommerce - WordPress based e-commerce solution. (Source Code)
GPL-3.0
PHP
-
@ 0176967e:1e6f471e
2024-07-28 15:31:13Discover how avatars and pseudonymous identities influence the governance of crypto communities and decentralized organizations (DAOs). In this talk we will focus on how decentralized decision-making works in practice, on creating and managing avatar profiles, and on their role in online reputation systems. You will learn how to build an effective pseudonymous profile, get involved in various crypto projects, and use your activity to earn cryptocurrencies. We will also look at examples of successful projects and strategies that will help you find your way around, and succeed in, the dynamic world of decentralized communities.
-
@ e771af0b:8e8ed66f
2024-09-05 22:14:04I have searched the web for "Find bitcoin block number from historical date" maybe 100 times in my life.
Never again.
You'll need a bitcoin node for this to work. The script is fast.
Install
WoT Install
one liner ```
curl
curl -o fjb.sh https://gist.githubusercontent.com/dskvr/18252c16cf85c06c1ee6cb5ae04a3197/raw/34bad6a35d98501978c8cd0c25b1628db1191cfe/fjb.sh && chmod +x fjb.sh
wget
wget -O fjb.sh https://gist.githubusercontent.com/dskvr/18252c16cf85c06c1ee6cb5ae04a3197/raw/34bad6a35d98501978c8cd0c25b1628db1191cfe/fjb.sh && chmod +x fjb.sh ```
Trust no one.
create file ``` vi fjb.sh
or
nano fjb.sh ```
review, copy and paste into file
```
TIMESTAMP=$1
LOWER=0
UPPER=$(bitcoin-cli getblockcount)

# binary search over block heights, comparing header timestamps
while (( LOWER <= UPPER )); do
  MID=$(( (LOWER + UPPER) / 2 ))
  BLOCKHASH=$(bitcoin-cli getblockhash $MID)
  BLOCKTIME=$(bitcoin-cli getblockheader $BLOCKHASH | jq .time)

  if (( BLOCKTIME < TIMESTAMP )); then
    LOWER=$(( MID + 1 ))
  elif (( BLOCKTIME > TIMESTAMP )); then
    UPPER=$(( MID - 1 ))
  else
    # exact timestamp match: print the block height
    echo "$MID"
    exit 0
  fi
done

echo "$UPPER"
```
give executable permissions
chmod +x ./fjb.sh
Usage
./fjb.sh <timestampSeconds>
It returns the block number only.
Example
$: ./fjb.sh 1668779310
763719
-
@ b0a838f2:34ed3f19
2025-05-23 17:55:13- Evergreen - Highly-scalable software for libraries that helps library patrons find library materials, and helps libraries manage, catalog, and circulate those materials. (Source Code)
GPL-2.0
PLpgSQL
- Koha - Enterprise-class ILS with modules for acquisitions, circulation, cataloging, label printing, offline circulation for when Internet access is not available, and much more. (Demo, Source Code)
GPL-3.0
Perl
- RERO ILS - Large-scale ILS that can be run as a service with consortial features, intended primarily for library networks. Includes most standard modules (circulation, acquisitions, cataloging,...) and a web-based public and professional interface. (Demo, Source Code)
AGPL-3.0
Python/Docker
-
@ 3bf0c63f:aefa459d
2024-03-23 08:57:08Nostr is not decentralized nor censorship-resistant
Peter Todd has been saying this for a long time and all the time I've been thinking he is misunderstanding everything, but I guess a more charitable interpretation is that he is right.
Nostr today is indeed centralized.
Yesterday I published two harmless notes with the exact same content at the same time. In two minutes the notes had a noticeable difference in responses:
The top one was published to wss://nostr.wine, wss://nos.lol, and wss://pyramid.fiatjaf.com. The second was published to the relay where I generally publish all my notes to, wss://pyramid.fiatjaf.com, and that is announced on my NIP-05 file and on my NIP-65 relay list.
A few minutes later I published that screenshot again in two identical notes to the same sets of relays, asking if people understood the implications. The difference in quantity of responses can still be seen today:
These results are skewed now by the fact that the two notes got rebroadcasted to multiple relays after some time, but the fundamental point remains.
What happened was that a huge lot more of people saw the first note compared to the second, and if Nostr was really censorship-resistant that shouldn't have happened at all.
Some people implied in the comments, with an air of obviousness, that publishing the note to "more relays" should have predictably resulted in more replies, which, again, shouldn't be the case if Nostr is really censorship-resistant.
What happens is that most people who engaged with the note are following me, in the sense that they have instructed their clients to fetch my notes on their behalf and present them in the UI, and clients are failing to do that despite me making it clear in multiple ways that my notes are to be found on wss://pyramid.fiatjaf.com.
If we were talking not about me, but about some public figure that was being censored by the State and got banned (or shadowbanned) by the 3 biggest public relays, the sad reality would be that the person would immediately get his reach reduced to ~10% of what they had before. This is not at all unlike what happened to dozens of personalities that were banned from the corporate social media platforms and then moved to other platforms -- how many of their original followers switched to these other platforms? Probably some small percentage close to 10%. In that sense Nostr today is similar to what we had before.
Peter Todd is right that if the way Nostr works is that you just subscribe to a small set of relays and expect to get everything from them then it tends to get very centralized very fast, and this is the reality today.
Peter Todd is wrong that Nostr is inherently centralized or that it needs a protocol change to become what it has always purported to be. He is in fact wrong today, because what is written above is not valid for all clients of today, and if we drive in the right direction we can successfully make Peter Todd be more and more wrong as time passes, instead of the contrary.
See also:
-
@ 0176967e:1e6f471e
2024-07-28 09:16:10Jan Kolčák comes from central Slovakia and performs under the artist name Deepologic. He has been making music for more than 10 years. He started out as a DJ who loved mixing club music in the deep-tech and afrohouse styles. He always felt drawn to making his own music, so he began educating himself in electronic music production. Eventually he released his first EP, titled "Rezonancie". Learning is a lifelong process for him, so he keeps improving his skills in sound and composition so that his tracks hold up both at home and in the club.
In 2023 he founded his own label, EarsDeep Records, where he gives opportunities to up-and-coming producers. His label is also supported by established names of the Slovak alternative electronic scene. His priority is freedom and refusing to be put into boxes. As a classic deep house track puts it: "We are all equal in the house of deep." Hand in hand with freedom goes a love of new technologies, Bitcoin, and the ability to keep an overview, distance, and anonymity in the digital world.
He currently continues to produce his own music, DJs, and runs a podcast where he publishes his mixed sets. At the Lunarpunk festival he will play a DJ set made up of his own productions as well as tracks close to his heart.
Podcast Bandcamp Punk Nostr website or nprofile1qythwumn8ghj7un9d3shjtnwdaehgu3wvfskuep0qy88wumn8ghj7mn0wvhxcmmv9uq3xamnwvaz7tmsw4e8qmr9wpskwtn9wvhsz9thwden5te0wfjkccte9ejxzmt4wvhxjme0qyg8wumn8ghj7mn0wd68ytnddakj7qghwaehxw309aex2mrp0yh8qunfd4skctnwv46z7qpqguvns4ld8k2f3sugel055w7eq8zeewq7mp6w2stpnt6j75z60z3swy7h05
-
@ c1e9ab3a:9cb56b43
2025-05-18 04:14:48Abstract
This document proposes a novel architecture that decouples the peer-to-peer (P2P) communication layer from the Bitcoin protocol and replaces or augments it with the Nostr protocol. The goal is to improve censorship resistance, performance, modularity, and maintainability by migrating transaction propagation and block distribution to the Nostr relay network.
Introduction
Bitcoin’s current architecture relies heavily on its P2P network to propagate transactions and blocks. While robust, it has limitations in terms of flexibility, scalability, and censorship resistance in certain environments. Nostr, a decentralized event-publishing protocol, offers a multi-star topology and a censorship-resistant infrastructure for message relay.
This proposal outlines how Bitcoin communication could be ported to Nostr while maintaining consensus and verification through standard Bitcoin clients.
Motivation
- Enhanced Censorship Resistance: Nostr’s architecture enables better relay redundancy and obfuscation of transaction origin.
- Simplified Lightweight Nodes: Removing the full P2P stack allows for lightweight nodes that only verify blockchain data and communicate over Nostr.
- Architectural Modularity: Clean separation between validation and communication enables easier auditing, upgrades, and parallel innovation.
- Faster Propagation: Nostr’s multi-star network may provide faster propagation of transactions and blocks compared to the mesh-like Bitcoin P2P network.
Architecture Overview
Components
-
Bitcoin Minimal Node (BMN):
- Verifies blockchain and block validity.
- Maintains UTXO set and handles mempool logic.
- Connects to Nostr relays instead of P2P Bitcoin peers.
-
Bridge Node:
- Bridges Bitcoin P2P traffic to and from Nostr relays.
- Posts new transactions and blocks to Nostr.
- Downloads mempool content and block headers from Nostr.
-
Nostr Relays:
- Accept Bitcoin-specific event kinds (transactions and blocks).
- Store mempool entries and block messages.
- Optionally broadcast fee estimation summaries and tipsets.
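To make the Bridge Node's role more concrete, here is a rough sketch of its relay-to-bitcoind direction, assuming the hypothetical kind numbers proposed in the next section, the nostr-tools library, and a locally reachable bitcoind JSON-RPC endpoint; relay URLs and credentials are placeholders, not part of the proposal.

```typescript
// Rough sketch of a Bridge Node: listen for Bitcoin transaction events on Nostr
// and hand them to a local bitcoind over JSON-RPC.
import { SimplePool } from "nostr-tools";

const pool = new SimplePool();
const relays = ["wss://bitcoin-relay.example.org"]; // placeholder relay set

pool.subscribeMany(relays, [{ kinds: [210000] }], {
  onevent: async (event) => {
    // the raw transaction is assumed to be hex-encoded in `content`
    await fetch("http://127.0.0.1:8332/", {
      method: "POST",
      headers: { Authorization: "Basic " + btoa("rpcuser:rpcpass") }, // placeholder credentials
      body: JSON.stringify({
        jsonrpc: "1.0",
        id: "bridge",
        method: "sendrawtransaction",
        params: [event.content],
      }),
    });
  },
});
```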
Event Format
Proposed reserved Nostr `kind` numbers for Bitcoin content (NIP/BIP TBD):

| Nostr Kind | Purpose                |
|------------|------------------------|
| 210000     | Bitcoin Transaction    |
| 210001     | Bitcoin Block Header   |
| 210002     | Bitcoin Block          |
| 210003     | Mempool Fee Estimates  |
| 210004     | Filter/UTXO summary    |
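Since the proposal does not yet pin down tags or content encoding, a kind 210000 event might look something like the following sketch; the tag names and the hex-in-content encoding are assumptions for illustration only.

```typescript
// Hypothetical shape of a kind 210000 event carrying a raw Bitcoin transaction.
const btcTxEvent = {
  kind: 210000,
  created_at: Math.floor(Date.now() / 1000),
  content: "0200000001...", // raw transaction, hex-encoded (assumption)
  tags: [
    ["txid", "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b"], // assumed tag
    ["network", "mainnet"], // assumed tag
  ],
  pubkey: "<wallet or bridge pubkey>",
  id: "<event id>",
  sig: "<signature>",
};
```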
Transaction Lifecycle
- Wallet creates a Bitcoin transaction.
- Wallet sends it to a set of configured Nostr relays.
- Relays accept and cache the transaction (based on fee policies).
- Mining nodes or bridge nodes fetch mempool contents from Nostr.
- Once mined, a block is submitted over Nostr.
- Nodes confirm inclusion and update their UTXO set.
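A minimal sketch of steps 1-3 of this lifecycle, again assuming the hypothetical kind 210000 format and the nostr-tools library; relay URLs and keys are placeholders.

```typescript
import { SimplePool, finalizeEvent, generateSecretKey } from "nostr-tools";

const relays = ["wss://bitcoin-relay.example.org", "wss://relay.example.com"]; // placeholders
const sk = generateSecretKey(); // in practice, the wallet's long-lived Nostr key

const rawTxHex = "0200000001..."; // hex-encoded transaction produced by the wallet (step 1)
const unsigned = {
  kind: 210000, // proposed "Bitcoin Transaction" kind
  created_at: Math.floor(Date.now() / 1000),
  content: rawTxHex,
  tags: [],
};

// step 2: sign and send the event to the configured relays
const event = finalizeEvent(unsigned, sk);
await Promise.any(new SimplePool().publish(relays, event));
```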
Security Considerations
- Sybil Resistance: Consensus remains based on proof-of-work. The communication path (Nostr) is not involved in consensus.
- Relay Discoverability: Optionally bootstrap via DNS, Bitcoin P2P, or signed relay lists.
- Spam Protection: Relay-side policy, rate limiting, proof-of-work challenges, or Lightning payments.
- Block Authenticity: Nodes must verify all received blocks and reject invalid chains.
Compatibility and Migration
- Fully compatible with current Bitcoin consensus rules.
- Bridge nodes preserve interoperability with legacy full nodes.
- Nodes can run in hybrid mode, fetching from both P2P and Nostr.
Future Work
- Integration with watch-only wallets and SPV clients using verified headers via Nostr.
- Use of Nostr’s social graph for partial trust assumptions and relay reputation.
- Dynamic relay discovery using Nostr itself (relay list events).
Conclusion
This proposal lays out a new architecture for Bitcoin communication using Nostr to replace or augment the P2P network. This improves decentralization, censorship resistance, modularity, and speed, while preserving consensus integrity. It encourages innovation by enabling smaller, purpose-built Bitcoin nodes and offloading networking complexity.
This document may become both a Bitcoin Improvement Proposal (BIP-XXX) and a Nostr Improvement Proposal (NIP-XXX). Event kind range reserved: 210000–219999.
-
@ b0a838f2:34ed3f19
2025-05-23 17:54:53- DSpace - Turnkey repository application providing durable access to digital resources. (Source Code)
BSD-3-Clause
Java
- EPrints - Digital document management system with a flexible metadata and workflow model primarily aimed at academic institutions. (Demo, Source Code)
GPL-3.0
Perl
- Fedora Commons Repository - Robust and modular repository system for the management and dissemination of digital content especially suited for digital libraries and archives, both for access and preservation. (Source Code)
Apache-2.0
Java
- InvenioRDM - Highly scalable turn-key research data management platform with a beautiful user experience. (Demo, Source Code, Clients)
MIT
Python
- Islandora - Drupal module for browsing and managing Fedora-based digital repositories. (Demo, Source Code)
GPL-3.0
PHP
- Samvera Hyrax - Front-end for the Samvera framework, which itself is a Ruby on Rails application for browsing and managing Fedora-based digital repositories. (Source Code)
Apache-2.0
Ruby
-
@ e771af0b:8e8ed66f
2024-04-19 22:29:43Have you ever seen a relay and out of curiosity visited the https canonical of a relay by swapping out the `wss` with `https`? I sure have, and I believe others have too. When I ran `https://nostr.sandwich.farm` in late 2022/2023, I had thousands of hits to my relay's https canonical. Since then, I've dreamed of improving the look and feel of these generic default landing pages.

With the release of myrelay.page v0.2, relays can now host their own customizable micro-client at their https canonical.
Transform your relay's landing page from this:
or this:
to something like this:
I say "something like this" because each page is customizable at runtime via the page itself.
In a nutshell
myrelay.page is a self-configuring, Client-Side Rendered (CSR) micro-client specifically built to be hosted at relay canonicals, customizable at runtime via NIP-78. Check out a live example.
Features:
- Dark or light theme
- Join relay
- Relay operator profile and feed
- Zap relay operator
- See people you follow who are on the relay
- Customizable by the relay operator
- Enable/disable blocks
- Sort blocks
- Add HTML blocks
- Add image blocks
- Add markdown blocks
- Add feed blocks, with two layouts (grid/list) and customizable filters.
You can find a full list of features complete and todo here
Why I created myrelay.page
For several different reasons.
Firstly, the default, bland relay pages always seemed like a missed opportunity. I jotted down an idea to build a relay micro-client in early January 2023, but never had the time to start it.
Next, I've been ramping up the refactor of nostr.watch and first need to catch up on client-side technologies and validate a few of my ideas. To do this, I have been conducting short research & development projects to prepare and validate ideas before integrating them into an app I intend to support long-term. One of those R&D projects is myrelay.page.
Additionally, I wanted to explore NIP-78 a bit more, a NIP that came into fruition after a conversation I had with @fiatjaf on February 23rd, 2023. It stemmed from the desire to store application-specific data for app customization. I have seen clients use NIP-78, but from what I've seen, their implementations are limited and do not demonstrate the full potential of NIP-78. There's more on NIP-78 towards the end of this article
The convergence of these needs and ideas, in addition to having an itch I needed to scratch, resulted in the creation of myrelay.page.
*Could be wrong, please let me know in the comments if you have examples of nostr clients that utilize NIP-78 for propagating customizations to other visitors.
Editor Flow
Now I'm going to give you a brief example of the Editor Flow on myrelay.page. There's a lot that isn't covered here, but I want to be as brief as possible.
Note: myrelay.page is alpha, there are bugs, quality of life issues and things are far from perfect.
Login
Presently, myrelay.page only supports NIP-07 authentication, but other authentication methods will be implemented at a later date.
In order to customize your page, you need to have a valid NIP-11 document that provides a valid hex `pubkey` value that is the same as the key you use to login.

Click "Edit"
Add a block
For brevity, I'm going to add a markdown block
Configure the block
Add a title to the block and a sentence with markdown syntax.
Publish the configuration
Click publish and confirm the event, once it's been published to relays the page will refresh.
Note: Again, it's alpha, so if the page doesn't refresh after a few seconds, the publish probably failed. Press publish until it refreshes. Error handling here will improve with time.
Confirm state persistence
After reload, you should see your block persisted. Anyone who visits your page will see your newly configured page. Big caveat: Given the blessing of relays who store your configuration note, if your configuration cannot be found or you cannot connect to your relays, visitors will only see your relay's NIP-11.
Interested?
myrelay.page is alpha and only has two releases, so if you want to be an early adopter, you'll need the skillset and patience of an early adopter. That said, as long as you have some basic development and sysadmin skills as well as understand your reverse-proxy of choice, it's a quick, easy and low-risk side project that can be completed in about 20 minutes.
1. Build
`yarn build` or `npm run build` or `pnpm run build` (note: I had issues with pnpm and cannot guarantee they are resolved!)

2. Deploy

Move the contents of the `build` folder to your relay server (or another server that you can reverse-proxy to from your relay)

3. Update your reverse-proxy configuration

You'll need to split your relay traffic from the http traffic; this ranges from easy to difficult, depending on your server of choice.
- caddy: By far the easiest, see an example configuration for strfry here (easily adapted by those with experience to other relay software)
- nginx: A little more stubborn, here's the most recent nginx config I got to work. You'll need to serve the static site from an internal port (`8080` in the aforementioned nginx conf)
- haproxy: Should be easier than nginx or maybe even caddy, haven't tried yet.
- no reverse-proxy: shrugs

If any of that's over your head, I'll be providing detailed guides for various deployment shapes within the next few weeks.
Exploring NIP-78
One of the special things about NIP-78 is that it is application specific, meaning, you don't need to conform to any existing NIP to make magic happen. Granted there are limits to this, as interoperability reigns supreme on nostr. However, there are many use cases where interoperability is not particularly desirable nor beneficial. It doesn't change the care needed to craft events, but it does enable a bunch of unique opportunities.
- A nostr client that is fully configurable and customized by the user.
- A nostr powered CMS that can be edited entirely on the client-side.
- Any use case where an application has special functionality or complex data structures that present no benefit in the context of interoperability (since they are "Application Specific").
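As a rough illustration of what such an application-specific event can look like, here is a hypothetical kind 30078 (NIP-78) configuration event; the `d` tag value and the content schema are made up for the example and are not the actual format myrelay.page uses.

```typescript
// Sketch of an app-specific NIP-78 event storing a page configuration.
const pageConfig = {
  kind: 30078,
  created_at: Math.floor(Date.now() / 1000),
  tags: [["d", "myrelay.page/config"]], // addressable, application-specific identifier (assumed)
  content: JSON.stringify({
    theme: "dark",
    blocks: [
      { type: "markdown", title: "Welcome", body: "Hello from **my relay**" },
      { type: "feed", layout: "grid", filter: { kinds: [1], limit: 20 } },
    ],
  }),
};
```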
Final thoughts
I was surprised at how quickly I was able to get myrelay.page customizable and loading within an acceptable timeframe; NIP-11, the operator's NIP-65 and the myrelay.page NIP-78 events all need to be fetched before the page is hydrated! While there is much to do around optimization, progressive page-loading, and general functionality, I'm very happy with the outcome of this short side project.

I'll be shifting my focus over to another micro-app to validate a few concepts, and then on to the next nostr.watch. Rebuilding nostr.watch has been a high-priority item since shortly after Jack lit a flame under nostr in late 2022, but due to personal circumstances in 2023, I was unable to tackle it. Thanks to @opensats I am able to realize my ideas and explore ideas that have been keeping me up at night for a year or more.

Also, if you're a relay developer and are curious about making it easier for developers to deploy myrelay.page, get in touch.

Next article will likely be about the micro-app I briefly mentioned and nostr.watch. Until then, be well.
-
@ 0176967e:1e6f471e
2024-07-27 11:10:06This workshop is aimed at everyone who struggles with explaining Bitcoin to their family, friends, partner or colleagues. When faced with objections from the other side, we usually go on the counter-attack and try to pull out the best arguments. In this workshop I will teach you a new approach to handling objections, and you will also get to try it out in practice. The know-how is applicable not only to communicating Bitcoin but also to improving relationships, raising children, and generally to a better personal life.
-
@ b0a838f2:34ed3f19
2025-05-23 17:54:34- Atsumeru - Manga/comic/light novel media server with clients for Windows, Linux, macOS and Android. (Source Code, Clients)
MIT
Java/Docker
- BookLogr - Manage your personal book library with ease. (Demo)
Apache-2.0
Docker
- Calibre Web - Browse, read and download eBooks using an existing Calibre database.
GPL-3.0
Python
- Calibre - E-book library manager that can view, convert, and catalog e-books in most of the major e-book formats and provides a built-in Web server for remote clients. (Demo, Source Code)
GPL-3.0
Python/deb
- Kapowarr - Build and manage a comic book library. Download, rename, move and convert issues of the volume to your liking. (Source Code)
GPL-3.0
Docker/Python
- Kavita - Cross-platform e-book/manga/comic/pdf server and web reader with user management, ratings and reviews, and metadata support. (Demo, Source Code)
GPL-3.0
.NET/Docker
- kiwix-serve - HTTP daemon for serving wikis from ZIM files. (Source Code)
GPL-3.0
C++
- Komga - Media server for comics/mangas/BDs with API and OPDS support, a modern web interface for exploring your libraries, as well as a web reader. (Source Code)
MIT
Java/Docker
- Librum - Modern e-book reader and library manager that supports most major book formats, runs on all devices and offers great tools to boost productivity. (Source Code)
GPL-3.0
C++
- Stump - A fast, free and open source comics, manga and digital book server with OPDS support. (Source Code)
MIT
Rust
- The Epube - Self-hosted web EPUB reader using EPUB.js, Bootstrap, and Calibre. (Source Code)
GPL-3.0
PHP
-
@ 3bf0c63f:aefa459d
2024-03-19 14:32:01Censorship-resistant relay discovery in Nostr
In Nostr is not decentralized nor censorship-resistant I said Nostr is centralized. Peter Todd thinks it is centralized by design, but I disagree.
Nostr wasn't designed to be centralized. The idea was always that clients would follow people in the relays they decided to publish to, even if it was a single-user relay hosted in an island in the middle of the Pacific ocean.
But the Nostr explanations never had any guidance about how to do this, and the protocol itself never had any enforcement mechanisms for any of this (because it would be impossible).
My original idea was that clients would use some undefined combination of relay hints in reply tags and the (now defunct) `kind:2` relay-recommendation events plus some form of manual action ("it looks like Bob is publishing on relay X, do you want to follow him there?") to accomplish this. With the expectation that we would have a better idea of how to properly implement all this with more experience, Branle, my first working client, didn't have any of that implemented, instead it used a stupid static list of relays with read/write toggle -- although it did publish relay hints and kept track of those internally and supported `kind:2` events, these things were not really useful.

Gossip was the first client to implement a truly censorship-resistant relay discovery mechanism that used NIP-05 hints (originally proposed by Mike Dilger), relay hints and `kind:3` relay lists, and then with the simple insight of NIP-65 that got much better. After seeing it in more concrete terms, it became simpler to reason about it and the approach got popularized as the "gossip model", then implemented in clients like Coracle and Snort.

Today when people mention the "gossip model" (or "outbox model") they simply think about NIP-65 though. Which I think is ok, but too restrictive. I still think there is a place for the NIP-05 hints, `nprofile` and `nevent` relay hints and specially relay hints in event tags. All these mechanisms are used together in ZBD Social, for example, but I believe also in the clients listed above.

I don't think we should stop here, though. I think there are other ways, perhaps drastically different ways, to approach content propagation and relay discovery. I think manual action by users is underrated and could go a long way if presented in a nice UX (not conceived by people that think users are dumb animals), and who knows what. Reliance on third-parties, hardcoded values, social graph, and specially a mix of multiple approaches, is what Nostr needs to be censorship-resistant and what I hope to see in the future.
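As a rough illustration of the NIP-65 part of this, here is a minimal outbox-style fetch as a client could implement it, assuming the nostr-tools library; the indexer relay list is just a placeholder for wherever the client happens to find `kind:10002` events.

```typescript
import { SimplePool } from "nostr-tools";

const pool = new SimplePool();
const indexerRelays = ["wss://purplepag.es", "wss://relay.example.com"]; // placeholders

async function fetchNotesFrom(pubkey: string) {
  // 1. find the user's advertised relay list (kind 10002, NIP-65)
  const relayList = await pool.get(indexerRelays, { kinds: [10002], authors: [pubkey] });
  const writeRelays = (relayList?.tags ?? [])
    .filter(([t, , marker]) => t === "r" && marker !== "read") // unmarked or "write" relays
    .map(([, url]) => url);

  // 2. ask those relays, not a hardcoded set, for the user's notes
  const targets = writeRelays.length ? writeRelays : indexerRelays; // crude fallback
  return pool.querySync(targets, { kinds: [1], authors: [pubkey], limit: 50 });
}
```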
-
@ b0a838f2:34ed3f19
2025-05-23 17:54:16- DocKing - Document management service/microservice that handles templates and renders them in PDF format, all in one place. (Demo, Source Code)
MIT
PHP/Nodejs/Docker
- Docspell - Auto-tagging document organizer and archive. (Source Code)
GPL-3.0
Scala/Java/Docker
- Documenso - Digital document signing platform (alternative to DocuSign). (Source Code)
AGPL-3.0
Nodejs/Docker
- Docuseal - Create, fill, and sign digital documents (alternative to DocuSign). (Demo, Source Code)
AGPL-3.0
Docker
- EveryDocs - Simple Document Management System for private use with basic functionality to organize your documents digitally.
GPL-3.0
Docker/Ruby
- Gotenberg - Developer-friendly API to interact with powerful tools like Chromium and LibreOffice for converting numerous document formats (HTML, Markdown, Word, Excel, etc.) into PDF files, and more. (Source Code)
MIT
Docker
- I, Librarian - Organize PDF papers and office documents. It provides a lot of extra features for students and research groups both in industry and academia. (Demo, Source Code)
GPL-3.0
PHP
- Mayan EDMS - Electronic document management system for your documents with preview generation, OCR, and automatic categorization among other features. (Source Code)
GPL-2.0
Docker/K8S
- OpenSign
⚠
- Document signing software (alternative to DocuSign). (Source Code)AGPL-3.0
Nodejs/Docker
- Paperless-ngx - Scan, index, and archive all of your paper documents with an improved interface (fork of Paperless). (Demo, Source Code)
GPL-3.0
Python/Docker
- Papermerge - Document management system focused on scanned documents (electronic archives). Features file browsing in similar way to dropbox/google drive. OCR, full text search, text overlay/selection. (Source Code)
Apache-2.0
Docker/K8S
- PdfDing - PDF manager, viewer and editor offering a seamless user experience on multiple devices. It's designed to be minimal, fast, and easy to set up using Docker.
AGPL-3.0
Docker
- SeedDMS - Document Management System with workflows, access rights, fulltext search, and more. (Demo, Source Code)
GPL-2.0
PHP
- Stirling-PDF - Local hosted web application that allows you to perform various operations on PDF files, such as merging, splitting, file conversions and OCR.
Apache-2.0
Docker/Java
- Teedy - Lightweight document management system packed with all the features you can expect from big expensive solutions (Ex SismicsDocs). (Demo, Source Code)
GPL-2.0
Docker/Java
-
@ 0176967e:1e6f471e
2024-07-26 17:45:08If you have been into Bitcoin for a few years now, you may feel that you already understand everything and that nothing can surprise you. You know what a wallet is, what a seed is and what an address is, maybe even what sha256 is. Are you sure? This talk will try to prove you wrong. 🙂
-
@ b0a838f2:34ed3f19
2025-05-23 17:53:54- AdGuard Home - User-friendly ads & trackers blocking DNS server. (Source Code)
GPL-3.0
Docker
- blocky - Fast and lightweight DNS proxy as ad-blocker for local network with many features (alternative to Pi-hole). (Source Code)
Apache-2.0
Go/Docker
- Maza ad blocking - Local ad blocker. Like Pi-hole but local and using your operating system. (Source Code)
Apache-2.0
Shell
- Pi-hole - Blackhole for Internet advertisements with a GUI for management and monitoring. (Source Code)
EUPL-1.2
Shell/PHP/Docker
- Technitium DNS Server - Authoritative/recursive DNS server with ad blocking functionality. (Source Code)
GPL-3.0
Docker/C#
-
@ e771af0b:8e8ed66f
2023-11-28 03:49:11The last 6 months have been full of unexpected surprises, of the unpleasant variety. So it was welcome news that `nostrwatch` was awarded a grant by OpenSats recently, which has allowed me to revisit this project and build out my original vision, the one I had before nostr went vertical.

The last month has been a bit of a joyride, in that I have met little friction meeting goals. For example I was able to rewrite `nostrwatch-js` in an evening, tests another evening, and did most of the Typescript conversion in 30 minutes (thanks AI!)

Here's a quick summary of the last month, I'm probably missing a handful of details, but these are the important bits.
- Went through all the new NIPs, prototyped and experimented
- Nearly completed a full rewrite of `nostrwatch-js`, now `nocap`. Presently porting to Typescript. Still alpha, don't use it.
- Released `nostrawl`, a wrapper for `nostr-fetch` that has Queue adapters for persistent fetching. Nothing special, but needed to factor out the functionality from the legacy trawler and the requirements were general enough to make it a package.
- Started porting the nostr.watch trawler, cache layer and rest API, but much work is still needed there. Aiming to have the current infrastructure updated with new daemons by end of year.
- In concert with the cache layer testing, I ran some experiments by patching in a rough trend engine prototype from earlier this year and a novel outbox solution, both with promising results.
- Wrote a few small CLI tools to assist with developments, got acquainted with `ink`, and reacquainted myself with the wonderful world of ES Module conflicts.
The rest of this year will be mostly backend work, tests and data engineering. Early next year I will begin iterative development on a new GUI and a string of CLI tools.
Luckily, nostr.watch and its underlying infrastructure, as abysmal as it is, hasn't completely sunk. Traffic is much lower than the ATH but still hovering between 2-3k unique visitors a month. I can't express in words how happy I will be when I can put the legacy nostr.watch to rest.
Next update will include a bit more of the why and how, and where I am trying to get to with this project.
Until next time.
-
@ 0176967e:1e6f471e
2024-07-26 12:15:35Fighting cancer with the metabolic method means using the body's metabolism against the cancer. Managing blood sugar and ketones through diet and movement, timing different types of exercise, combining conventional oncological treatment with fasting early on. Which vitamins and supplements I take and which ones I avoid, following the advice of my dietician from the USA, Miriam (who specializes in cancer).
They say that what we don't measure, we don't manage... I measured, a lot and for a long time... there will be charts... and some fun too, I hope... 😉
-
@ 0176967e:1e6f471e
2024-07-26 09:50:53Prediction markets offer a practical way to look into the future without having to rely on traditional, often inaccurate methods such as reading coffee grounds. In this presentation we will dive into the history and evolution of prediction markets, describe the impact they have had, and still have, on the availability and quality of information for the general public, and how they are changing the market for this information. We will also look at how these markets give ordinary people access to reliable forecasts and how they can contribute to better decision-making in various areas of life.
-
@ 3bf0c63f:aefa459d
2024-01-29 02:19:25Nostr: a quick introduction, attempt #1
Nostr doesn't have a material existence, it is not a website or an app. Nostr is just a description what kind of messages each computer can send to the others and vice-versa. It's a very simple thing, but the fact that such description exists allows different apps to connect to different servers automatically, without people having to talk behind the scenes or sign contracts or anything like that.
When you use a Nostr client that is what happens, your client will connect to a bunch of servers, called relays, and all these relays will speak the same "language" so your client will be able to publish notes to them all and also download notes from other people.
That's basically what Nostr is: this communication layer between the client you run on your phone or desktop computer and the relay that someone else is running on some server somewhere. There is no central authority dictating who can connect to whom or even anyone who knows for sure where each note is stored.
If you think about it, Nostr is very much like the internet itself: there are millions of websites out there, and basically anyone can run a new one, and there are websites that allow you to store and publish your stuff on them.
The added benefit of Nostr is that this unified "language" that all Nostr clients speak allow them to switch very easily and cleanly between relays. So if one relay decides to ban someone that person can switch to publishing to others relays and their audience will quickly follow them there. Likewise, it becomes much easier for relays to impose any restrictions they want on their users: no relay has to uphold a moral ground of "absolute free speech": each relay can decide to delete notes or ban users for no reason, or even only store notes from a preselected set of people and no one will be entitled to complain about that.
There are some bad things about this design: on Nostr there are no guarantees that relays will have the notes you want to read or that they will store the notes you're sending to them. We can't just assume all relays will have everything — much to the contrary, as Nostr grows more relays will exist and people will tend to publish to a small set of all the relays, so depending on the decisions each client takes when publishing and when fetching notes, users may see a different set of replies to a note, for example, and be confused.
Another problem with the idea of publishing to multiple servers is that they may be run by all sorts of malicious people that may edit your notes. Since no one wants to see garbage published under their name, Nostr fixes that by requiring notes to have a cryptographic signature. This signature is attached to the note and verified by everybody at all times, which ensures the notes weren't tampered (if any part of the note is changed even by a single character that would cause the signature to become invalid and then the note would be dropped). The fix is perfect, except for the fact that it introduces the requirement that each user must now hold this 63-character code that starts with "nsec1", which they must not reveal to anyone. Although annoying, this requirement brings another benefit: that users can automatically have the same identity in many different contexts and even use their Nostr identity to login to non-Nostr websites easily without having to rely on any third-party.
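For readers who want to see what that key and signature machinery looks like in practice, here is a tiny sketch assuming the nostr-tools library's current API; nothing in it is required to understand the rest.

```typescript
import { generateSecretKey, getPublicKey, finalizeEvent, verifyEvent, nip19 } from "nostr-tools";

const sk = generateSecretKey();      // 32 random bytes, keep this private
console.log(nip19.nsecEncode(sk));   // the "nsec1..." form the text mentions
console.log(getPublicKey(sk));       // the public identity derived from it

const note = finalizeEvent(
  { kind: 1, created_at: Math.floor(Date.now() / 1000), tags: [], content: "hello nostr" },
  sk,
); // attaches pubkey, id and the signature

console.log(verifyEvent(note));      // true; becomes false if the note is altered
```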
To conclude: Nostr is like the internet (or the internet of some decades ago): a little chaotic, but very open. It is better than the internet because it is structured and actions can be automated, but, like in the internet itself, nothing is guaranteed to work at all times and users may have to do some manual work from time to time to fix things. Plus, there is the cryptographic key stuff, which is painful, but cool.
-
@ 3bf0c63f:aefa459d
2024-01-15 11:15:06Small problems that the State creates for society and that are not always remembered
- **transit vouchers (vale-transporte)**: shifting the cost of the employee's commute onto a third party encourages them to live far from where they work, since living nearby is usually more expensive and the savings on transport are nonexistent.
- **doctor's notes**: the right to miss work with a doctor's note creates a demand for that note in every situation, replacing free agreement between employer and employee and overloading doctors and clinics with unnecessary visits from salaried workers with a cold.
- **prisons**: with badly managed money, bureaucracy and terrible allocation of resources -- problems that competing private companies (or even ones facing no competition at all) would know how to solve much better -- the State ends up short of prisons, with the few existing ones crammed far beyond their maximum capacity, and because of this, following the bizarre chain of responsibility that blames the judge who sentenced the criminal for his death in jail, judges stop sentencing criminals to prison and release them onto the street.
- **justice**: filing lawsuits is free, and this makes the activity of lawyers proliferate, lawyers who dedicate themselves to creating legal problems where none would be necessary and to clogging the courts, preventing them from doing what they most ought to do.
- **justice**: since the courts only obey the law and ignore personal agreements, written or not, people don't make agreements, they always resort to state justice, and they clog it with matters that would be much better resolved between neighbors.
- **civil laws**: the laws created by legislators ignore society's customs and are an incentive for people not to respect or create social norms -- which would be faster, cheaper and more satisfactory ways of solving problems.
- **traffic laws**: the more traffic laws there are, the more enforcement work is delegated to police officers, who stop fighting crime because of it (after all, they don't really want to risk their lives fighting crime, and enforcement duty is an excellent excuse to dodge that responsibility).
- **education financing**: a kind of subsidy to private colleges that causes more and more degree programs to be created, ever less filled with any useful knowledge or technique and ever more useless.
- **heritage-listing laws**: they are an incentive for the owner of any "historical" area or building to destroy every trace of history in it before the authorities find out, which might not happen if he could, for example, use, display and benefit from the history of that place without running the risk of actually losing his property.
- **urban zoning**: it makes cities more spread out, creating a gigantic need for cars, buses and other means of transport for people to move between residential zones and work zones.
- **urban zoning**: it makes people lose hours in traffic every day, which is, besides a waste, an assault on their health, which would be much better served by a daily walk between home and work.
- **urban zoning**: it makes streets and houses less safe by creating enormous zones, both residential and industrial, where there is no movement of people at all.
- **mandatory schooling + a national school curriculum**: it makes all children dumber.
- **child labor laws**: they take away from children the opportunity to learn useful trades and bring home some money to help the family.
- **public procurement**: since market criteria don't exist for deciding who the best service provider is, committees of people are created to decide things. This encourages the service providers competing in the bidding process to try to buy the members of those committees. Beyond the corruption itself, this creates real problems: __(i)__ the choice of services ends up being the worst possible, since the winning company is clearly more dedicated to buying committees than to doing a good job (this problem affects so many areas, from road construction to the quality of school lunches, that it is impossible to list them here); __(ii)__ in the long run the corrupting process ends up eliminating the companies that actually delivered and leaving only the corrupt ones to compete, and quality tends to get progressively worse.
- **cartels**: the State in general creates, and then becomes hostage to, various interest groups. The case of taxi drivers against Uber is the fashionable one today (and the one that shows how States behave the same way all over the world).
- **fines**: when some individual or company commits financial fraud, or causes some unintentional material damage, the victims are the people who suffered the damage or lost money, but the State always has laws providing for fines against those responsible. State justice is always very strict and quick in applying those fines, but negligent and vague when it comes to compensating the victims. What generally happens is that the State levies an enormous fine on the party responsible for the harm, stripping them of the resources they had available to compensate the victims, and then withdraws from the case, leaving the victims helpless.
- **expropriation**: the State can take any property from any person in exchange for compensation that is necessarily lower than the property's value to its current owner (otherwise he would have sold it voluntarily).
- **unemployment insurance**: if there is, for example, a minimum period of 1 year before someone is entitled to receive unemployment benefits, this encourages them to plan on staying only 1 year in each job (a year that will be followed by a period of paid unemployment), killing every possibility of learning, gaining experience at that specific company, or climbing the hierarchy.
- **social security**: public pensions have every actuarial defect in the world, and it doesn't matter much that they are a horrible way of saving money, because they come with bizarre longevity guarantees provided by the State, besides being compulsory. This serves to create in the general imagination the idea of __retirement__, a magical time when every day will be a weekend. The idea of retirement influences people not to worry about having a job that makes sense, but rather about having any job at all that allows them to retire.
- **impossible regulation**: thousands of things are prohibited, and there are regulations on the most minute aspects of every business, building or space. If all these regulations were actually enforced there would be no way to produce anything and everyone would die. Therefore, they are not enforced. However, the State, or an individual agent invested with state power, can, if it wishes, demand all of them from a citizen it considers an enemy. Anyone can live their whole life without complying with even 10% of state regulations, but they will also live all that time in fear of becoming a target of their enforcement, in a state of psychological terror.
- **perversion of standards**: for many things on which society would normally arrive at a "reasonable" value or behavior spontaneously, the State dictates rules. These rules are often not mandatory, they are more like "suggestions" or limits, such as the minimum wage or the 44-hour work week. Society, however, starts using those values as if they were the norm. Job offers that depart from the 44-hour weekly rule, for example, are rare.
- **inflation**: raising prices is difficult and embarrassing for companies, and asking for a raise is difficult and embarrassing for the employee. Inflation forces people to do this, but the increase is not automatic, as some economists may think (while some others are very pleased that this process is slow and difficult).
- **inflation**: inflation destroys people's ability to judge prices between competitors using their own memory.
- **inflation**: inflation destroys companies' profit/loss calculations and enormously harms the business decisions that would be based on them.
- **inflation**: inflation redistributes wealth from the poorest and those furthest from the financial system to the richest, the banks and the megacorporations.
- **inflation**: inflation encourages indebtedness and consumerism.
- **garbage**: by providing garbage collection and storage "free for everyone", the State encourages the creation of garbage. If they had to pay to have their garbage collected, people (and consequently companies) would try harder to produce things using less plastic, less packaging, fewer bags.
- **laws against financial crimes**: by creating legislation to make access to the financial system harder for criminals, the difficulty and cost of access to that same system for honest people grows absurdly, leaving an enormous percentage of people unable to use it, to everyone's detriment -- and in the end the big criminals still manage to get around all of it.
-
@ 3bf0c63f:aefa459d
2024-01-14 14:52:16bitcoind
decentralizationIt is better to have multiple curator teams, with different vetting processes and release schedules for
bitcoind
than a single one."More eyes on code", "Contribute to Core", "Everybody should audit the code".
All these points repeated again and again fell to Earth on the day it was discovered that Bitcoin Core developers merged a variable name change from "blacklist" to "blocklist" without even discussing or acknowledging the fact that that innocent pull request opened by a sybil account was a social attack.
After a big lot of people manifested their dissatisfaction with that event on Twitter and on GitHub, most Core developers simply ignored everybody's concerns or even personally attacked people who were complaining.
The event has shown that:
1) Bitcoin Core ultimately rests on the hands of a couple maintainers and they decide what goes on the GitHub repository[^pr-merged-very-quickly] and the binary releases that will be downloaded by thousands; 2) Bitcoin Core is susceptible to social attacks; 2) "More eyes on code" don't matter, as these extra eyes can be ignored and dismissed.
Solution:
bitcoind
decentralizationIf usage was spread across 10 different
bitcoind
flavors, the network would be much more resistant to social attacks to a single team.This has nothing to do with the question on if it is better to have multiple different Bitcoin node implementations or not, because here we're basically talking about the same software.
Multiple teams, each with their own release process, their own logo, some subtle changes, or perhaps no changes at all, just a different name for their
bitcoind
flavor, and that's it.Every day or week or month or year, each flavor merges all changes from Bitcoin Core on their own fork. If there's anything suspicious or too leftist (or perhaps too rightist, in case there's a leftist
bitcoind
flavor), maybe they will spot it and not merge.This way we keep the best of both worlds: all software development, bugfixes, improvements goes on Bitcoin Core, other flavors just copy. If there's some non-consensus change whose efficacy is debatable, one of the flavors will merge on their fork and test, and later others -- including Core -- can copy that too. Plus, we get resistant to attacks: in case there is an attack on Bitcoin Core, only 10% of the network would be compromised. the other flavors would be safe.
Run Bitcoin Knots
The first example of a
bitcoind
software that follows Bitcoin Core closely, adds some small changes, but has an independent vetting and release process is Bitcoin Knots, maintained by the incorruptible Luke DashJr.Next time you decide to run
bitcoind
, run Bitcoin Knots instead and contribute tobitcoind
decentralization!
See also:
[^pr-merged-very-quickly]: See PR 20624, for example, a very complicated change that could be introducing bugs or be a deliberate attack, merged in 3 days without time for discussion.
-
@ b0a838f2:34ed3f19
2025-05-23 17:53:35- Adminer - Database management in a single PHP file. Available for MySQL, MariaDB, PostgreSQL, SQLite, MS SQL, Oracle, Elasticsearch, MongoDB and others. (Source Code)
Apache-2.0/GPL-2.0
PHP
- Azimutt - Visual database exploration made for real world databases (big and messy). Explore your database schema as well as data, document them, extend them and even get analysis and guidelines. (Demo, Source Code)
MIT
Elixir/Nodejs/Docker
- Baserow - Create your own database without technical experience (alternative to Airtable). (Source Code)
MIT
Docker
- Bytebase - Safe database schema change and version control for DevOps teams, supports MySQL, PostgreSQL, TiDB, ClickHouse, and Snowflake. (Demo, Source Code)
MIT
Docker/K8S/Go
- Chartbrew - Connect directly to databases and APIs and use the data to create beautiful charts. (Demo, Source Code)
MIT
Nodejs/Docker
- ChartDB - Database diagrams editor that allows you to visualize and design your DB with a single query. (Demo, Source Code)
AGPL-3.0
Nodejs/Docker
- CloudBeaver - Manage databases, supports PostgreSQL, MySQL, SQLite and more. A web/hosted version of DBeaver. (Source Code)
Apache-2.0
Docker
- Databunker - Network-based, self-hosted, GDPR compliant, secure database for personal data or PII. (Source Code)
MIT
Docker
- Datasette - Explore and publish data with easy import and export and database management. (Demo, Source Code)
Apache-2.0
Python/Docker
- Evidence - Code-based BI tool. Write reports using SQL and markdown and they render as a website. (Source Code)
MIT
Nodejs
- Limbas - Database framework for creating database-driven business applications. As a graphical database frontend, it enables the efficient processing of data stocks and the flexible development of comfortable database applications. (Source Code)
GPL-2.0
PHP
- Mathesar - Intuitive UI to manage data collaboratively, for users of all technical skill levels. Built on Postgres – connect an existing DB or set up a new one. (Source Code)
GPL-3.0
Docker/Python
- NocoDB - No-code platform that turns any database into a smart spreadsheet (alternative to Airtable or Smartsheet). (Source Code)
AGPL-3.0
Nodejs/Docker
- WebDB - Efficient database IDE. (Demo, Source Code)
AGPL-3.0
Docker
-
@ 0176967e:1e6f471e
2024-07-25 20:53:07We all probably notice the AI hype around us: almost every app now offers some "AI feature", AI startups are raising hundreds of millions, and Europe, as usual, is working on regulation and on protecting us from the dangers of artificial intelligence. But the "fruit" of combining artificial intelligence and humans is slowly starting to show, with many people reporting significant productivity gains at work as well as in creative activities (despite the fact that many hardcore creatives would happily burn anyone at the stake at the mere mention of the abbreviation "AI"). In the first half of the talk we will look at the various ways AI can help us, whether at work or in our personal lives.
Artificial neurons are slowly popping out at us even from our oatmeal, but how they reach us varies a lot, mainly in whether companies provide them as closed or open-source models. In the second half of the talk we will look at the boom around open AI models and how we can make use of them.
-
@ b0a838f2:34ed3f19
2025-05-23 17:53:11- Corteza - CRM including a unified workspace, enterprise messaging and a low code environment for rapidly and securely delivering records-based management solutions. (Demo, Source Code)
Apache-2.0
Go
- Django-CRM - Analytical CRM with tasks management, email marketing and many more. Django CRM is built for individual use, businesses of any size or freelancers and is designed to provide easy customization and quick development.
AGPL-3.0
Python
- EspoCRM - CRM with a frontend designed as a single page application, and a REST API. (Demo, Source Code)
AGPL-3.0
PHP
- Krayin - CRM solution for SMEs and Enterprises for complete customer lifecycle management. (Demo, Source Code)
MIT
PHP
- Monica - Personal relationship manager, and a new kind of CRM to organize interactions with your friends and family. (Source Code)
AGPL-3.0
PHP/Docker
- SuiteCRM - The award-winning, enterprise-class open source CRM. (Source Code)
AGPL-3.0
PHP
- Twenty - A modern CRM offering the flexibility of open source, advanced features, and a sleek design. (Source Code)
AGPL-3.0
Docker
-
@ b0a838f2:34ed3f19
2025-05-23 17:52:51- Alfresco Community Edition - The open source Enterprise Content Management software that handles any type of content, allowing users to easily share and collaborate on content. (Source Code)
LGPL-3.0
Java
- Apostrophe - CMS with a focus on extensible in-context editing tools. (Demo, Source Code)
MIT
Nodejs
- Backdrop CMS - Comprehensive CMS for small to medium sized businesses and non-profits. (Source Code)
GPL-2.0
PHP
- BigTree CMS - Straightforward, well documented, and capable CMS. (Source Code)
LGPL-2.1
PHP
- Bludit
⚠
- Build a site or blog in seconds. Bludit uses flat-files (text files in JSON format) to store posts and pages. (Source Code)MIT
PHP
- CMS Made Simple - Faster and easier management of website contents, scalable for small businesses to large corporations. (Source Code)
GPL-2.0
PHP
- Cockpit - Simple content platform to manage any structured content. (Source Code)
MIT
PHP
- Concrete 5 CMS - Open source content management system. (Source Code)
MIT
PHP
- Contao - Powerful CMS that allows you to create professional websites and scalable web applications. (Demo, Source Code)
LGPL-3.0
PHP
- CouchCMS - CMS for designers. (Source Code)
CPAL-1.0
PHP
- Drupal - Advanced open source content management platform. (Source Code)
GPL-2.0
PHP
- eLabFTW - Online lab notebook for research labs. Store experiments, use a database to find reagents or protocols, use trusted timestamping to legally timestamp an experiment, export as pdf or zip archive, share with collaborators…. (Demo, Source Code)
AGPL-3.0
PHP
- Expressa - Content Management System for powering database driven websites using JSON schemas. Provides permission management and automatic REST APIs.
MIT
Nodejs
- Joomla! - Advanced Content Management System (CMS). (Source Code)
GPL-2.0
PHP
- KeystoneJS - CMS and web application platform. (Source Code)
MIT
Nodejs
- Localess
⚠
- Powerful translation management and content management system. Manage and translate your website or app content into multiple languages, using AI to translate faster. (Source Code)MIT
Docker
- MODX - Advanced content management and publishing platform. The current version is called 'Revolution'. (Source Code)
GPL-2.0
PHP
- Neos - Neos or TYPO3 Neos (for version 1) is a modern, open source CMS. (Source Code)
GPL-3.0
PHP
- Noosfero - Platform for social and solidarity economy networks with blog, e-Portfolios, CMS, RSS, thematic discussion, events agenda and collective intelligence for solidarity economy in the same system.
AGPL-3.0
Ruby
- Omeka - Create complex narratives and share rich collections, adhering to Dublin Core standards with Omeka on your server, designed for scholars, museums, libraries, archives, and enthusiasts. (Demo, Source Code)
GPL-3.0
PHP
- Payload CMS - Developer-first headless CMS and application framework. (Source Code)
MIT
Nodejs
- Pimcore - Multi-channel experience and engagement management platform. (Source Code)
GPL-3.0
PHP/Docker
- Plone - Powerful open-source CMS system. (Source Code)
ZPL-2.0
Python/Docker
- Publify - Simple but full featured web publishing software. (Source Code)
MIT
Ruby
- REDAXO - Simple, flexible and useful content management system (documentation only available in German). (Source Code)
MIT
PHP/Docker
- Roadiz - Modern CMS based on a node system which can handle many types of services. (Source Code)
MIT
PHP
- SilverStripe - Easy to use CMS with powerful MVC framework underlying. (Demo, Source Code)
BSD-3-Clause
PHP
- SPIP - Publication system for the Internet aimed at collaborative work, multilingual environments, and simplicity of use for web authors. (Source Code)
GPL-3.0
PHP
- Squidex - Headless CMS, based on MongoDB, CQRS and Event Sourcing. (Demo, Source Code)
MIT
.NET
- Strapi - The most advanced open-source Content Management Framework (headless-CMS) to build powerful API with no effort. (Source Code)
MIT
Nodejs
- Superdesk
⚠
- End-to-end news creation, production, curation, distribution, and publishing platform. (Source Code)AGPL-3.0
Docker/Python/PHP
- Textpattern - Flexible, elegant and easy-to-use CMS. (Demo, Source Code)
GPL-2.0
PHP
- Typemill - Author-friendly flat-file-cms with a visual markdown editor based on vue.js. (Source Code)
MIT
PHP
- TYPO3 - Powerful and advanced CMS with a large community. (Source Code)
GPL-2.0
PHP
- Umbraco - The friendly CMS. Free and open source with an amazing community. (Source Code)
MIT
.NET
- Vvveb CMS - Powerful and easy to use CMS to build websites, blogs or e-commerce stores. (Demo, Source Code)
AGPL-3.0
PHP/Docker
- Wagtail - Django content management system focused on flexibility and user experience. (Source Code)
BSD-3-Clause
Python
- WinterCMS - Speedy and secure content management system built on the Laravel PHP framework. (Source Code)
MIT
PHP
- WonderCMS - WonderCMS is the smallest flat file CMS since 2008. (Demo, Source Code)
MIT
PHP
- WordPress - World's most-used blogging and CMS engine. (Source Code)
GPL-2.0
PHP
-
@ 0176967e:1e6f471e
2024-07-25 20:38:11What do you get when you combine the game SNAKE from the old Nokia 3310 with Bitcoin? - the game Chain Duel!
One of the best implementations of Lightning Network functionality and gaming in the Bitcoin world.
You can try it out with friends at this link. You will also find the basic rules of the game on the page, but we recommend getting a feel for the rules by playing it directly.
Chain Duel is winning crowds of fans at Bitcoin conferences all over the world, and we are bringing it to the Lunarpunk festival as well.
A multiplayer 1v1 game where it is not about luck but skill will hook you. Come measure your strength against other bitcoiners and win, besides the satoshis themselves, various other prizes.
Come take part in the first official Chain Duel tournament in Slovakia!
Advance registration is required to take part in the tournament.
-
@ 3bf0c63f:aefa459d
2024-01-14 14:52:16Drivechain
Understanding Drivechain requires a shift from the paradigm most bitcoiners are used to. It is not about "trustlessness" or "mathematical certainty", but game theory and incentives. (Well, Bitcoin in general is also that, but people prefer to ignore it and focus on some illusion of trustlessness provided by mathematics.)
Here we will describe the basic mechanism (simple) and incentives (complex) of "hashrate escrow" and how it enables a 2-way peg between the mainchain (Bitcoin) and various sidechains.
The full concept of "Drivechain" also involves blind merged mining (i.e., the sidechains mine themselves by publishing their block hashes to the mainchain without the miners having to run the sidechain software), but this is much easier to understand and can be accomplished either by the BIP-301 mechanism or by the Spacechains mechanism.
How does hashrate escrow work from the point of view of Bitcoin?
A new address type is created. Anything that goes in that is locked and can only be spent if all miners agree on the Withdrawal Transaction (
WT^
) that will spend it for 6 months. There is one of these special addresses for each sidechain.To gather miners' agreement
bitcoind
keeps track of the "score" of all transactions that could possibly spend from that address. On every block mined, for each sidechain, the miner can use a portion of their coinbase to either increase the score of oneWT^
by 1 while decreasing the score of all others by 1; or they can decrease the score of allWT^
s by 1; or they can do nothing.Once a transaction has gotten a score high enough, it is published and funds are effectively transferred from the sidechain to the withdrawing users.
If a timeout of 6 months passes and the score doesn't meet the threshold, that
WT^
is discarded.What does the above procedure mean?
It means that people can transfer coins from the mainchain to a sidechain by depositing to the special address. Then they can withdraw from the sidechain by making a special withdraw transaction in the sidechain.
The special transaction somehow freezes funds in the sidechain while a transaction that aggregates all withdrawals into a single mainchain
WT^
, which is then submitted to the mainchain miners so they can start voting on it and finally after some months it is published.Now the crucial part: the validity of the
WT^
is not verified by the Bitcoin mainchain rules, i.e., if Bob has requested a withdraw from the sidechain to his mainchain address, but someone publishes a wrongWT^
that instead takes Bob's funds and sends them to Alice's main address there is no way the mainchain will know that. What determines the "validity" of theWT^
is the miner vote score and only that. It is the job of miners to vote correctly -- and for that they may want to run the sidechain node in SPV mode so they can attest for the existence of a reference to theWT^
transaction in the sidechain blockchain (which then ensures it is ok) or do these checks by some other means.What? 6 months to get my money back?
Yes. But no, in practice anyone who wants their money back will be able to use an atomic swap, submarine swap or other similar service to transfer funds from the sidechain to the mainchain and vice-versa. The long delayed withdraw costs would be incurred by few liquidity providers that would gain some small profit from it.
Why bother with this at all?
Drivechains solve many different problems:
It enables experimentation and new use cases for Bitcoin
Issued assets, fully private transactions, stateful blockchain contracts, turing-completeness, decentralized games, some "DeFi" aspects, prediction markets, futarchy, decentralized and yet meaningful human-readable names, big blocks with a ton of normal transactions on them, a chain optimized only for Lighting-style networks to be built on top of it.
These are some ideas that may have merit to them, but were never actually tried because they couldn't be tried with real Bitcoin or inferfacing with real bitcoins. They were either relegated to the shitcoin territory or to custodial solutions like Liquid or RSK that may have failed to gain network effect because of that.
It solves conflicts and infighting
Some people want fully private transactions in a UTXO model, others want "accounts" they can tie to their name and build reputation on top; some people want simple multisig solutions, others want complex code that reads a ton of variables; some people want to put all the transactions on a global chain in batches every 10 minutes, others want off-chain instant transactions backed by funds previously locked in channels; some want to spend, others want to just hold; some want to use blockchain technology to solve all the problems in the world, others just want to solve money.
With Drivechain-based sidechains all these groups can be happy simultaneously and don't fight. Meanwhile they will all be using the same money and contributing to each other's ecosystem even unwillingly, it's also easy and free for them to change their group affiliation later, which reduces cognitive dissonance.
It solves "scaling"
Multiple chains like the ones described above would certainly do a lot to accomodate many more transactions that the current Bitcoin chain can. One could have special Lightning Network chains, but even just big block chains or big-block-mimblewimble chains or whatnot could probably do a good job. Or even something less cool like 200 independent chains just like Bitcoin is today, no extra features (and you can call it "sharding"), just that would already multiply the current total capacity by 200.
Use your imagination.
It solves the blockchain security budget issue
The calculation is simple: you imagine what security budget is reasonable for each block in a world without block subsidy and divide that by the amount of bytes you can fit in a single block: that is the price to be paid in satoshis per byte. In a reasonable estimate, the price necessary for every Bitcoin transaction goes to very large amounts, such that not only does any day-to-day transaction have insanely prohibitive costs, but also Lightning channel opens and closes become impracticable.
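As a rough, hedged illustration of that arithmetic (the budget and block size below are made-up assumptions, not estimates from the text):

```python
security_budget_btc = 3.0        # assumed fee revenue needed per block, in BTC
block_size_bytes = 1_000_000     # assumed usable block space, in bytes
sats_per_btc = 100_000_000

fee_rate = security_budget_btc * sats_per_btc / block_size_bytes
print(f"required fee rate: {fee_rate:.0f} sat/byte")                # 300 sat/byte

ordinary_tx_bytes = 250
print(f"one ordinary transaction: {fee_rate * ordinary_tx_bytes:,.0f} sats")  # 75,000 sats
```

With those assumed numbers an ordinary payment already costs tens of thousands of satoshis in fees, and a Lightning channel open or close is just another on-chain transaction facing the same rate.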
So without a solution like Drivechain you'll be left with only one alternative: pushing Bitcoin usage to trusted services like Liquid and RSK or custodial Lightning wallets. With Drivechain, though, there could be thousands of transactions happening in sidechains and being all aggregated into a sidechain block that would then pay a very large fee to be published (via blind merged mining) to the mainchain. Bitcoin security guaranteed.
It keeps Bitcoin decentralized
Once we have sidechains to accommodate the normal transactions, the mainchain functionality can be reduced to be only a "hub" for the sidechains' comings and goings, and then the maximum block size for the mainchain can be reduced to, say, 100kb, which would make running a full node very very easy.
Can miners steal?
Yes. If a group of coordinated miners are able to secure the majority of the hashpower and keep their coordination for 6 months, they can publish a `WT^` that takes the money from the sidechains and pays it to themselves.

Will miners steal?
No, because the incentives are such that they won't.
Although it may look at first that stealing is an obvious strategy for miners as it is free money, there are many costs involved:
- The cost of ceasing blind-merged mining returns -- as stealing will kill a sidechain, all the fees from it that miners would be expected to earn for the next years are gone;
- The cost of Bitcoin price going down: If a steal is successful that will mean Drivechains are not safe, therefore Bitcoin is less useful, and miner credibility will also be hurt, which are likely to cause the Bitcoin price to go down, which in turn may kill the miners' businesses and savings;
- The cost of coordination -- assuming miners are just normal businesses, they just want to do their work and get paid, but stealing from a Drivechain will require coordination with other miners to conduct an immoral act in a way that has many pitfalls and is likely to be broken over the months;
- The cost of miners leaving your mining pool: when we talked about "miners" above we were actually talking about mining pools operators, so they must also consider the risk of miners migrating from their mining pool to others as they begin the process of stealing;
- The cost of community goodwill -- when participating in a steal operation, a miner will suffer a ton of backlash from the community. Even if the attempt fails in the end, the fact that it was attempted will contribute to growing concerns over exaggerated miner power over the Bitcoin ecosystem, which may end up causing the community to agree on a hard-fork to change the mining algorithm in the future, or to do something to increase the participation of more entities in the mining process (such as the development of new ASICs or making them cheaper), which has a chance of decreasing the profits of current miners.
Another point to take into consideration is that one may be inclined to think a newly-created sidechain or a sidechain with relatively low usage may be more easily stolen from, since the blind merged mining returns from it (point 1 above) are going to be small -- but the fact is also that a sidechain with small usage will also have less money to be stolen from, and since the other costs besides 1 are less elastic, in the end it will not be worth stealing from these either.
All of the above considerations are valid only if miners are stealing from good sidechains. If there is a sidechain that is doing things wrong, scamming people, not being used at all, or is full of bugs, for example, that will be perceived as a bad sidechain, and then miners can and will safely steal from it and kill it, which will be perceived as a good thing by everybody.
What do we do if miners steal?
Paul Sztorc has suggested in the past that a user-activated soft-fork could prevent miners from stealing, i.e., most Bitcoin users and nodes issue a rule similar to this one to invalidate the inclusion of a faulty `WT^` and thus cause any miner that includes it in a block to be relegated to their own Bitcoin fork that other nodes won't accept.

This suggestion has made people think Drivechain is a sidechain solution backed by user-activated soft-forks for safety, which is very far from the truth. Drivechains must not and will not rely on this kind of soft-fork, although they are possible, as the coordination costs are too high and no one should ever expect these things to happen.
If even with all the incentives against them (see above) miners do still steal from a good sidechain that will mean the failure of the Drivechain experiment. It will very likely also mean the failure of the Bitcoin experiment too, as it will be proven that miners can coordinate to act maliciously over a prolonged period of time regardless of economic and social incentives, meaning they are probably in it just for attacking Bitcoin, backed by nation-states or something else, and therefore no Bitcoin transaction in the mainchain is to be expected to be safe ever again.
Why use this and not a full-blown trustless and open sidechain technology?
Because it is impossible.
If you ever heard someone saying "just use a sidechain", "do this in a sidechain" or anything like that, be aware that these people are either talking about "federated" sidechains (i.e., funds are kept in custody by a group of entities) or they are talking about Drivechain, or they are deluded and think it is possible to do sidechains in any other manner.
No, I mean a trustless 2-way peg with correctness of the withdrawals verified by the Bitcoin protocol!
That is not possible unless Bitcoin verifies all transactions that happen in all the sidechains, which would be akin to drastically increasing the blocksize and expanding the Bitcoin rules in tons of ways, i.e., a terrible idea that no one wants.
What about the Blockstream sidechains whitepaper?
Yes, that was a way to do it. The Drivechain hashrate escrow is a conceptually simpler way to achieve the same thing with improved incentives, less junk in the chain, more safety.
Isn't the hashrate escrow a very complex soft-fork?
Yes, but it is much simpler than SegWit. And, unlike SegWit, it doesn't force anything on users, i.e., it isn't a mandatory blocksize increase.
Why should we expect miners to care enough to participate in the voting mechanism?
Because it's in their own self-interest to do it, and it costs very little. Today over half of the miners mine RSK. It's not blind merged mining, it's a very convoluted process that requires them to run an RSK full node. For the Drivechain sidechains, an SPV node would be enough, or maybe just getting data from a block explorer API, so it is much, much simpler.
What if I still don't like Drivechain even after reading this?
That is the entire point! You don't have to like it or use it as long as you're fine with other people using it. The hashrate escrow special addresses will not impact you at all, validation cost is minimal, and you get the benefit of people who want to use Drivechain migrating to their own sidechains and freeing up space for you in the mainchain. See also the point above about infighting.
-
@ 0176967e:1e6f471e
2024-07-22 19:57:47What has a nomadic family, on the run from control for 3 years now, learned about control itself? What actually is freedom? Can it coexist with fear? With conflict? Let's forget about taxes, the police and the state for a moment and look at freedom even beyond the boundaries of social ideologies. Instead of searching for more answers, let's find out whether new questions are still hiding somewhere. It might get a little esoteric.

For over 3 years Karel has been living a minimalist life in a camper van with his wife, two children and one dog. On the road they started the YouTube channel "Karel od Martiny" about freedom, nomadism, anarchy, parenting, drugs and other normal things.

You can also find him on nostr.
-
@ b0a838f2:34ed3f19
2025-05-23 17:52:28- indico - Feature-rich event management system, made @ CERN, the place where the Web was born. (Demo, Source Code)
MIT
Python
- motion.tools (Antragsgrün) - Manage motions and amendments for (political) conventions. (Demo, Source Code)
AGPL-3.0
PHP/Docker
- OpenSlides - Presentation and assembly system for managing and projecting agenda, motions and elections of an assembly. (Demo, Source Code)
MIT
Docker
- osem - Event management tailored to free Software conferences. (Source Code)
MIT
Ruby/Docker
- pretalx - Web-based event management, including running a Call for Papers, reviewing submissions, and scheduling talks. Exports and imports for various related tools. (Source Code)
Apache-2.0
Python
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28A response to Achim Warner's "Drivechain brings politics to miners" article
I mean this article: https://achimwarner.medium.com/thoughts-on-drivechain-i-miners-can-do-things-about-which-we-will-argue-whether-it-is-actually-a5c3c022dbd2
There are basically two claims here:
1. Some corporate interests might want to secure sidechains for themselves and thus they will bribe miners to have these activated
First, it's hard to imagine why they would want such a thing. Are they going to make a proprietary KYC chain only for their users? They could do that in a corporate way, or with a federation, like Facebook tried to do, and that would provide more value to their users than a cumbersome pseudo-decentralized system in which they don't even have powers to issue currency. Also, if Facebook couldn't get away with their federated shitcoin because the government was mad, what says the government won't be mad with a sidechain? And finally, why would Facebook want to give custody of their proprietary closed-garden Bitcoin-backed ecosystem coins to a random, open and always-changing set of miners?
But even if they do succeed in making their sidechain and it is very popular such that it pays miners fees and people love it. Well, then why not? Let them have it. It's not going to hurt anyone more than a proprietary shitcoin would anyway. If Facebook really wants a closed ecosystem backed by Bitcoin that probably means we are winning big.
2. Miners will be required to vote on the validity of debatable things
He cites the example of a PoS sidechain, an assassination market, a sidechain full of nazists, a sidechain deemed illegal by the US government and so on.
There is a simple solution to all of this: just kill these sidechains. Either miners can take the money from these to themselves, or they can just refuse to engage and freeze the coins there forever, or they can even give the coins to governments, if they want. It is an entirely good thing that evil sidechains or sidechains that use horrible technology that doesn't even let us know who owns each coin get annihilated. And it was the responsibility of people who put money in there to evaluate beforehand and know that PoS is not deterministic, for example.
About government censoring and wanting to steal money, or criminals using sidechains, I think the argument is very weak because these same things can happen today and may even be happening already: i.e., governments ordering mining pools to not mine such and such transactions from such and such people, or forcing them to reorg to steal money from criminals and whatnot. All this is expected to happen in normal Bitcoin. But both in normal Bitcoin and in Drivechain decentralization fixes that problem by making it so governments cannot catch all miners required to control the chain like that -- and in fact fixing that problem is the only reason we need decentralization.
-
@ 0176967e:1e6f471e
2024-07-21 15:48:56Tickets for the Lunarpunk festival are now on sale on our crowdfunding portal. Two types of tickets are available - standard admission and special admission together with the orange summer workshop.

Don't hesitate and secure your ticket; the sooner you do it, the better the festival will be.

You can pay with Bitcoin - Lightning or on-chain. Your ticket is your e-mail address (we don't send confirmation e-mails; if the payment went through, you're in).
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Violence is a form of communication

Violence is a form of communication: a serial killer, a father who beats his son, a fight between rival fans, a torture session, a war, a crime of passion, a bar fight. In all of these one can see a message that is trying to be transmitted, that was not understood by the other side, that could not be expressed, and, when the sender of the message felt they could not be fully understood in words, they used this other form of communication.

When an insult in a bar escalates into a fight, for example, what is happening is clearly an attempt at an even greater insult by the side that started the first one; the fight would not have happened if he had managed to express it in words so clear that the whole audience of drunks would understand, which would be beyond the limits of language; in that case, the right-hand punch was more efficient. It could also be an argumentative defense: "I am not a coward like you are saying" -- but the bar would not believe that loose sentence, the communication would not have achieved the desired success.

The explanation for the fact that violence has decreased as civilization progressed lies in the improved efficiency of human communication: writing, the refinement of linguistic expression, the increased reach of the spoken word with radio, television and the internet.

If that efficiency decreases, because there is no longer agreement on the meaning of words, because people don't care whether what they write is good or not, or because they are incapable of understanding anything, violence should increase proportionally.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28On HTLCs and arbiters
This is another attempt at conveying the same information that should be in Lightning and its fake HTLCs. It assumes you know everything about Lightning and will just highlight a point. This is also valid for PTLCs.
The protocol says HTLCs are trimmed (i.e., not actually added to the commitment transaction) when the cost of redeeming them in fees would be greater than their actual value.
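A small sketch of that rule, with illustrative numbers (the weight and dust limit below are assumptions for the example, not the exact BOLT constants):

```python
def is_trimmed(htlc_value_sat, feerate_sat_per_kw, htlc_tx_weight=700, dust_limit_sat=546):
    # cost of sweeping this HTLC on-chain at the current feerate
    redemption_fee = feerate_sat_per_kw * htlc_tx_weight // 1000
    # if what remains after paying that fee is below the dust limit, the HTLC
    # is not materialized as an output on the commitment transaction: trimmed
    return htlc_value_sat - redemption_fee < dust_limit_sat

for value in (1_000, 5_000, 20_000):
    print(value, "sat ->", "trimmed" if is_trimmed(value, feerate_sat_per_kw=5_000) else "real output")
```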
Although this is often dismissed as a non-important fact (often people will say "it's trusted for small payments, no big deal"), but I think it is indeed very important for 3 reasons:
- Lightning absolutely relies on HTLCs actually existing because the payment proof requires them. The entire security of each payment comes from the fact that the payer has a preimage that comes from the payee. Without that, the state of the payment becomes an unsolvable mystery. The inexistence of an HTLC breaks the atomicity between the payment going through and the payer receiving a proof.
- Bitcoin fees are expected to grow with time (arguably the reason Lightning exists in the first place).
- MPP makes payment sizes shrink, therefore more and more of Lightning payments are to be trimmed. As I write this, the mempool is clear and still payments smaller than about 5000sat are being trimmed. Two weeks ago the limit was at 18000sat, which is already below the minimum most MPP splitting algorithms will allow.
Therefore I think it is important that we come up with a different way of ensuring payment proofs are being passed around in the case HTLCs are trimmed.
Channel closures
Worse than not having HTLCs that can be redeemed is the fact that in the current Lightning implementations channels will be closed by the peer once an HTLC timeout is reached, either to fulfill an HTLC for which that peer has a preimage or to redeem back the expired HTLCs the other party hasn't fulfilled.
For the surprise of everybody, nodes will do this even when the HTLCs in question were trimmed and therefore cannot be redeemed at all. It's very important that nodes stop doing that, because it makes no economic sense at all.
However, that is not so simple, because once you decide you're not going to close the channel, what is the next step? Do you wait until the other peer tries to fulfill an expired HTLC and tell them you won't agree and that you must cancel that instead? That could work sometimes if they're honest (and they have no incentive to not be, in this case). What if they say they tried to fulfill it before but you were offline? Now you're confused, you don't know if you were offline or they were offline, or if they are trying to trick you. Then unsolvable issues start to emerge.
Arbiters
One simple idea is to use trusted arbiters for all trimmed HTLC issues.
This idea solves both the protocol issue of getting the preimage to the payer once it is released by the payee -- and what to do with the channels once a trimmed HTLC expires.
A simple design would be to have each node hardcode a set of trusted other nodes that can serve as arbiters. Once a channel is opened between two nodes they choose one node from both lists to serve as their mutual arbiter for that channel.
Then whenever one node tries to fulfill an HTLC but the other peer is unresponsive, they can send the preimage to the arbiter instead. The arbiter will then try to contact the unresponsive peer. If it succeeds, then done, the HTLC was fulfilled offchain. If it fails then it can keep trying until the HTLC timeout. And then if the other node comes back later they can eat the loss. The arbiter will ensure they know they are the ones who must eat the loss in this case. If they don't agree to eat the loss, the first peer may then close the channel and blacklist the other peer. If the other peer believes that both the first peer and the arbiter are dishonest they can remove that arbiter from their list of trusted arbiters.
The same happens in the opposite case: if a peer doesn't get a preimage they can notify the arbiter they hadn't received anything. The arbiter may try to ask the other peer for the preimage and, if that fails, settle the dispute for the side of that first peer, which can proceed to fail the HTLC is has with someone else on that route.
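To illustrate the flow being proposed here (all names, timings and the retry policy are assumptions; this is a sketch of the idea, not a specification):

```python
import hashlib
import time

def arbiter_handle_fulfill(payment_hash, preimage, try_deliver_to_peer, htlc_expiry):
    """Called when the fulfilling side could not reach its peer and hands the
    preimage to the arbiter before the trimmed HTLC times out."""
    if hashlib.sha256(preimage).digest() != payment_hash:
        return "rejected: preimage does not match the payment hash"
    while time.time() < htlc_expiry:
        if try_deliver_to_peer(preimage):
            return "settled offchain: peer accepted the preimage"
        time.sleep(60)  # keep retrying until the HTLC timeout
    # the peer that stayed offline is the one expected to eat the loss
    return "timeout: loss recorded against the unresponsive peer"
```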
-
@ 0176967e:1e6f471e
2024-07-21 11:28:18What do exotic protocols like Nostr, Cashu or Reticulum bring us? Encryption, signing, peer to peer communication, new ways of distributing and rewarding content.

We will show some cool apps, how the individual networks can be interconnected and how they relate to each other.
-
@ 0176967e:1e6f471e
2024-07-21 11:24:21Entrepreneurship is a language with "crystal clear" rules. Instrumentalists see entrepreneurship statically, and they project this view onto society. That is why society often perceives us negatively. Real entrepreneurs, however, are "communicators".

Jozef Martiniak is the founder of AUSEKON - Institute of Austrian School of Economics.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Problems with Russell Kirk

The central idea of Russell Kirk's "politics of prudence[^1]" seems very correct to me, although it was formulated worse in his enormous book than in a short sentence by the Joan-of-Arcist Lucas Souza: "conservatism is important, because there are a lot of people with wrong ideas out there, and we may not know how to tell them apart".

However, there are some problems that need to be clarified, or better explained, and that prevent me from seeing his arguments as a final refutation of my already so humble (though ferocious) anarchism. They are:

I. I sense something wrong, I am not sure exactly where, between the claim that all ideology is bad, or that "all ideologies cause confusion[^2]", and the conservative proposal to "preserve the world of order that we have inherited, even if in an imperfect state, from our ancestors[^3]". Now, without needing to fall back on examples like that of the English Conservative Party -- which kept English politics always where it was, and alternated in government with the Labour Party, which took it each time a little further to the left --, embedded in that sentence there is, perhaps, the idea, at once clear and fiercely fought by conservatives themselves, that the history of humanity is a history of linear progress towards a better situation.

Does wanting to preserve the world of order that we inherited mean also preserving the various errors that may have been committed by our most recent ancestors, and preserving them anyway, accusing each and every attempt to propose solutions to those errors of being ideology? Or is preserving the world of order choosing a specific period that is held to be the peak of human history and trying to restore it in our own time? Wouldn't that be ideology?

Or, still, is preserving the world of order selecting, among various periods of the past, some pieces that the conservative considers great in each society, making from that a mixture of an ideal society based on the past and then trying to implement it? Who would be able to say which are the right parts?

II. On the question of what keeps civil society cohesive, Russell Kirk, opposing it to the libertarian position that the nexus of society is self-interest, declares that the conservative position is that "society is a community of souls, joining the dead, the living, and those yet unborn, and that it is harmonized by what Aristotle called friendship and Christians call charity or love of neighbor".

This is a very correct position, but it seems to me to be in contradiction with the defense of the State that he makes on the same page and the following one. What seems wrong to me is that society cannot be, at the same time, a "community based on love of neighbor" and a community that "requires not only that the passions of individuals be subjugated, but that, even in the people and the social body, as well as in individuals, the inclinations of men must often be thwarted, the will controlled and the passions subjugated" and, worse, that "this can only be done by a power outside of them".

From this we can conclude that, in the same way that Kirk defines the libertarian position as being that self-interest is what keeps civil society cohesive, the conservative position would then be that this cohesion comes only from the State, and not from any bond between the living and the dead, or from love of neighbor. Since, without the State, he says, quoting Thomas Hobbes, the condition of man is "solitary, poor, nasty, brutish, and short"?

[^1]: this is the name of the book and also another name he gives to conservatism itself (p.99). [^2]: p. 101 [^3]: p. 102
-
@ b0a838f2:34ed3f19
2025-05-23 17:52:10- ACP Admin - CSA administration. Manage members, subscriptions, deliveries, drop-off locations, member participation, invoices and emails (documentation in French). (Source Code)
MIT
Ruby
- E-Label - Solution for electronic labels, with QR Codes, on wine bottles sold within the European Union. (Source Code)
MIT
Docker
- FoodCoopShop - User-friendly software for food-coops. (Source Code)
AGPL-3.0
PHP/Docker
- Foodsoft - Manage a non-profit food coop (product catalog, ordering, accounting, job scheduling). (Source Code)
AGPL-3.0
Docker/Ruby
- juntagrico - Management platform for community gardens and vegetable cooperatives. (Source Code)
LGPL-3.0
Python
- Open Food Network - Online marketplace for local food. It enables a network of independent online food stores that connect farmers and food hubs with individuals and local businesses. (Source Code)
AGPL-3.0
Ruby
- OpenOlitor - Administration platform for Community Supported Agriculture groups. (Source Code)
AGPL-3.0
Scala
- teikei - A web application that maps out community-supported agriculture based on crowdsourced data. (Demo)
AGPL-3.0
Nodejs
-
@ 0176967e:1e6f471e
2024-07-21 11:20:40How I try to practice LunarPunk without building optionality by "leaving" for abroad. Not everyone is willing or able to change "place", but how do you minimize interaction with the state in that case? Not a guide, rather observations from everyday life.
-
@ 0176967e:1e6f471e
2024-07-20 08:28:00This year a workshop on the topic of "orange summer" with Juraj Bednár and Marianna Sádecká awaits you. You will learn how the experience with Bitcoin changes our perception, how to navigate today's world and how to remove the mental fog caused by fiat life.

An extra ticket is required for the workshop (you can also buy it on site).

For more information about the orange summer, we recommend listening to the podcast on this topic before the workshop.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28A command line utility to create and manage personal graphs, then write them to dot and make images with graphviz.
It manages a bunch of YAML files, one for each entity in the graph. Each file lists the incoming and outgoing links it has (it could have listed only the outgoing ones, now that I'm thinking about it).
Each run of the tool lets you select from existing nodes or add new ones to generate a single link type from one to one, one to many, many to one or many to many -- then updates the YAML files accordingly.
It also includes a command that generates graphs with graphviz, and it can accept a template file that lets you customize the `dot` that is generated and thus the graphviz graph.

rel
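A rough sketch of the mechanism (not the actual tool; the file layout and field names below are invented for the example): one YAML file per entity, folded into a `dot` document.

```python
import glob
import yaml  # PyYAML

def build_dot(directory):
    lines = ["digraph g {"]
    for path in glob.glob(f"{directory}/*.yaml"):
        with open(path) as f:
            entity = yaml.safe_load(f)
        name = entity["name"]
        lines.append(f'  "{name}";')
        # hypothetical layout: "outgoing" maps a link type to a list of targets
        for link_type, targets in entity.get("outgoing", {}).items():
            for target in targets:
                lines.append(f'  "{name}" -> "{target}" [label="{link_type}"];')
    lines.append("}")
    return "\n".join(lines)

print(build_dot("entities"))  # pipe the output into `dot -Tpng > graph.png`
```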
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28GraphQL vs REST
Today I saw this: https://github.com/stickfigure/blog/wiki/How-to-(and-how-not-to)-design-REST-APIs
And it reminded me why GraphQL is so much better.
It has also reminded me why HTTP is so confusing and awful as a protocol, especially as a protocol for structured data APIs, with all its status codes and headers and bodies and querystrings and content-types -- but let's not talk about that for now.
People complain about GraphQL being great for frontend developers and bad for backend developers, but I don't know who these people are that apparently love reading guides like the one above on how to properly construct ad-hoc path routers, decide how to properly build the JSON, what to include and in which circumstance, what status codes and headers to use, all without having any idea of what the frontend or the API consumer will want to do with their data.

It is a much less stressful environment, that one in which we can just actually perform the task and fit the data in a preexistent schema with types and a structure that we don't have to decide again and again while anticipating with very incomplete knowledge the usage of an extraneous person -- i.e., an environment with GraphQL, or something like GraphQL.
By the way, I know there are some people that say that these HTTP JSON APIs are not the real REST, but that is irrelevant for now.
-
@ b0a838f2:34ed3f19
2025-05-23 17:51:53- Converse.js - XMPP chat client in your browser. (Source Code)
MPL-2.0
Javascript
- JSXC - Real-time XMPP web chat application with video calls, file transfer and encrypted communication. There are also versions for Nextcloud/Owncloud and SOGo. (Source Code)
MIT
Javascript
- Libervia - Web frontend from Salut à Toi.
AGPL-3.0
Python
- Salut à Toi - Multipurpose, multi frontend, libre and decentralized communication tool. (Source Code)
AGPL-3.0
Python
-
@ 7361ca91:252fce6d
2024-07-02 14:33:05Science fiction is a speculative exercise in imagining possible futures. Sci-fi expands the space of possibilities. Cryptocurrency is an extreme kind of sci-fi: it not only offers a vision of the future, it also provides the tools to realize that future. Cryptocurrency is currently animated by a sci-fi genre called solarpunk. Solarpunk, which evolved out of cyberpunk, is a utopian vision of the future characterized by optimism. For solarpunk, the future is bright. Solarpunk pushes away cyberpunk's dystopian shadows and illuminates the world beyond the chaotic horizon. On many popular DeFi chains, solarpunk hackers are building transparent infrastructure for funding public goods. The shared belief is simple: public access to a decentralized, transparent financial system will bring about a fairer and more just world. Solarpunk is the conscious mind of cryptocurrency: bright, confident, and future-oriented.

But the counterpart to solarpunk's faith is lunarpunk's skepticism. Lunarpunks are the solars' shadow selves. They are the unconscious of this cycle. While solarpunks join DAOs, lunarpunks prepare for war and build privacy-enhancing tools to protect their communities. Lunarpunk first appeared as a subset of solarpunk, always preferring encryption over the plaintext paradigm offered by Ethereum and similar chains. As time passed, the tensions produced by solarpunk's tendencies kept growing. Lunarpunk drifted away from the solarpunk legacy and began to assert an identity of its own.

In the lunarpunk imagination, conflict between cryptocurrency and existing power structures is essentially programmed in. Regulation pushes cryptocurrency underground and anonymity increases. The lunarpunk vision is rejected as a bearish nightmare. Its fundamental conflict - the state banning cryptocurrency - is dismissed by solarpunks because it produces the kind of fear that makes people run away with their money. Solarpunk optimism has become synonymous with the bull market cycle, and pessimism is associated with the bear. Lunarpunk offers something beyond this simple escalation: a moment of insight between market cycles, a glitch in the hologram, a moment when the source code shines through.

Solarpunk is fragile - something that breaks when shaken. Antifragile is what absorbs shocks and grows stronger. Consider the following: cryptocurrency's core innovation is adaptive. It empowers users while spreading out its attack surface in equal measure. User empowerment is negatively correlated with fragility. The more significant the user base, the more antifragile the network. User empowerment and system antifragility form a positive feedback loop with each other.

But this cycle also runs in reverse. In a transparent system, users are exposed. When the external environment turns hostile, that information can be weaponized against them. Users facing persecution choose to exit, which triggers a descent into fragility. The solarpunk mindset is inherently optimistic. The transparency of solarpunk systems is the spirit of optimism projected outward. By building transparent systems, solarpunks are saying "we trust that the law will not turn against us." The emphasis on optimism prevents preparing for the worst. This is the core of solarpunk's fragility.

Dark optionality and antifragility depend on unknowability. The future is dark and cannot be predicted with any meaningful certainty. Optionality is called the weapon of antifragility because it turns that darkness to its advantage. Optionality assumes that prophecies are wrong most of the time. Being wrong is cheap, while being right is disproportionately rewarded. Lunarpunk embraces optionality because it succeeds in the worst case. If the lunarpunk hypothesis is wrong, the supercycle continues. If it is right, cryptocurrency moves on to its next stage with the proper defenses in place. Being properly defended means using encryption to protect users' identities and activity.

The prophecy

Anonymity first arises as an adaptation to mass surveillance. But its very existence further legitimizes surveillance efforts. This is a positive feedback loop suggesting that anonymity and surveillance are destined to escalate. If this loop runs long enough, it triggers the next stage of the lunarpunk prophecy: the regulatory trap. In this stage, governments use the rise in anonymity as a pretext to bring their full power to bear against cryptocurrency. But by cracking down on cryptocurrency, the hostile powers only strengthen its justification. Cryptocurrency's usefulness correlates with the degree of the crackdown. With every blow it takes, it expands disproportionately.

The lunar cycle

The sun, through its transparency and its insistence on identity, is a symbol of nature and also a symbol of tyranny. Solarpunk inherits the dual character of its central symbol. Solarpunk systems are a desert landscape in which users stand exposed. Lunarpunk is like a forest: a dense cover of encryption protects the tribe and offers refuge to the persecuted. The trees provide a crucial line of defense. The lunar landscape is dark, yet teeming with life. Lunar tech is owned and operated by the people themselves, for freedom. The lunar cycle favors democratic technology over authoritarian technology: freedom over surveillance, diversity over monoculture. By emphasizing transparency in its systems, solarpunk is tragically crafting its own fate. Surveillance, the mechanism of authoritarianism, is bound up with solarpunk's destiny. For solarpunk to succeed, it must integrate the lunarpunk unconscious. Solarpunk's only hope of success is to move into darkness.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28nostr - Notes and Other Stuff Transmitted by Relays
The simplest open protocol that is able to create a censorship-resistant global "social" network once and for all.
It doesn't rely on any trusted central server, hence it is resilient; it is based on cryptographic keys and signatures, so it is tamperproof; it does not rely on P2P techniques, therefore it works.
Very short summary of how it works, if you don't plan to read anything else:
Everybody runs a client. It can be a native client, a web client, etc. To publish something, you write a post, sign it with your key and send it to multiple relays (servers hosted by someone else, or yourself). To get updates from other people, you ask multiple relays if they know anything about these other people. Anyone can run a relay. A relay is very simple and dumb. It does nothing besides accepting posts from some people and forwarding to others. Relays don't have to be trusted. Signatures are verified on the client side.
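For a concrete feel of what "sign it with your key" means, here is a minimal Python sketch of assembling a note and computing its id as specified in NIP-01. The pubkey below is a placeholder, and the final Schnorr (BIP-340) signature step is left to whatever secp256k1 library you prefer.

```python
import hashlib
import json
import time

def event_id(pubkey, created_at, kind, tags, content):
    # NIP-01: id = sha256 of the serialized array [0, pubkey, created_at, kind, tags, content]
    serialized = json.dumps([0, pubkey, created_at, kind, tags, content],
                            separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

pubkey = "e9c5..."  # hypothetical hex-encoded x-only public key
event = {
    "pubkey": pubkey,
    "created_at": int(time.time()),
    "kind": 1,      # a plain text note
    "tags": [],
    "content": "hello relays",
}
event["id"] = event_id(event["pubkey"], event["created_at"], event["kind"],
                       event["tags"], event["content"])
# event["sig"] = schnorr_sign(seckey, event["id"])  # BIP-340, via a library of your choice
print(event["id"])
```

The resulting JSON object is what gets sent to every relay you choose; relays and other clients verify the signature against the pubkey, which is why none of them has to be trusted.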
This is needed because other solutions are broken:
The problem with Twitter
- Twitter has ads;
- Twitter uses bizarre techniques to keep you addicted;
- Twitter doesn't show an actual historical feed from people you follow;
- Twitter bans people;
- Twitter shadowbans people.
- Twitter has a lot of spam.
The problem with Mastodon and similar programs
- User identities are attached to domain names controlled by third-parties;
- Server owners can ban you, just like Twitter; Server owners can also block other servers;
- Migration between servers is an afterthought and can only be accomplished if servers cooperate. It doesn't work in an adversarial environment (all followers are lost);
- There are no clear incentives to run servers, therefore they tend to be run by enthusiasts and people who want to have their name attached to a cool domain. Then, users are subject to the despotism of a single person, which is often worse than that of a big company like Twitter, and they can't migrate out;
- Since servers tend to be run amateurishly, they are often abandoned after a while — which is effectively the same as banning everybody;
- It doesn't make sense to have a ton of servers if updates from every server will have to be painfully pushed (and saved!) to a ton of other servers. This point is exacerbated by the fact that servers tend to exist in huge numbers, therefore more data has to be passed to more places more often;
- For the specific example of video sharing, ActivityPub enthusiasts realized it would be completely impossible to transmit video from server to server the way text notes are, so they decided to keep the video hosted only from the single instance where it was posted to, which is similar to the Nostr approach.
The problem with SSB (Secure Scuttlebutt)
- It doesn't have many problems. I think it's great. In fact, I was going to use it as a basis for this, but
- its protocol is too complicated because it wasn't thought about being an open protocol at all. It was just written in JavaScript in probably a quick way to solve a specific problem and grew from that, therefore it has weird and unnecessary quirks like signing a JSON string which must strictly follow the rules of ECMA-262 6th Edition;
- It insists on having a chain of updates from a single user, which feels unnecessary to me and something that adds bloat and rigidity to the thing — each server/user needs to store all the chain of posts to be sure the new one is valid. Why? (Maybe they have a good reason);
- It is not as simple as Nostr, as it was primarily made for P2P syncing, with "pubs" being an afterthought;
- Still, it may be worth considering using SSB instead of this custom protocol and just adapting it to the client-relay server model, because reusing a standard is always better than trying to get people in a new one.
The problem with other solutions that require everybody to run their own server
- They require everybody to run their own server;
- Sometimes people can still be censored in these because domain names can be censored.
How does Nostr work?
- There are two components: clients and relays. Each user runs a client. Anyone can run a relay.
- Every user is identified by a public key. Every post is signed. Every client validates these signatures.
- Clients fetch data from relays of their choice and publish data to other relays of their choice. A relay doesn't talk to another relay, only directly to users.
- For example, to "follow" someone a user just instructs their client to query the relays it knows for posts from that public key.
- On startup, a client queries data from all relays it knows for all users it follows (for example, all updates from the last day), then displays that data to the user chronologically.
- A "post" can contain any kind of structured data, but the most used ones are going to find their way into the standard so all clients and relays can handle them seamlessly.
How does it solve the problems the networks above can't?
- Users getting banned and servers being closed
- A relay can block a user from publishing anything there, but that has no effect on them as they can still publish to other relays. Since users are identified by a public key, they don't lose their identities and their follower base when they get banned.
- Instead of requiring users to manually type new relay addresses (although this should also be supported), whenever someone you're following posts a server recommendation, the client should automatically add that to the list of relays it will query.
- If someone is using a relay to publish their data but wants to migrate to another one, they can publish a server recommendation to that previous relay and go;
- If someone gets banned from many relays such that they can't get their server recommendations broadcasted, they may still let some close friends know through other means with which relay they are publishing now. Then, these close friends can publish server recommendations to that new server, and slowly, the old follower base of the banned user will begin finding their posts again from the new relay.
- All of the above is valid too for when a relay ceases its operations.
- Censorship-resistance
- Each user can publish their updates to any number of relays.
- A relay can charge a fee (the negotiation of that fee is outside of the protocol for now) from users to publish there, which ensures censorship-resistance (there will always be some Russian server willing to take your money in exchange for serving your posts).
- Spam
- If spam is a concern for a relay, it can require payment for publication or some other form of authentication, such as an email address or phone, and associate these internally with a pubkey that then gets to publish to that relay — or other anti-spam techniques, like hashcash or captchas. If a relay is being used as a spam vector, it can easily be unlisted by clients, which can continue to fetch updates from other relays.
- Data storage
- For the network to stay healthy, there is no need for hundreds of active relays. In fact, it can work just fine with just a handful, given the fact that new relays can be created and spread through the network easily in case the existing relays start misbehaving. Therefore, the amount of data storage required, in general, is relatively less than Mastodon or similar software.
- Or considering a different outcome: one in which there exist hundreds of niche relays run by amateurs, each relaying updates from a small group of users. The architecture scales just as well: data is sent from users to a single server, and from that server directly to the users who will consume that. It doesn't have to be stored by anyone else. In this situation, it is not a big burden for any single server to process updates from others, and having amateur servers is not a problem.
- Video and other heavy content
- It's easy for a relay to reject large content, or to charge for accepting and hosting large content. When information and incentives are clear, it's easy for the market forces to solve the problem.
- Techniques to trick the user
- Each client can decide how to best show posts to users, so there is always the option of just consuming what you want in the manner you want — from using an AI to decide the order of the updates you'll see to just reading them in chronological order.
FAQ
- This is very simple. Why hasn't anyone done it before?
I don't know, but I imagine it has to do with the fact that people making social networks are either companies wanting to make money or P2P activists who want to make a thing completely without servers. They both fail to see the specific mix of both worlds that Nostr uses.
- How do I find people to follow?
First, you must know them and get their public key somehow, either by asking or by seeing it referenced somewhere. Once you're inside a Nostr social network you'll be able to see them interacting with other people and then you can also start following and interacting with these others.
- How do I find relays? What happens if I'm not connected to the same relays someone else is?
You won't be able to communicate with that person. But there are hints on events that can be used so that your client software (or you, manually) knows how to connect to the other person's relay and interact with them. There are other ideas on how to solve this too in the future but we can't ever promise perfect reachability, no protocol can.
- Can I know how many people are following me?
No, but you can get some estimates if relays cooperate in an extra-protocol way.
- What incentive is there for people to run relays?
The question is misleading. It assumes that relays are free dumb pipes that exist such that people can move data around through them. In this case yes, the incentives would not exist. This in fact could be said of DHT nodes in all other p2p network stacks: what incentive is there for people to run DHT nodes?
- Nostr enables you to move between server relays or use multiple relays but if these relays are just on AWS or Azure what’s the difference?
There are literally thousands of VPS providers scattered all around the globe today, there is not only AWS or Azure. AWS or Azure are exactly the providers used by single centralized service providers that need a lot of scale, and even then not just these two. For smaller relay servers any VPS will do the job very well.
-
@ b0a838f2:34ed3f19
2025-05-23 17:51:36- ejabberd - XMPP instant messaging server. (Source Code)
GPL-2.0
Erlang/Docker
- MongooseIM - Mobile messaging platform with a focus on performance and scalability. (Source Code)
GPL-2.0
Erlang/Docker/K8S
- Openfire - Real time collaboration (RTC) server. (Source Code)
Apache-2.0
Java
- Prosody IM - Feature-rich and easy to configure XMPP server. (Source Code)
MIT
Lua
- Snikket - All-in-one Dockerized easy XMPP solution, including web admin and clients. (Source Code, Clients)
Apache-2.0
Docker
- Tigase - XMPP server implementation in Java. (Source Code)
GPL-3.0
Java
-
@ b0a838f2:34ed3f19
2025-05-23 17:51:16- BigBlueButton - Supports real-time sharing of audio, video, slides (with whiteboard controls), chat, and the screen. Instructors can engage remote students with polling, emojis, and breakout rooms. (Source Code)
LGPL-3.0
Java
- Galene - Video conferencing server that is easy to deploy and that requires moderate server resources. (Source Code)
MIT
Go
- Janus - General-purpose, lightweight, minimalist WebRTC Server. (Demo, Source Code)
GPL-3.0
C
- Jitsi Meet - WebRTC application that uses Jitsi Videobridge to provide high quality, scalable video conferences. (Demo, Source Code)
Apache-2.0
Nodejs/Docker/deb
- Jitsi Video Bridge - WebRTC compatible Selective Forwarding Unit (SFU) that allows for multiuser video communication. (Source Code)
Apache-2.0
Java/deb
- MiroTalk C2C - Real-time cam-2-cam video calls & screen sharing, end-to-end encrypted, to embed in any website with a simple iframe. (Source Code)
AGPL-3.0
Nodejs/Docker
- MiroTalk P2P - Simple, secure, fast real-time video conferences up to 4k and 60fps, compatible with all browsers and platforms. (Demo, Source Code)
AGPL-3.0
Nodejs/Docker
- MiroTalk SFU - Simple, secure, scalable real-time video conferences up to 4k, compatible with all browsers and platforms. (Demo, Source Code)
AGPL-3.0
Nodejs/Docker
- plugNmeet - Scalable and high performance web conferencing system. (Demo, Source Code)
MIT
Docker/Go
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28LessPass remoteStorage
LessPass is a nice idea: a password manager without any state. Just remember one master password and you can generate a different one for every site using the power of hashes.
But it has a very bad issue: some sites require just numbers, others have a minimum or maximum character limits, some require non-letter characters, uppercase characters, others forbid these and so on.
The solution: to allow you to specify parameters when generating the password so you can fit a generated password on every service.
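A minimal sketch of what such parameterized, stateless generation could look like (this is not LessPass's actual algorithm; the PBKDF2 choice and the parameter names are assumptions for the example):

```python
import hashlib
import string

def derive_password(master, site, login, length=16,
                    charset=string.ascii_letters + string.digits):
    # derive site-specific bytes from the master password and the site/login pair
    material = hashlib.pbkdf2_hmac("sha256", master.encode("utf-8"),
                                   f"{site}:{login}".encode("utf-8"),
                                   100_000)  # arbitrary iteration count for the sketch
    # map the derived bytes onto the allowed characters
    return "".join(charset[b % len(charset)] for b in material[:length])

print(derive_password("correct horse battery staple", "example.com", "bob"))
print(derive_password("correct horse battery staple", "example.com", "bob",
                      length=8, charset=string.digits))
```

The `length` and `charset` arguments are exactly the per-site state discussed next.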
The problem with the solution: it creates state. Now you must remember what parameters you used when generating a password for each site.
This was a way to store these settings on a remoteStorage bucket. Since it isn't confidential information in any way, that wasn't a problem, and I thought it was a good fit for remoteStorage.
Some time later I realized it maybe would be better to have a centralized repository hosting all weird requirements for passwords each domain forced on its users, and let LessPass use data from that central place when generating a password. Still stateful, not ideal, not very far from a centralized password manager, but still requiring less trust and less cryptographic assumptions.
- https://github.com/fiatjaf/lesspass-remotestorage
- https://addons.mozilla.org/firefox/addon/lesspass-remotestorage/
- https://chrome.google.com/webstore/detail/lesspass-remotestorage/aogdpopejodechblppdkpiimchbmdcmc
- https://lesspass.alhur.es/
-
@ 7361ca91:252fce6d
2024-06-29 10:13:51In liberalism, freedom means allowing individuals to make decisions that others may find inappropriate, as long as those decisions do not harm third parties. In other words, freedom protects each individual's right to construct and carry out their own life plan, including decisions that others consider mistaken.

There is a big difference between criticizing an individual's decision and using coercive force to prevent that decision.

Liberalism holds that even if we can criticize bad decisions, we should not use (state) coercion to prevent those decisions, except when they harm others.

Epistemic prudence in liberalism recognizes that we cannot always be certain which decisions are correct. Therefore, we must allow individuals the freedom to make their own decisions, even when those decisions seem mistaken to others.

A very common example is drug use. From a liberal point of view, individuals should have the freedom to use drugs even if others may regard it as a bad decision. The key is that these personal decisions do not harm third parties.

Liberal philosophy places great importance on individual freedom and self-determination. It takes the position that the right to make decisions others may consider bad should also be respected, and that (the state) should not use coercion to prevent them as long as they do not harm third parties. This epistemic prudence is one of the cores of liberalism.
-
@ b0a838f2:34ed3f19
2025-05-23 17:50:59- Akkoma - Federated microblogging server with Mastodon, GNU social, and ActivityPub compatibility. (Source Code)
AGPL-3.0
Elixir/Docker
- Answer - Knowledge-based community software. You can use it to quickly build your Q&A community for product technical support, customer support, user communication, and more. (Source Code)
Apache-2.0
Docker/Go
- Artalk - Comment system built in Golang, providing a lightweight and highly customizable solution for adding comments to your website. (Source Code)
MIT
Go/Docker
- AsmBB - Fast, SQLite-powered forum engine written in ASM. (Source Code)
EUPL-1.2
Assembly
- BuddyPress - Powerful plugin that takes your WordPress.org powered site beyond the blog with social-network features like user profiles, activity streams, user groups, and more. (Source Code)
GPL-2.0
PHP
- Chirpy - Privacy-friendly and customizable Disqus (comment system) alternate. (Demo, Source Code)
AGPL-3.0
Docker/Nodejs
- Coral - A better commenting experience from Vox Media. (Source Code)
Apache-2.0
Docker/Nodejs
- diaspora* - Distributed social networking server. (Source Code)
AGPL-3.0
Ruby
- Discourse - Advanced forum / community solution based on Ruby and JS. (Demo, Source Code)
GPL-2.0
Docker
- Elgg - Powerful open source social networking engine. (Source Code)
GPL-2.0
PHP
- Enigma 1/2 BBS - Enigma 1/2 is a modern, multi-platform BBS engine with unlimited "callers" and legacy DOS door game support. (Source Code)
BSD-2-Clause
Shell/Docker/Nodejs
- Flarum - Delightfully simple forums. Flarum is the next-generation forum software that makes online discussion fun again. (Source Code)
MIT
PHP
- Friendica - Social Communication Server. (Source Code)
AGPL-3.0
PHP
- GoToSocial - ActivityPub federated social network server implementing the Mastodon client API. (Source Code)
AGPL-3.0
Docker/Go
- Hatsu - Bridge that interacts with Fediverse on behalf of your static site. (Source Code)
AGPL-3.0
Docker/Rust
- Hubzilla - Decentralized identity, privacy, publishing, sharing, cloud storage, and communications/social platform. (Source Code)
MIT
PHP
- HumHub - Flexible kit for private social networks. (Source Code)
AGPL-3.0
PHP
- Isso - Lightweight commenting server written in Python and Javascript. It aims to be a drop-in replacement for Disqus. (Source Code)
MIT
Python/Docker
- Lemmy - Link aggregator for the fediverse (alternative to Reddit). (Source Code)
AGPL-3.0
Docker/Rust
- Loomio - Collaborative decision-making tool that makes it easy for anyone to participate in decisions which affect them. (Source Code)
AGPL-3.0
Docker
- Mastodon - Federated microblogging server. (Source Code, Clients)
AGPL-3.0
Ruby
- Misago - Fully featured modern forum application that is fast, scalable and responsive. (Source Code)
GPL-2.0
Docker
- Misskey - Decentralized app-like microblogging server/SNS for the Fediverse, using the ActivityPub protocol like GNU social and Mastodon. (Source Code)
AGPL-3.0
Nodejs/Docker
- Movim - Modern, federated social network based on XMPP, with a fully featured group-chat, subscriptions and microblogging. (Source Code)
AGPL-3.0
PHP/Docker
- MyBB - Free, extensible forum software package. (Source Code)
LGPL-3.0
PHP
- NodeBB - Forum software built for the modern web. (Demo, Source Code)
GPL-3.0
Nodejs/Docker
- OSSN - Social networking software that allows you to make a social networking website and helps your members build social relationships, with people who share similar professional or personal interests. (Source Code)
CAL-1.0
PHP
- phpBB - Flat-forum bulletin board software solution that can be used to stay in touch with a group of people or can power your entire website. (Source Code)
GPL-2.0
PHP
- PixelFed - Ethical photo sharing platform, powered by ActivityPub federation (alternative to Instagram). (Source Code)
AGPL-3.0
PHP
- Pleroma - Federated microblogging server, Mastodon, GNU social, & ActivityPub compatible. (Source Code)
AGPL-3.0
Elixir
- qpixel - Q&A-based community knowledge-sharing software. (Source Code)
AGPL-3.0
Ruby
- Redlib ⚠ - An alternative private front-end to Reddit, with its origins in Libreddit.
AGPL-3.0
Rust
- remark42 - A lightweight and simple comment engine, which doesn't spy on users. It can be embedded into blogs, articles or any other place where readers add comments. (Demo, Source Code)
MIT
Docker/Go
- Retrospring - A free, open-source social network following the Q/A (question and answer) principle of sites like Formspring, ask.fm or CuriousCat. (Demo)
AGPL-3.0
Ruby/Nodejs
- Scoold - Stack Overflow in a JAR. An enterprise-ready Q&A platform with full-text search, SAML, LDAP integration and social login support. (Demo, Source Code)
Apache-2.0
Java/Docker/K8S
- Simple Machines Forum - Free, professional grade software package that allows you to set up your own online community within minutes. (Source Code)
BSD-3-Clause
PHP
- Socialhome - Federated and decentralized profile builder and social network engine. (Demo, Source Code)
AGPL-3.0
Docker/Python
- Takahē - Federated microblogging server. Mastodon, & ActivityPub compatible. (Source Code)
BSD-3-Clause
Docker
- Talkyard - Create a community, where your users can suggest ideas and get questions answered. And have friendly open-ended discussions and chat (Slack/StackOverflow/Discourse/Reddit/Disqus hybrid). (Demo, Source Code)
AGPL-3.0
Docker/Scala
- yarn.social - Self-Hosted, Twitter™-like Decentralised micro-logging platform. No ads, no tracking, your content, your data. (Source Code)
MIT
Go
- Zusam - Free and open-source way to self-host private forums for groups of friends or family. (Demo)
AGPL-3.0
PHP
-
@ 862fda7e:02a8268b
2024-06-29 08:18:20I am someone who thinks independently without abiding to a group to pre-formulate my opinions for me. I do not hold my opinions out of impulse, out of the desire to please, nor out of mindless apadtion to what others abide to. My opinions are held on what I belive is the most logical while being the most ethical and empathetic. We live in a world with a nervous systems and emotions for animals and humans (same thing) alike, thus, we should also consider those feelings. That is not the case in our world.
Cyclists are one of the most homosexual GAY ANNOYING people to exist on EARTH
I hate cyclists with a burning passion. These faggots are the GAYEST MOST ANNOYING retards to exist. They wear the tightest fitting clothing possible to show off their flaccid cocks to each other and to anybody around them.
And if that weren't enough, they present their ass up in the air, begging to be fucked by their cyclist buddies, as they ride their bike in the middle of the road. It's homosexual.
Look at the seat they ride on, it looks like a black cock about to go up their ass. Don't get me started on their gay helmets, the "aerodynamic" helmets they wear. YOU FAGGOTS AREN'T IN THE TOUR DE FRANCE, YOU'RE IN FRONT OF MY CAR IN AN INTERSECTION IN A MINNESOTA TOWN WITH A POPULATION OF 5,000. They LIKE the look of the costume. And that's just what it is - a costume. You're required to have a "look" as a cyclist - you aren't really a cyclist if you don't spend hundreds of dollars on the gay gear. God forbid you just get a bike and ride it around on a trail like anyone else. These people LIKE to be seen. They WANT to be seen as cool, which is why they ride right in front of my fucking car at 15mph in a 45mph zone. I swear, every time I pass one of these cyclists, I am this close to yelling "FAGGOT" out the window at them. The only reason I haven't is because they like to record everything on their gay bikes and upload it to Youtube, so then I'd have to deal with you people knowing where I live just because I called some fruit on a bike a faggot. Think I'm exaggerating? Think again. These homos have an entire community built on "catching" drivers who dare drive too close or blow their exhaust at the poor little faggy cyclist. There's Youtube channels dedicated to this. Part of their culture is being a victim by cars. Almost like it's dangerous to be in the middle of the road going 15mph on a 45mph road. Oh but I'm sure cars almost hitting you is surely a personal attack and nothing to do with the fact that what you're doing is DANGEROUS YOU RETARD.
I've seen these "share the road" signs in the most insane and dangerous places. I've seen them on HIGHWAYS, yes, HIGHWAYS, where the cyclist would BARELY have any room next to the car. It's insanely dangerous and I guess to some people, the constant threat of dying is fun... until it actually happens. I will never understand the mind of a cyclist. You are not in the fricken' Tour de France. You look like a homosexual that's inconveniencing HEAVY METAL HIGH POWERED CARS RIGHT BEHIND YOU. It's incredibly dangerous, and you can't rely on the very heavy, high powered cars and the people driving them to honor your life. Road rage is real, you might be the tipping point for some angry old boomer in his Ram to RAM INTO YOU and kill you. God I hate cyclists, their gay look, their cocky "better than you" attitude. Hey fudge stripe, in a battle between my CAR and your soft body, my CAR WILL WIN. Get off the road and go suck some cocks instead. Stop riding the bikes and go ride cocks instead, you homo.
-
@ b0a838f2:34ed3f19
2025-05-23 17:50:31- Asterisk - Easy to use but advanced IP PBX system, VoIP gateway and conference server. (Source Code)
GPL-2.0
C/deb
- Eqivo - Eqivo implements an API layer on top of FreeSWITCH facilitating integration between web applications and voice/video-enabled endpoints such as traditional phone lines (PSTN), VoIP phones, webRTC clients etc. (Source Code)
MIT
Docker/PHP
- Flexisip - Complete, modular and scalable SIP server, includes a push gateway, to deliver SIP incoming calls or text messages on mobile device platforms where push notifications are required to receive information when the app is not active in the foreground. (Source Code)
AGPL-3.0
C/Docker
- Freepbx - Web-based open source GUI that controls and manages Asterisk. (Source Code)
GPL-2.0
PHP
- FreeSWITCH - Scalable open source cross-platform telephony platform. (Source Code)
MPL-2.0
C
- FusionPBX - Web interface for multi-platform voice switch called FreeSWITCH. (Source Code)
MPL-1.1
PHP
- Kamailio - Modular SIP server (registrar/proxy/router/etc). (Source Code)
GPL-2.0
C/deb
- openSIPS - SIP proxy/server for voice, video, IM, presence and any other SIP extensions. (Source Code)
GPL-2.0
C
- Routr - A lightweight sip proxy, location server, and registrar for a reliable and scalable SIP infrastructure. (Source Code)
MIT
Docker/K8S
- SIP3 - VoIP troubleshooting and monitoring platform. (Demo, Source Code)
Apache-2.0
Java
- SIPCAPTURE Homer - Troubleshooting and monitoring VoIP calls. (Source Code)
AGPL-3.0
Nodejs/Go/Docker
- Wazo - Full-featured IPBX solution built atop Asterisk with integrated Web administration interface and REST-ful API. (Source Code)
GPL-3.0
Python
- Yeti-Switch - Transit class4 softswitch(SBC) with integrated billing and routing engine and REST API. (Demo, Source Code)
GPL-2.0
C++/Ruby
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28idea: Custom multi-use database app
Since 2015 I have this idea of making one app that could be repurposed into a full-fledged app for all kinds of uses, like powering small businesses accounts and so on. Hackable and open as an Excel file, but more efficient, without the hassle of making tables and also using ids and indexes under the hood so different kinds of things can be related together in various ways.
It is not a concrete thing, just a generic idea that has taken multiple forms along the years and may take others in the future. I've made quite a few attempts at implementing it, but never finished any.
I used to refer to it as a "multidimensional spreadsheet".
Can also be related to DabbleDB.
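To make the idea slightly more concrete, here is a minimal sketch of what such a schema-less, id-based store might look like. Everything in it (the Store class, the put and find names, the example records) is a hypothetical illustration rather than anything specified by the idea above:

```python
# Hypothetical sketch: untyped records identified by ids, related to each
# other by storing those ids as ordinary field values, with a simple index
# instead of predefined tables.
import uuid

class Store:
    def __init__(self):
        self.records = {}  # id -> dict of arbitrary fields
        self.index = {}    # (field, value) -> set of record ids

    def put(self, **fields):
        rid = str(uuid.uuid4())
        self.records[rid] = fields
        for key, value in fields.items():
            self.index.setdefault((key, value), set()).add(rid)
        return rid

    def find(self, **criteria):
        ids = None
        for key, value in criteria.items():
            matches = self.index.get((key, value), set())
            ids = matches if ids is None else ids & matches
        return [self.records[rid] for rid in (ids or set())]

store = Store()
acme = store.put(kind="customer", name="ACME")
store.put(kind="invoice", customer=acme, total=150)  # related by id, no schema
print(store.find(kind="invoice", customer=acme))
```

The point is only that ids and an index do the relating under the hood, while the user keeps the Excel-like freedom of putting whatever fields they want on each record.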
-
@ b0a838f2:34ed3f19
2025-05-23 17:50:11- Convos - Always online web IRC client. (Demo, Source Code)
Artistic-2.0
Perl/Docker
- Ergo - Modern IRCv3 server written in Go, combining the features of an ircd, a services framework, and a bouncer. (Source Code)
MIT
Go/Docker
- Glowing Bear - A web frontend for WeeChat. (Demo)
GPL-3.0
Nodejs
- InspIRCd - Modular IRC server written in C++ for Linux, BSD, Windows, and macOS. (Source Code)
GPL-2.0
C++/Docker
- Kiwi IRC - Responsive web IRC client with theming support. (Demo, Source Code)
Apache-2.0
Nodejs
- ngircd - Portable and lightweight Internet Relay Chat server for small or private networks. (Source Code)
GPL-2.0
C/deb
- Quassel IRC - Distributed IRC client, meaning that one (or multiple) client(s) can attach to and detach from a central core. (Source Code)
GPL-2.0
C++
- Robust IRC - RobustIRC is IRC without netsplits. Distributed IRC server, based on RobustSession protocol. (Source Code)
BSD-3-Clause
Go
- The Lounge - Self-hosted web IRC client. (Demo, Source Code)
MIT
Nodejs/Docker
- UnrealIRCd - Modular, advanced and highly configurable IRC server written in C for Linux, BSD, Windows, and macOS. (Source Code)
GPL-2.0
C
- Weechat - Fast, light and extensible chat client. (Source Code)
GPL-3.0
C/Docker/deb
- ZNC - Advanced IRC bouncer. (Source Code)
Apache-2.0
C++/deb
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The logical structure of the textbook
All textbooks and courses present their content according to a prior logical organization, an outline of all the content they deem relevant, everything neatly arranged into topics and subtopics following the logical order that most closely matches the natural order of things. Picture the table of contents of a manual or textbook.
My experience is that this method works very well for making sure nobody understands anything. The perfect logical organization of a field of knowledge is the end result of study, not its starting point. The people who write these manuals and teach these courses, even when they know what they are talking about (an apparently rare occurrence), do so from their own point of view, reached after a lifetime of dedication to the subject (or else by copying other manuals and textbooks, which I would guess is the more common method).
For the newcomer, the best way to understand something is through immersion in micro-topics, without much notion of where that topic sits in the overall hierarchy of the field.
- Revista Educativa, an example of how to teach children nothing.
- Zettelkasten, order emerging from chaos, instead of topics slotting into a preexisting order.
-
@ 862fda7e:02a8268b
2024-06-29 08:16:47Pictured above is a female body that looks incredibly similar to mine. So similar that it could be mine if I didn't have a c-section scar. Actually her boobs are a bit bigger than mine, but mine do look like that when they swell.
Nudity should be normalized
I don't understand why people get offended or sensitive over the human body. Especially the female human chest. We all have nipples, we all know what they look like, so I don't understand why a female being topless is unacceptable, whereas a male being topless is normal. I've seen shirtless men with more shapely, larger breasts than mine. Personally, I have stopped caring if other people see the shape of my breasts or my nipples if they were to be poking in the freezer section of the grocery store. I stopped wearing bras years ago. I oftentimes want to be topless outside in my yard because it feels good and natural. I like the sun on my skin, and I especially love the rain on my skin. It's very unfortunate that our natural human body is massively shamed by who I believe are the reptilians (for a number of reasons). Logically, it makes no sense to be outraged by the female human chest. We all know what nipples look like, we all have them, I truly believe women should be allowed to be topless like men are. If it's hot out and you're doing yard work - pop your top off, who cares?
I understand that males and some females sexualize the human chest. However, that is not my problem. It is none of my concern what others think about my body. I should be allowed to wear my human body as it is just like anyone else should be. What I can't wrap my mind around is why people are shocked or offended by the human body, since we ALL know what these parts look like and we all have them. I understand in certain scenarios being topless or nude would likely be inappropriate, or that perverts would use it as a way to expose themselves to children. In an ideal world, we could live like tribes where the human body is normal, it's not overtly sexual. This is why we're so offended over the human body - it's constantly concealed, so the moment we get to see a female chest, it's suddenly sexual because it's normally taboo to be seen. I wish I could be shirtless outside, I envy males who get to truly feel the wind, the earth on their backs and their chest. Female and male nipples look the same, I don't understand why it should be illegal for me to experience nature in my natural state.
Anyways, I highly dislike the "nudist" people because it is NOT about accepting the human body in its natural state. It's completely co-opted by pedophiles who want to expose themselves to children or for children to expose themselves to others for sexual gratification. There are nudist resorts pedophile parents force their children to go to (as a child you have no personal autonomy and are completely a slave to your parents - trust me, I know this because I couldn't LEGALLY decide which parent I wanted to live with up until I was 18 years old. If your parent wants you to do something, a child in the US has no legal say over that, so if your parent wants to go to a nudist resort, you must go). A human body should simply be a human body, it's unfortunate that being unclothed immediately brings on sexualization. This is mostly an issue because clothing is the expected default. The more taboo something is, the more naughty the thing is.
I am not a nudist. However, I do believe that at the very least, females should have the right to be topless in similar settings as males are allowed to. I don't think a woman is a slut if she's in her natural state, in her human body, and goes about life as normal. How one acts portrays slutty behavior. Living your life in your natural human body should be a right without caveats. I feel detached from people who constantly see the human body as flawed (e.g. circumcision industry, body hair removal industry, clothing industry). These industries are harmful for the victims in them (infant boys, and modern day slaves in sweatshops), and the main motivating factor among all these industries is money.
-
@ b0a838f2:34ed3f19
2025-05-23 17:49:53- Cypht - Feed reader for your email accounts. (Source Code)
LGPL-2.1
PHP
- Roundcube - Browser-based IMAP client with an application-like user interface. (Source Code)
GPL-3.0
PHP/deb
- SnappyMail - Simple, modern, lightweight & fast web-based email client (fork of RainLoop). (Demo, Source Code)
AGPL-3.0
PHP
- SquirrelMail - Another browser-based IMAP client. (Source Code)
GPL-2.0
PHP
-
@ 6ad3e2a3:c90b7740
2024-06-28 07:44:17I’ve written about this before, and I’ve also tried to illustrate it via parable in “The Simulator.” I even devoted most of a podcast episode to it. But I thought it might be useful just to cut to the chase in written form, give the tldr version.
Utilitarianism is the moral philosophy of the greatest good for the greatest number. That is, we calculate the prospective net benefits or harms from a course of action to guide our policies and behaviors. This moral framework deserves scrutiny because it (unfortunately) appears to be the paradigm under which most of our governments, institutions and even educated individuals operate.
The philosophy is commonly illustrated by hypotheticals such as "the trolley problem" wherein a person has the choice whether to divert a runaway trolley (via switch) from its current track with five people in its path to an alternate one where only one person would be killed. In short, do you intervene to save five by killing one? The premise of utilitarianism is that, yes, you would because it's a net positive.
Keep in mind in these hypotheticals these are your only choices, and not only are you certain about the results of your prospective actions, but those are the only results to consider, i.e., the hypothetical does not specify or speculate as to second or third order effects far into the future. For example, what could go wrong once people get comfortable intervening to kill an innocent person for the greater good!
And yet this simplified model, kill one to save five, is then transposed onto real-world scenarios, e.g., mandate an emergency-use vaccine that might have rare side effects because it will save more lives than it costs. The credulous go along, and then reality turns out to be more complex than the hypothetical — as it always is.
The vaccine doesn’t actually stop the spread, it turns out, and the adverse effects are far more common and serious than people were led to believe. Moreover, the mandates destroyed livelihoods, ran roughshod over civil liberties, divided families and destroyed trust in public health.
Some might argue it’s true the mandates were wrong because the vaccine wasn’t sufficiently safe or effective, but we didn’t know that at the time! That we should judge decisions based on the knowledge we had during a deadly pandemic, not after the fact.
But what we did know at the time was that real life is always more complex than the hypothetical, that nth-order effects are always impossible to predict. The lie wasn’t only about the safety and efficacy of this vaccine, though they absolutely did lie about that, it was that we could know something like this with any degree of certainty at all.
The problem with utilitarianism, then, is the harms and benefits it’s tasked with weighing necessarily occur in the future. And what distinguishes the future from the past is that it’s unknown. The save one vs five hypothetical is deceptive because it posits all the relevant consequences as known. It imagines future results laid out before us as if they were past.
But in real life we deal with unknowns, and it is therefore impossible to do a rigorous accounting of harms and benefits as imagined by the hypotheticals. The "five vs one," stated as though the event had already happened, is a conceit conjured from an overly simplistic model.
In short, in modeling the future and weighing those projected fictions the same way one would historic facts, they are simply making it up. And once he grants himself license to make things up, the utilitarian can create the moral imperative to do what coincidentally benefits him, e.g., Pfizer profited to the tune of tens of billions of dollars, for the greater good!, which turned out not to be so good. Once you make the specious leap of purporting to know the future, why not go all the way, and make it a future where the greatest good coincides with enriching yourself to the greatest extent possible?
So in summary, as this has gone on longer than I intended, utilitarianism is morally bankrupt because the "greater good" on which it relies is necessarily in the future, and we cannot predict the future with enough accuracy, especially over the medium and longer term, to do a proper moral accounting.
As a result, whoever has power is likely to cook the books in whatever way he sees fit, and this moral philosophy of the greatest good for the greatest number paradoxically tends toward a monstrous outcome — temporary benefits for the short-sighted few and the greatest misery for most.
-
@ b0a838f2:34ed3f19
2025-05-23 17:49:32- HyperKitty - Access GNU Mailman v3 archives. (Demo, Source Code)
GPL-3.0
Python
- Keila - Reliable and easy-to-use newsletter tool (alternative to Mailchimp or Sendinblue). (Demo, Source Code)
AGPL-3.0
Docker
- Listmonk - High performance, self-hosted newsletter and mailing list manager with a modern dashboard. (Source Code)
AGPL-3.0
Go/Docker
- Mailman - Manage electronic mail discussion and e-newsletter lists. (Source Code)
GPL-3.0
Python
- Mautic - Marketing automation software (email, social and more). (Source Code)
GPL-3.0
PHP
- phpList - Newsletter and email marketing with advanced management of subscribers, bounces, and plugins. (Source Code)
AGPL-3.0
PHP
- Postorius - Web user interface to access GNU Mailman. (Source Code)
GPL-3.0
Python
- Schleuder - GPG-enabled mailing list manager with resending-capabilities. (Source Code)
GPL-3.0
Ruby
- Sympa - Mailing list manager. (Source Code)
GPL-2.0
Perl
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Zettelkasten
https://writingcooperative.com/zettelkasten-how-one-german-scholar-was-so-freakishly-productive-997e4e0ca125 (a somewhat silly article, but useful).
This amazing technique of saving notes with no categories, no folders, no predefined hierarchy, only making references from one note to another and supposedly letting an order (or heterarchy, as they put it) emerge from the chaos, seems to be what was missing for me to finally manage to write down my thoughts and ideas decently. We'll see.
Ah, and I'm going to use that neuron thing, which also generates sites from the notes? I think it'll be good.
-
@ 0d2a0f56:ef40df51
2024-06-25 17:16:44Dear Bitcoin Blok Family,
In my last post, I shared how Bitcoin became my personal antidote, rescuing me from destructive behaviors and opening up a world of knowledge. Today, I want to explore how Bitcoin is becoming an antidote for my entire family, solving problems I didn't even know we had.
As I've delved deeper into the Bitcoin rabbit hole, I've realized that I was searching for something beyond personal growth – I was looking for a solution to give my family an identity. When you look at successful families of the past or thriving organizations, you'll notice they're known for something specific. Some families are known for real estate, others for owning businesses, and some for establishing various institutions. This identity ties a family together, giving them a shared purpose and legacy.
For my family, I want that identity to be rooted in Bitcoin. Why? Because Bitcoin brings a focus I haven't seen in anything else. It neutralizes much of what I consider toxic behavior in modern society.
Take TikTok, for example. While there's nothing inherently wrong with the platform, it often promotes content that I find concerning – gender wars, gossip, and sometimes even false information. This issue extends to most social media platforms. Our kids are learning to dance, but they're not learning about money. They're absorbing various behaviors, but they don't understand inflation. We're turning to social media for education and information, which isn't necessarily bad, but it requires careful vetting of sources and filtering out the noise.
What's truly dangerous is how these platforms steal our attention subconsciously. We're wasting time, putting our attention in unproductive places without even realizing it. This is a toxic part of our culture today, and I believe Bitcoin can help solve this problem.
It's challenging to teach kids about money in today's fiat standard. How do you explain the concepts of time and energy when the measuring stick itself is constantly changing? But with Bitcoin, these lessons become tangible.
Moreover, Bitcoin opens up possibilities that seemed out of reach before, particularly in the realm of family banking. In the past, wealthy families in the United States understood the risks of keeping their family's worth in another person's hands or in external institutions. They recognized that protecting wealth comes with certain risks, and establishing a family protocol was crucial.
With Bitcoin, family banking becomes not just possible, but easily achievable and more secure than ever before. Through multi-signature wallet arrangements, we can create a true family bank where each family member holds a key to the family wallet. No funds can be moved without the permission of other family members. This technology eliminates what has been a major stumbling block in the past: trust issues. Now, we can have transparency, trust, and access for all family members.
Imagine a family account where parents and children alike have visibility into the family's finances, but movements of funds require agreement. This not only teaches financial responsibility but also fosters open communication about money within the family. It's a powerful tool for financial education and family cohesion.
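As a rough sketch of the rule such a multi-signature arrangement enforces, here are a few lines of plain Python. This is only a conceptual illustration, not real wallet code; the key holders and the 2-of-3 threshold are assumptions for the example:

```python
# Illustrative m-of-n approval rule, similar in spirit to what a multisig
# wallet enforces; this is not real Bitcoin wallet code.
FAMILY_KEYS = {"mom", "dad", "kid"}  # hypothetical key holders
THRESHOLD = 2                        # e.g. a 2-of-3 policy

def can_spend(approvals):
    valid = approvals & FAMILY_KEYS  # ignore approvals from unknown keys
    return len(valid) >= THRESHOLD

print(can_spend({"mom"}))          # False: one signature is not enough
print(can_spend({"mom", "kid"}))   # True: quorum reached
```

In an actual multisig wallet the keys are cryptographic and the spending condition is enforced by the Bitcoin network itself, not by any one family member's software.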
This is something we should definitely take advantage of. It's not just about storing wealth; it's about creating a financial system that aligns with our family values and goals. Bitcoin provides the infrastructure for this new form of family banking, one that combines the best aspects of traditional family wealth management with the security and transparency of modern technology.
When it comes to teaching about compound interest and value, Bitcoin provides the perfect use case. Over its lifetime, Bitcoin has been averaging an astonishing 156% year-over-year return. While this number is significant, it's important to note that past performance doesn't guarantee future results. However, even with more conservative estimates, the power of compound interest becomes clear. I challenge you to use a compound interest calculator with a more modest 30% annual return. The results are still astronomical. Imagine teaching your children about saving and investing with potential returns like these. It's a powerful lesson in the value of long-term thinking and delayed gratification.
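As a rough illustration of that challenge, a few lines of Python do what the compound interest calculator would. The rates are only the figures mentioned above, not predictions:

```python
# Compound growth of a single $1,000 deposit at a given annual rate.
def future_value(principal, annual_rate, years):
    return principal * (1 + annual_rate) ** years

for rate in (0.02, 0.30):  # a bank-style rate vs. the 30% called conservative above
    fv = future_value(1_000, rate, 18)  # e.g. saving from birth to adulthood
    print(f"{rate:.0%} over 18 years: $1,000 -> ${fv:,.0f}")
```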
What I'm getting at is this: there are many forces in our society that hinder positive behavior and promote negative, toxic culture. These forces are stealing our children's time, attention, energy, and of course, money. Bitcoin is the antidote that can start correcting and curing some of that behavior, while also providing tangible lessons in finance and economics, and even reshaping how we approach family wealth management.
By making Bitcoin a central part of our family identity, we:
- Provide a shared purpose and legacy
- Encourage financial literacy from a young age
- Teach valuable lessons about time, energy, and value
- Promote long-term thinking and delayed gratification
- Offer an alternative to the instant gratification of social media
Bitcoin isn't just an investment or a technology – it's a paradigm shift that can reshape how we think about family, legacy, and education. It's an antidote to the toxic aspects of our digital age, offering a path to a more thoughtful, intentional way of living.
As we continue this journey, I invite you to consider: How could Bitcoin reshape your family's identity? What toxic behaviors could it help neutralize in your life? Until next time, check out the video below on what some may know as the "Waterfall" or "Rockefeller" Method. https://youtu.be/MTpAY1LKfek?si=s7gA4bt_ZoRGAlAU
Stay sovereign, stay focused,
Joe Thomas
Founder, The Bitcoin Blok
-
@ b0a838f2:34ed3f19
2025-05-23 17:49:07- chasquid - SMTP (email) server with a focus on simplicity, security, and ease of operation. (Source Code)
Apache-2.0
Go
- Courier MTA - Fast, scalable, enterprise mail/groupware server providing ESMTP, IMAP, POP3, webmail, mailing list, basic web-based calendaring and scheduling services. (Source Code)
GPL-3.0
C/deb
- DragonFly - A small MTA for home and office use. Works on Linux and FreeBSD.
BSD-3-Clause
C
- EmailRelay - A small and easy to configure SMTP and POP3 server for Windows and Linux. (Source Code)
GPL-3.0
C++
- Exim - Message transfer agent (MTA) developed at the University of Cambridge. (Source Code)
GPL-3.0
C/deb
- Haraka - Fast, highly extensible, and event driven SMTP server. (Source Code)
MIT
Nodejs
- MailCatcher - Deploy a simple SMTP MTA gateway that accepts all mail and displays it in a web interface. Useful for debugging or development. (Source Code)
MIT
Ruby
- OpenSMTPD - Secure SMTP server implementation from the OpenBSD project. (Source Code)
ISC
C/deb
- OpenTrashmail - Complete trashmail solution that exposes an SMTP server and has a web interface to manage received emails. Works with multiple and wildcard domains and is fully file based (no database needed). Includes RSS feeds and JSON API.
Apache-2.0
Python/PHP/Docker
- Postfix - Fast, easy to administer, and secure Sendmail replacement.
IPL-1.0
C/deb
- Sendmail - Message transfer agent (MTA).
Sendmail
C/deb