-
@ 21335073:a244b1ad
2025-05-09 13:56:57
Someone asked for my thoughts, so I’ll share them thoughtfully. I’m not here to dictate how to promote Nostr—I’m still learning about it myself. While I’m not new to Nostr, freedom tech is a newer space for me. I’m skilled at advocating for topics I deeply understand, but freedom tech isn’t my expertise, so take my words with a grain of salt. Nothing I say is set in stone.
Those who need Nostr the most are the ones most vulnerable to censorship on other platforms right now. Reaching them requires real-time awareness of global issues and the dynamic relationships between governments and tech providers, which can shift suddenly. Effective Nostr promoters must grasp this and adapt quickly.
The best messengers are people from or closely tied to these at-risk regions—those who truly understand the local political and cultural dynamics. They can connect with those in need when tensions rise. Ideal promoters are rational, trustworthy, passionate about Nostr, but above all, dedicated to amplifying people’s voices when it matters most.
Forget influencers, corporate-backed figures, or traditional online PR—it comes off as inauthentic, corny, desperate and forced. Nostr’s promotion should be grassroots and organic, driven by a few passionate individuals who believe in Nostr and the communities they serve.
The idea that “people won’t join Nostr due to lack of reach” is nonsense. Everyone knows X’s “reach” is mostly bots. If humans want real conversations, Nostr is the place. X is great for propaganda, but Nostr is for the authentic voices of the people.
Those spreading Nostr must be so passionate they’re willing to onboard others, which is time-consuming but rewarding for the right person. They’ll need to make Nostr and onboarding a core part of who they are. I see no issue with that level of dedication. I’ve been known to get that way myself at times. It’s fun for some folks.
With love, I suggest not adding Bitcoin promotion with Nostr outreach. Zaps already integrate that element naturally. (Still promote within the Bitcoin ecosystem, but this is about reaching vulnerable voices who needed Nostr yesterday.)
To promote Nostr, forget conventional strategies. “Influencers” aren’t the answer. “Influencers” are not the future. A trusted local community member has real influence—reach them. Connect with people seeking Nostr’s benefits but lacking the technical language to express it. This means some in the Nostr community might need to step outside of the Bitcoin bubble, which is uncomfortable but necessary. Thank you in advance to those who are willing to do that.
I don’t know who is paid to promote Nostr, if anyone. This piece isn’t shade. But it’s exhausting to see innocent voices globally silenced on corporate platforms like X while Nostr exists. Last night, I wondered: how many more voices must be censored before the Nostr community gets uncomfortable and thinks creatively to reach the vulnerable?
A warning: the global need for censorship-resistant social media is undeniable. If Nostr doesn’t make itself known, something else will fill that void. Let’s start this conversation.
-
@ d61f3bc5:0da6ef4a
2025-05-06 01:37:28
I remember the first gathering of Nostr devs two years ago in Costa Rica. We were all psyched because Nostr appeared to solve the problem of self-sovereign online identity and decentralized publishing. The protocol seemed well-suited for textual content, but it wasn't really designed to handle binary files, like images or video.
The Problem
When I publish a note that contains an image link, the note itself is resilient thanks to Nostr, but if the hosting service disappears or takes my image down, my note will be broken forever. We need a way to publish binary data without relying on a single hosting provider.
We were discussing how there really was no reliable solution to this problem even outside of Nostr. Peer-to-peer attempts like IPFS simply didn't work; they were hopelessly slow and unreliable in practice. Torrents worked for popular files like movies, but couldn't be relied on for general file hosting.
Awesome Blossom
A year later, I attended the Sovereign Engineering demo day in Madeira, organized by Pablo and Gigi. Many projects were presented over a three-hour demo session that day, but one really stood out for me.
Introduced by hzrd149 and Stu Bowman, Blossom blew my mind because it showed how we can solve complex problems easily by simply relying on the fact that Nostr exists. Having an open user directory, with the corresponding social graph and web of trust is an incredible building block.
Since we can easily look up any user on Nostr and read their profile metadata, we can simply get them to tell us where their files are stored. This, combined with hash-based addressing (borrowed from IPFS), is all we need to solve our problem.
How Blossom Works
The Blossom protocol (Blobs Stored Simply on Mediaservers) is formally defined in a series of BUDs (Blossom Upgrade Documents). Yes, Blossom is the most well-branded protocol in the history of protocols. Feel free to refer to the spec for details, but I will provide a high level explanation here.
The main idea behind Blossom can be summarized in three points:
- Users specify which media server(s) they use via their public Blossom settings published on Nostr;
- All files are uniquely addressable via hashes;
- If an app fails to load a file from the original URL, it simply goes to get it from the server(s) specified in the user's Blossom settings.
Just like Nostr itself, the Blossom protocol is dead-simple and it works!
Let's use this image as an example:
If you look at the URL for this image, you will notice that it looks like this:
blossom.primal.net/c1aa63f983a44185d039092912bfb7f33adcf63ed3cae371ebe6905da5f688d0.jpg
All Blossom URLs follow this format:
[server]/[file-hash].[extension]
The file hash is important because it uniquely identifies the file in question. Apps can use it to verify that the file they received is exactly the file they requested. It also gives us the ability to reliably get the same file from a different server.
Nostr users declare which media server(s) they use by publishing their Blossom settings. If I store my files on Server A, and they get removed, I can simply upload them to Server B, update my public Blossom settings, and all Blossom-capable apps will be able to find them at the new location. All my existing notes will continue to display media content without any issues.
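To make that concrete, here is a minimal client-side sketch of the fallback logic. This is my own illustration, not code from any Blossom client; the `userServers` list is assumed to come from the author's published Blossom settings on Nostr.

```typescript
// Minimal sketch: fetch a blob, verify its hash, and fall back to the
// user's Blossom servers if the original URL fails. `userServers` is
// assumed to come from the author's published Blossom settings.
async function fetchBlob(
  originalUrl: string,
  sha256: string,
  ext: string,
  userServers: string[],
): Promise<Blob> {
  const candidates = [
    originalUrl,
    ...userServers.map((s) => `${s}/${sha256}.${ext}`),
  ];
  for (const url of candidates) {
    try {
      const res = await fetch(url);
      if (!res.ok) continue;
      const data = await res.arrayBuffer();
      // Verify the content actually matches the hash in the URL
      const digest = await crypto.subtle.digest("SHA-256", data);
      const hex = [...new Uint8Array(digest)]
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
      if (hex === sha256) return new Blob([data]);
    } catch {
      // network error: try the next server
    }
  }
  throw new Error(`blob ${sha256} not found on any known server`);
}
```

A client would call this with the hash and extension parsed out of the original URL, so the same content can be recovered from any server the author announces.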
Blossom Mirroring
Let's face it, re-uploading files to another server after they got removed from the original server is not the best user experience. Most people wouldn't have backups of all their files, or the desire to do this work.
This is where Blossom's mirroring feature comes in handy. In addition to the primary media server, a Blossom user can set one or more mirror servers. Under this setup, every time a file is uploaded to the primary server, the Nostr app issues a mirror request to the primary server, directing it to copy the file to all the specified mirrors. This way there is always a copy of all content on multiple servers, and in case the primary becomes unavailable, Blossom-capable apps will automatically start loading from the mirror.
Mirrors are really easy to set up (you can do it in two clicks in Primal), and this arrangement ensures robust media handling without any central points of failure. Note that you can use professional media hosting services side by side with self-hosted backup servers that anyone can run at home.
Using Blossom Within Primal
Blossom is natively integrated into the entire Primal stack and enabled by default. If you are using Primal 2.2 or later, you don't need to do anything to enable Blossom, all your media uploads are blossoming already.
To enhance user privacy, all Primal apps use the "/media" endpoint per BUD-05, which strips all metadata from uploaded files before they are saved and optionally mirrored to other Blossom servers, per user settings. You can use any Blossom server as your primary media server in Primal, as well as set up any number of mirrors.
Conclusion
For such a simple protocol, Blossom gives us three major benefits:
- Verifiable authenticity. All Nostr notes are always signed by the note author. With Blossom, the signed note includes a unique hash for each referenced media file, making it impossible to falsify.
- File hosting redundancy. Having multiple live copies of referenced media files (via Blossom mirroring) greatly increases the resiliency of media content published on Nostr.
- Censorship resistance. Blossom enables us to seamlessly switch media hosting providers in case of censorship.
Thanks for reading; and enjoy! 🌸
-
@ 6e64b83c:94102ee8
2025-05-05 16:50:13
Nostr-static is a powerful static site generator that transforms long-form Nostr content into beautiful, standalone websites. It makes your content accessible to everyone, even those not using Nostr clients. For more information check out my previous blog post How to Create a Blog Out of Nostr Long-Form Articles
What's New in Version 0.7?
RSS and Atom Feeds
Version 0.7 brings comprehensive feed support with both RSS and Atom formats. The system automatically generates feeds for your main content, individual profiles, and tag-specific pages. These feeds are seamlessly integrated into your site's header, making them easily discoverable by feed readers and content aggregators.
This feature bridges the gap between Nostr and traditional web publishing, allowing your content to reach readers who prefer feed readers or automated content distribution systems.
Smart Content Discovery
The new tag discovery system enhances your readers' experience by automatically finding and recommending relevant articles from the Nostr network. It works by:
- Analyzing the tags in your articles
- Fetching popular articles from Nostr that share these tags
- Using configurable weights to rank these articles based on:
  - Engagement metrics (reactions, reposts, replies)
  - Zap statistics (amount, unique zappers, average zap size)
  - Content quality signals (report penalties)
This creates a dynamic "Recommended Articles" section that helps readers discover more content they might be interested in, all while staying within the Nostr ecosystem.
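As a rough illustration of how these weights might combine, here is a sketch of a scoring function. The stat fields, weight names, and values are all hypothetical, not nostr-static's actual configuration schema:

```typescript
// Hypothetical scoring sketch; field and weight names are illustrative,
// not nostr-static's real configuration.
interface ArticleStats {
  reactions: number;
  reposts: number;
  replies: number;
  zapTotalSats: number;
  uniqueZappers: number;
  reports: number;
}

const weights = {
  reaction: 1,
  repost: 2,
  reply: 3,
  satZapped: 0.01,
  uniqueZapper: 5,
  avgZapSize: 0.001,
  reportPenalty: -50,
};

function score(a: ArticleStats): number {
  const avgZap = a.uniqueZappers > 0 ? a.zapTotalSats / a.uniqueZappers : 0;
  return (
    a.reactions * weights.reaction +
    a.reposts * weights.repost +
    a.replies * weights.reply +
    a.zapTotalSats * weights.satZapped +
    a.uniqueZappers * weights.uniqueZapper +
    avgZap * weights.avgZapSize +
    a.reports * weights.reportPenalty
  );
}

// Rank candidate articles highest-score first.
const ranked = (articles: ArticleStats[]) =>
  [...articles].sort((x, y) => score(y) - score(x));
```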
See the new features yourself by visiting our demo at: https://blog.nostrize.me
-
@ 6e0ea5d6:0327f353
2025-05-04 14:53:42
Amico mio, ascolta bene!
Without hesitation, the woman you attract with lies is not truly yours. Davvero, she is the temporary property of the illusion you’ve built to seduce her. And every illusion, sooner or later, crumbles.
Weak men sell inflated versions of themselves. They talk about what they don’t have, promise what they can’t sustain, adorn their empty selves with words that are nothing more than a coat of paint. And they do this thinking that, later, they’ll be able to "show who they really are." Fatal mistake, cazzo!
The truth, amico mio, is not something that appears at the end. It is what holds up the whole beginning.
The woman who approaches a lie may smile at first — but she is smiling at the theater, not at the actor. When the curtains fall, what she will see is not a man. It will be a character tired of performing, begging for love from a self-serving audience in the front row.
That’s why I always point out that lying to win a woman’s heart is the same as sabotaging your own nature. The woman who comes through an invented version of you will be the first to leave when the veil of lies tears apart. Not out of cruelty, but out of consistency with her own interest. Fine... She didn’t leave you, but rather, that version of yourself never truly existed to be left behind.
A worthy man presents himself without deceptive adornments. And those who stay, stay because they know exactly who they are choosing as a man. That’s what differentiates forged seduction from the convenience of love built on honor, loyalty, and respect.
Ah, amico mio, I remember well. It was lunch on an autumn day in Catania. Mediterranean heat, and the Nero D'Avola wine from midday clinging to the lips like dried blood. Sitting in the shade of a lemon tree planted right by my grandfather's vineyard entrance, my uncle — the oldest of my father’s brothers — spoke little, but when he called us to sit by his side, all the nephews would quiet down to listen. And in my youth, he told me something that has never left my mind.
“In Sicily, the woman who endures the silence of a man about his business is more loyal than the one who is enchanted by speeches about what he does or how much he earns. Perchè, figlio mio, the first one has seen the truth. The second one, only a false shine.”
Thank you for reading, my friend!
If this message resonated with you, consider leaving your "🥃" as a token of appreciation.
A toast to our family!
-
@ 266815e0:6cd408a5
2025-05-02 22:24:59
It's been six long months of refactoring code and building out the applesauce packages, but the app is stable enough for another release.
This update is pretty much a full rewrite of the non-visible parts of the app. All the background services were either moved out to the applesauce packages or rewritten; the result is that noStrudel is a little faster and much more consistent with connections and publishing.
New layout
The app has a new layout now, it takes advantage of the full desktop screen and looks a little better than it did before.
Removed NIP-72 communities
The NIP-72 communities are no longer part of the app. If you want to continue using them, there are still a few apps that support them (like satellite.earth), but noStrudel won't support them going forward.
The communities were interesting but ultimately proved to have some fundamental flaws, most notably that all posts had to be approved by a moderator. There were some good ideas on how to improve it, but they would have only been patches and wouldn't have fixed the underlying issues.
I won't promise to build it into noStrudel, but NIP-29 (relay-based groups) looks a lot more promising and already has better moderation abilities than NIP-72 communities could ever have.
Settings view
There is now a dedicated settings view, so no more hunting around for where the relays are set or trying to find how to add another account. It's all in one place now.
Cleaned up lists
The list views are a little cleaner now, and they have a simple edit modal
New emoji picker
Just another small improvement that makes the app feel more complete.
Experimental Wallet
There is a new "wallet" view in the app that lets you manage your NIP-60 cashu wallet. It's very experimental and probably won't work for you, but it's there, and I hope to finish it up so the app can support NIP-61 nutzaps.
WARNING: Don't feed the wallet your hard earned sats, it will eat them!
Smaller improvements
- Added NSFW flag for replies
- Updated NIP-46 bunker login to work with the new spec
- Linkify BIPs
- Added 404 page
- Add NIP-22 comments under badges, files, and articles
- Add max height to timeline notes
- Fix articles view freezing on load
- Add option to mirror blobs when sharing notes
- Remove "open in drawer" for notes
-
@ 088436cd:9d2646cc
2025-05-01 21:01:55
The arrival of the coronavirus brought not only illness and death but also fear and panic. In such an environment of uncertainty, people have naturally stocked up on necessities, not knowing when things will return to normal.
Retail shelves have been cleared out, and even online suppliers like Amazon and Walmart are out of stock for some items. Independent sellers on these e-commerce platforms have had to fill the gap. With the huge increase in demand, they have found that their inventory has skyrocketed in value.
Many in need of these items (e.g. toilet paper, hand sanitizer and masks) balk at the new prices. They feel they are being taken advantage of in a time of need and call for intervention by the government to lower prices. The government has heeded that call, labeling the independent sellers as "price gougers" and threatening sanctions if they don't lower their prices. Amazon has suspended seller accounts and law enforcement at all levels have threatened to prosecute. Prices have dropped as a result and at first glance this seems like a victory for fair play. But, we will have to dig deeper to understand the unseen consequences of this intervention.
We must look at the economics of the situation, how supply and demand result in a price and how that price acts as a signal that goes out to everyone, informing them of underlying conditions in the economy and helping coordinate their actions.
It all started with a rise in demand. Given a fixed supply (e.g., the limited stock on shelves and in warehouses), an increase in demand inevitably leads to higher prices. Most people are familiar with this phenomenon, such as paying more for airline tickets during holidays or surge pricing for rides.
Higher prices discourage less critical uses of scarce resources. For example, you might not pay $1,000 for a plane ticket to visit your aunt if you can get one for $100 the following week, but someone else might pay that price to visit a dying relative. They value that plane seat more than you.
During the crisis, demand surged and retail shelves emptied, even though retail outlets did not raise prices. They kept prices low, so the low-value uses of things like toilet paper, masks, and hand sanitizer continued. Often, this "use" just takes the form of hoarding. At everyday low prices, it makes sense to buy hundreds of rolls and bottles. You know you will use them eventually, so why not stock up? And, with all those extra supplies in the closet and basement, you don't need to change your behavior much. You don't have to ration your use.
At the low prices, these scarce resources got bought up faster and faster until there was simply none left. The reality of the situation became painfully clear to those who didn't panic and got to the store late: You have no toilet paper and you're not going to any time soon.
However, if prices had been allowed to rise, a number of effects would have taken place that would have coordinated the behavior of everyone so that valuable resources would not have been wasted or hoarded, and everyone could have had access to what they needed.
On the demand side, if prices had been allowed to rise, people would have begun to self-ration. You might leave those extra plies on the roll next time if you know they will cost ten times as much to replace. Or, you might choose to clean up a spill with a rag rather than disposable tissue. Most importantly, you won't hoard as much. That 50th bottle of hand sanitizer might just not be worth it at the new, high price. You'll leave it on the shelf for someone else who may have none.
On the supply side, higher prices would have incentivized people to offer up more of their stockpiles for sale. If you have a pallet full of toilet paper in your basement and all of the sudden they are worth $15 per roll, you might just list a few online. But, if it is illegal to do so, you probably won't.
Imagine you run a business installing insulation and have a few thousand respirator masks on hand for your employees. During a pandemic, it is much more important that people breathe filtered air than that insulation get installed, and that fact is reflected in higher prices. You will sell your extra masks at the higher price rather than store them for future insulation jobs, and the scarce resource will be put to its most important use.
Producers of hand sanitizer would go into overdrive if prices were allowed to rise. They would pay their employees overtime, hire new ones, and pay a premium for their supplies, making sure their raw materials don't go to less important uses.
These kinds of coordinated actions all across the economy would be impossible without real prices to guide them. How do you know if it makes sense to spend an extra $10k bringing a thousand masks to market unless you know you can get more than $10 per mask? If the price is kept artificially low, you simply can't do it. The money just isn't there.
These are the immediate effects of a price change, but incredibly, price changes also coordinate people's actions across space and time.
Across space, there are different supply and demand conditions in different places, and thus prices are not uniform. We know some places are real "hot spots" for the virus, while others are mostly unaffected. High demand in the hot spots leads to higher prices there, which attracts more of the resource to those areas. Boxes and boxes of essential items would pour in where they are needed most from where they are needed least, but only if prices were allowed to adjust freely.
This would be accomplished by individuals and businesses buying low in the unaffected areas, selling high in the hot spots and subtracting their labor and transportation costs from the difference. Producers of new supply would know exactly where it is most needed and ship to the high-demand, high-price areas first. The effect of these actions is to increase prices in the low demand areas and reduce them in the high demand areas. People in the low demand areas will start to self-ration more, reflecting the reality of their neighbors, and people in the hotspots will get some relief.
However, by artificially suppressing prices in the hot spot, people there will simply buy up the available supply and run out, and it will be cost prohibitive to bring in new supply from low-demand areas.
Prices coordinate economic actions across time as well. Just as entrepreneurs and businesses can profit by transporting scarce necessities from low-demand to high-demand areas, they can also profit by buying in low-demand times and storing their merchandise for when it is needed most.
Just as allowing prices to freely adjust in one area relative to another will send all the right signals for the optimal use of a scarce resource, allowing prices to freely adjust over time will do the same.
When an entrepreneur buys up resources during low-demand times in anticipation of a crisis, she restricts supply ahead of the crisis, which leads to a price increase. She effectively bids up the price. The change in price affects consumers and producers in all the ways mentioned above. Consumers self-ration more, and producers bring more of the resource to market.
Our entrepreneur has done a truly incredible thing. She has predicted the future, and by so doing has caused every individual in the economy to prepare for a shortage they don't even know is coming! And, by discouraging consumption and encouraging production ahead of time, she blunts the impact the crisis will have. There will be more of the resource to go around when it is needed most.
On top of this, our entrepreneur still has her stockpile she saved back when everyone else was blithely using it up. She can now further mitigate the damage of the crisis by selling her stock during the worst of it, when people are most desperate for relief. She will know when this is because the price will tell her, but only if it is allowed to adjust freely. When the price is at its highest is when people need the resource the most, and those willing to pay will not waste it or hoard it. They will put it to its highest valued use.
The economy is like a big bus we are all riding in, going down a road with many twists and turns. Just as it is difficult to see into the future, it is difficult to see out the bus windows at the road ahead.
On the dashboard, we don't have a speedometer or fuel gauge. Instead we have all the prices for everything in the economy. Prices are what tell us the condition of the bus and the road. They tell us everything. Without them, we are blind.
Good times are a smooth road. Consumer prices and interest rates are low, investment returns are steady. We hit the gas and go fast. But, the road is not always straight and smooth. Sometimes there are sharp turns and rough patches. Successful entrepreneurs are the ones who can see what is coming better than everyone else. They are our navigators.
When they buy up scarce resources ahead of a crisis, they are hitting the brakes and slowing us down. When they divert resources from one area to another, they are steering us onto a smoother path. By their actions in the market, they adjust the prices on our dashboard to reflect the conditions of the road ahead, so we can prepare for, navigate and get through the inevitable difficulties we will face.
Interfering with the dashboard by imposing price floors or price caps doesn't change the conditions of the road (the number of toilet paper rolls in existence hasn't changed). All it does is distort our perception of those conditions. We think the road is still smooth--our heavy foot stomping the gas--as we crash onto a rocky dirt road at 80 miles per hour (empty shelves at the store for weeks on end).
Supply, demand and prices are laws of nature. All of this is just how things work. It isn't right or wrong in a moral sense. Price caps lead to waste, shortages and hoarding as surely as water flows downhill. The opposite--allowing prices to adjust freely--leads to conservation of scarce resources and their being put to their highest valued use. And yes, it leads to profits for the entrepreneurs who were able to correctly predict future conditions, and losses for those who weren't.
Is it fair that they should collect these profits? On the one hand, anyone could have stocked up on toilet paper, hand sanitizer and face masks at any time before the crisis, so we all had a fair chance to get the supplies cheaply. On the other hand, it just feels wrong that some should profit so much at a time when there is so much need.
Our instinct in the moment is to see the entrepreneur as a villain, a greedy "price gouger". But we don't see the long chain of economic consequences that led to the situation we feel is unfair.
If it weren't for anti-price-gouging laws, the major retailers would have raised their prices long before the crisis became acute. When they saw demand outstrip supply, they would have raised prices, not by 100 fold, but gradually and long before anyone knew how serious things would have become. Late comers would have had to pay more, but at least there would be something left on the shelf.
As an entrepreneur, why take risks trying to anticipate the future if you can't reap the reward when you are right? Instead of letting entrepreneurs--our navigators--guide us, we are punishing and vilifying them, trying to force prices to reflect a reality that simply doesn't exist.
In a crisis, more than any other time, prices must be allowed to fluctuate. To do otherwise is to blind ourselves at a time when danger and uncertainty abound. It is economic suicide.
In a crisis, there is great need, and the way to meet that need is not by pretending it's not there, by forcing prices to reflect a world where there isn't need. The way to meet the need is the same as it has always been: through charity.
If the people in government want to help, the best way for them to do so is to be charitable and reduce their taxes and fees as much as possible, ideally to zero in a time of crisis. Amazon, for example, could instantly reduce the price of all crisis-related necessities by 20% if it waived its fee. This would allow for more uses by more people of these scarce supplies, as hoarders release their stockpiles onto the market, knowing they can get 20% more for their stock. Governments could reduce or eliminate their tax burden on high-demand, crisis-related items and all the factors that go into their production, with the same effect: a reduction in prices and expansion of supply. All of us, including the successful entrepreneurs and the wealthy for whom high prices are not a great burden, could donate to relief efforts.
These ideas are not new or untested. This is core micro economics. It has been taught for hundreds of years in universities the world over. The fact that every crisis that comes along stirs up ire against entrepreneurs indicates not that the economics is wrong, but that we have a strong visceral reaction against what we perceive to be unfairness. This is as it should be. Unfairness is wrong and the anger it stirs in us should compel us to right the wrong. Our anger itself isn't wrong, it's just misplaced.
Entrepreneurs didn't cause the prices to rise. Our reaction to a virus did that. We saw a serious threat and an uncertain future and followed our natural impulse to hoard. Because prices at major retail suppliers didn't rise, that impulse ran rampant and we cleared the shelves until there was nothing left. We ran the bus right off the road and then blamed the entrepreneurs for showing us the reality of our situation, for shaking us out of the fantasy of low prices.
All of this is not to say that entrepreneurs are high-minded public servants. They are just doing their job. Staking your money on an uncertain future is a risky business. There are big risks and big rewards. Most entrepreneurs just scrape by or lose their capital in failed ventures.
However, the ones that get it right must be allowed to keep their profits, or else no one will try and we'll all be driving blind. We need our navigators. It doesn't even matter if they know all the positive effects they are having on the rest of us and the economy as a whole. So long as they are buying low and selling high--so long as they are doing their job--they will be guiding the rest of us through the good times and the bad, down the open road and through the rough spots.
-
@ 21335073:a244b1ad
2025-05-01 01:51:10
Please respect Virginia Giuffre’s memory by refraining from asking about the circumstances or theories surrounding her passing.
Since Virginia Giuffre’s death, I’ve reflected on what she would want me to say or do. This piece is my attempt to honor her legacy.
When I first spoke with Virginia, I was struck by her unshakable hope. I had grown cynical after years in the anti-human trafficking movement, worn down by a broken system and a government that often seemed complicit. But Virginia’s passion, creativity, and belief that survivors could be heard reignited something in me. She reminded me of my younger, more hopeful self. Instead of warning her about the challenges ahead, I let her dream big, unburdened by my own disillusionment. That conversation changed me for the better, and following her lead led to meaningful progress.
Virginia was one of the bravest people I’ve ever known. As a survivor of Epstein, Maxwell, and their co-conspirators, she risked everything to speak out, taking on some of the world’s most powerful figures.
She loved when I said, “Epstein isn’t the only Epstein.” This wasn’t just about one man—it was a call to hold all abusers accountable and to ensure survivors find hope and healing.
The Epstein case often gets reduced to sensational details about the elite, but that misses the bigger picture. Yes, we should be holding all of the co-conspirators accountable, but we must also listen to the survivors’ stories. Their experiences reveal how predators exploit vulnerabilities, offering lessons to prevent future victims.
You’re not powerless in this fight. Educate yourself about trafficking and abuse—online and offline—and take steps to protect those around you. Supporting survivors starts with small, meaningful actions. Free online resources can guide you in being a safe, supportive presence.
When high-profile accusations arise, resist snap judgments. Instead of dismissing survivors as “crazy,” pause to consider the trauma they may be navigating. Speaking out or coping with abuse is never easy. You don’t have to believe every claim, but you can refrain from attacking accusers online.
Society also fails at providing aftercare for survivors. The government, often part of the problem, won’t solve this. It’s up to us. Prevention is critical, but when abuse occurs, step up for your loved ones and community. Protect the vulnerable. It’s a challenging but rewarding journey.
If you’re contributing to Nostr, you’re helping build a censorship-resistant platform where survivors can share their stories freely, no matter how powerful their abusers are. Their voices can endure here, offering strength and hope to others. This gives me great hope for the future.
Virginia Giuffre’s courage was a gift to the world. It was an honor to know and serve her. She will be deeply missed. My hope is that her story inspires others to take on the powerful.
-
@ a39d19ec:3d88f61e
2025-04-22 12:44:42
In Germany, the debate around migration, border security, and deportations is usually conducted emotionally. Anyone who demands that illegal immigrants be deported is frequently met with accusations of racism. Yet this accusation is not only factually unfounded; it turns reality on its head: in truth, it is precisely those who suspect a racist motivation behind every call for legal certainty who themselves judge primarily by skin color, origin, or nationality.
The Law Stands Above Emotions
Germany is a state under the rule of law. This means that rules cannot be interpreted according to gut feeling or the political mood of the day, but must rest on clear legal foundations. One of these principles is anchored in Article 16a of the Basic Law, which states:
"Paragraph 1 [the right to asylum] may not be invoked by anyone who enters from a member state of the European Communities or from another third country in which the application of the Convention Relating to the Status of Refugees and the European Convention on Human Rights is assured."
This means that anyone who enters Germany via safe third countries has no claim to asylum. Whoever remains anyway is in the country illegally and is subject to the applicable rules on repatriation. The demand for deportations is therefore nothing other than a demand that the law be upheld.
The Inversion of the Concept of Racism
Whoever argues that German asylum and residency law should be strictly enforced, while making no distinction by origin or skin color, is acting in a value-neutral way. Those, however, who see a racist undertone in such a demand for the rule of law are projecting their own thought patterns onto others: they assume the debate is being conducted exclusively along ethnic, racial, or national lines, and that is precisely a racist way of thinking.
Someone who criticizes illegal immigration does so not because the origin of the people concerned interests them, but because they respect the rule of law. By contrast, someone who smells racism behind this criticism evidently perceives first and foremost the "race" or origin of the persons concerned and reduces them to it.
Financial Burden Instead of Ideological Debate
Besides the legal component there is also an economic one. The German welfare state is based on a principle of solidarity: citizens pay into the system in order to support one another in difficult times. This prosperity was built up over generations by those who have long lived here. The priority is therefore to distribute the available funds first among those who contribute to sustaining this system through taxes, social contributions, and work, not among those who enter the system through illegal entry and without economic contribution of their own.
This is not an ideological question but a purely economic calculation. A social welfare system can only function sustainably if it is not subjected to unlimited strain. If Germany had no clear rules on immigration and deportation, this would inevitably lead to the overburdening of the welfare state, with negative consequences for everyone.
Social Patriotism
Another important aspect is protecting the work of the generations who painstakingly rebuilt Germany after the Second World War. While it is often stressed that Germans may claim no moral inheritance from the period before 1945, apart from responsibility for the Holocaust, it is all the more important to respect the new inheritance built after 1945, which rests on diligence, discipline, and hard work. Reconstruction was a collective achievement of German people, and its fruits must not be distributed carelessly; they should primarily benefit those who helped create this foundation or have carried it forward across generations.
The Rule of Law Is Not Negotiable
Whoever advocates a consistent deportation practice does so not out of racist motives but out of respect for the rule of law and the economic foundations of the country. The accusation of racism in this context is therefore not only false; it exposes, in those who level it, a perception that itself selects by racial characteristics.
-
@ d360efec:14907b5f
2025-05-10 03:57:17
Disclaimer: * This analysis is only a guideline, not trading advice. * Investing carries risk; investors should make their own decisions.
-
@ a39d19ec:3d88f61e
2025-03-18 17:16:50
Now that the German federal regime has decided on Germany's ruin, which will very likely be "financed" with the tool of money printing, so many thoughts on the expansion of the money supply came to me that for once I have written them down.
From a classical economic perspective, expanding the money supply always leads to price increases, because more money in circulation meets a limited quantity of goods. This can be analyzed in several steps:
1. The Quantity Theory of Money
The classical equation of the quantity theory of money is:
M • V = P • Y
where:
- M is the money supply,
- V is the velocity of money,
- P is the price level,
- Y is real economic output (GDP).

If M rises while V and Y remain constant, P must rise; that is, inflation results.
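A quick worked example of that step: since P = (M • V) / Y, if M rises by 10% while V and Y stay constant, the new price level is P' = (1.1 • M • V) / Y = 1.1 • P. The price level rises by exactly the same 10%.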
2. The Quantity of Goods Remains Limited
The quantity of goods and services actually produced usually grows only slowly compared with the expansion of the money supply. If the money supply grows faster than the quantity of goods produced, more money is available for the same amount of goods, and prices rise.
3. Expectation Effects and Speculation
If firms and households expect more money to be in circulation, because central planning wanted it so, they can anticipate rising prices. Firms raise their prices in advance, and workers demand higher wages. This can trigger a self-reinforcing spiral.
4. The International Perspective
An increased money supply can devalue the currency if other countries keep their monetary policy stable. A weaker currency makes imports more expensive, which in turn drives price increases.
5. Criticism of the Pure Money-Supply Theory
For the sake of completeness, it must be mentioned that most modern economists, arguing on behalf of the state, hold that inflation depends not only on the money supply but also on the demand for money (e.g., in an economic crisis). Nevertheless, historical experience shows that an uncontrolled expansion of the money supply always leads to price increases in the long run, as in the hyperinflation of the Weimar Republic or in Zimbabwe.
-
@ 9967f375:04f9a5e1
2025-05-10 03:55:38
The distinguished Mendozan Juan Fernando Segovia passed away on the feast day of Our Lady of Luján, a significant date not only because she is the patroness of the Río de la Plata region but also, anecdotally, because of when we met the professor in the Mexican lands of Tlaxcala, attending Holy Mass in the cathedral (celebrated by Fr. José Ramón García Gallardo): crowning a side altar there stands an image of the Virgin closely resembling that advocation, and when we pointed it out to the professor, he contemplated it smiling and with great faith. This brief but telling episode no doubt grounded Juan Fernando Segovia's open labor as a faithful intellectual apostle to the Hispanic peoples. He came to share his political, juridical, and moral wisdom, along with his genial company, not at one but at all three editions of the Conversations of the Catholic City of Tlaxcala, pointing out the unreason of the closed-mindedness proper to nationalism (whatever surname accompanies it). The Traditionalist Circles of all New Spain (whose good fruits are in large part the work of his tireless labor, his in-person and virtual lectures, and his books and journal articles for the formation and continuing education of their members) mourn the void left by the passing of the master Juan Fernando Segovia.
May Christ the King, whom he faithfully served during his life, grant him eternal rest.
(Juan Fernando Segovia at the center, together with Fr. José Ramón García and the matador Jerónimo Ramírez de Arellano in Tlaxcala, 2018.)
-
@ b8851a06:9b120ba1
2025-05-09 22:54:43
The global financial system is creaking under its own weight. The IMF is urging banks to shore up capital, cut risk, and brace for impact. Basel III is their answer, a last-ditch effort to reinforce a brittle foundation.
But behind the scenes, a quieter revolution is under way.
Bitcoin, the world’s first stateless digital asset, is no longer on the sidelines. It’s entering the Basel conversation: not by invitation, but by inevitability.
Basel III: The System’s Self-Diagnosis
Basel III is more than a technical rulebook. It’s a confession: an admission that the global banking system is vulnerable. Created in the aftermath of 2008, it calls for:
• Stronger capital reserves: So banks can survive losses.
• Lower leverage: To reduce the domino effect of overexposure.
• Liquidity buffers: To weather short-term shocks without collapsing.
But here’s the kicker: these rules are hostile to anything outside the fiat system. Bitcoin gets hit with a punitive 1,250% risk weight. That means for every $1 of exposure, banks must hold $1 in capital. The message from regulators? “You can hold Bitcoin, but you’ll pay for it.”
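The arithmetic behind that claim, assuming the standard Basel minimum capital ratio of 8% (my assumption; the article itself doesn't state it):

required capital = exposure × risk weight × minimum ratio = $1 × 12.5 × 0.08 = $1

In other words, a 1,250% risk weight makes Bitcoin exposure effectively dollar-for-dollar capitalized.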
Yet that fear-based framing misses a bigger truth: Bitcoin doesn’t just survive in this environment. It thrives in it.
Bitcoin: A Parallel System, Built on Hard Rules
Where Basel III imposes “fiat discipline” from the top down, Bitcoin enforces it from the bottom up: with code, math, and transparency.
Bitcoin is not just a hedge. It’s a structural antidote to systemic fragility.
Volatility: A Strategic Asset
Yes, Bitcoin is volatile. But in a system that devalues fiat on a schedule, volatility is simply the cost of freedom. Under Basel III, banks are expected to build capital buffers during economic expansions.
What asset allows you to build those buffers faster than Bitcoin in a bull market?
When the cycle turns, those reserves act as shock absorbers: converting volatility into resilience. It’s anti-fragility in motion.
Liquidity: Real, Deep, and Global
Bitcoin settled over $19 trillion in transactions in 2024. That’s not hypothetical liquidity. It’s real, measurable flow. Unlike traditional high-quality liquid assets (HQLAs), Bitcoin is:
• Available 24/7
• Borderless
• Not dependent on central banks
By traditional definitions, Bitcoin is rapidly qualifying for HQLA status, even if regulators aren’t ready to admit it.
Diversification: Breaking the Fiat Dependency
Basel III is designed to pull banks back into the fiat matrix. But Bitcoin offers an escape hatch. Strategic Bitcoin reserves are not about speculation, they’re insurance. For family offices, institutions, and sovereign funds, Bitcoin is the lifeboat when the fiat ship starts taking on water.
Regulatory Realignment: The System Reacts
The Basel Committee’s new rules on crypto exposures went live in January 2025. Around the world, regulators are scrambling to define their stance. Every new restriction placed on Bitcoin only strengthens its legitimacy, as more institutions ask: Why so much resistance, if it’s not a threat?
Bitcoin doesn’t need permission. It’s already being adopted by over 150 public companies, forward-looking states, and a new class of self-sovereign individuals.
Conclusion: The Real Question
This isn’t just about Bitcoin fitting into Basel III.
The real question is: How long can Basel III remain relevant in a world where Bitcoin exists?
Bitcoin is not the risk. It’s the reality check. And it might just be the strongest capital buffer the system has ever seen.
Gradually then suddenly.
-
@ a008def1:57a3564d
2025-04-30 17:52:11
A Vision for #GitViaNostr
Git has long been the standard for version control in software development, but over time, it has lost its distributed nature. Originally, Git used open, permissionless email for collaboration, which worked well at scale. However, the rise of GitHub and its centralized pull request (PR) model has shifted the landscape.
Now, we have the opportunity to revive Git's permissionless and distributed nature through Nostr!
We’ve developed tools to facilitate Git collaboration via Nostr, but there is still significant friction that prevents widespread adoption. This article outlines a vision for how we can reduce those barriers and encourage more repositories to embrace this approach.
First, we’ll review our progress so far. Then, we’ll propose a guiding philosophy for our next steps. Finally, we’ll discuss a vision to tackle specific challenges, mainly relating to the role of the Git server and CI/CD.
I am the lead maintainer of ngit and gitworkshop.dev, and I’ve been fortunate to work full-time on this initiative for the past two years, thanks to an OpenSats grant.
How Far We’ve Come
The aim of #GitViaNostr is to liberate discussions around code collaboration from permissioned walled gardens. At the core of this collaboration is the process of proposing and applying changes. That's what we focused on first.
Since Nostr shares characteristics with email, with NIP34 we’ve adopted primitives similar to those used in the patches-over-email workflow. We chose them for their simplicity and because they don’t require contributors to host anything, which adds reliability and makes participation more accessible.
However, the fork-branch-PR-merge workflow is the only model many developers have known, and changing established workflows can be challenging. To address this, we developed a new workflow that balances familiarity, user experience, and alignment with the Nostr protocol: the branch-PR-merge model.
This model is implemented in ngit, which includes a Git plugin that allows users to engage without needing to learn new commands. Additionally, gitworkshop.dev offers a GitHub-like interface for interacting with PRs and issues. We encourage you to try them out using the quick start guide and share your feedback. You can also explore PRs and issues with gitplaza.
For those who prefer the patches-over-email workflow, you can still use that approach with Nostr through gitstr or the `ngit send` and `ngit list` commands, and explore patches with patch34.

The tools are now available to support the core collaboration challenge, but we are still at the beginning of the adoption curve.
Before we dive into the challenges—such as why the Git server setup can be jarring and the possibilities surrounding CI/CD—let’s take a moment to reflect on how we should approach the challenges ahead of us.
Philosophy
Here are some foundational principles I shared a few years ago:
- Let Git be Git
- Let Nostr be Nostr
- Learn from the successes of others
I’d like to add one more:
- Embrace anarchy and resist monolithic development.
Micro Clients FTW
Nostr celebrates simplicity, and we should strive to maintain that. Monolithic developments often lead to unnecessary complexity. Projects like gitworkshop.dev, which aim to cover various aspects of the code collaboration experience, should not stifle innovation.
Just yesterday, the launch of following.space demonstrated how vibe-coded micro clients can make a significant impact. They can be valuable on their own, shape the ecosystem, and help push large and widely used clients to implement features and ideas.
The primitives in NIP34 are straightforward, and if there are any barriers preventing the vibe-coding of a #GitViaNostr app in an afternoon, we should work to eliminate them.
Micro clients should lead the way and explore new workflows, experiences, and models of thinking.
Take kanbanstr.com. It provides excellent project management and organization features that work seamlessly with NIP34 primitives.
From kanban to code snippets, from CI/CD runners to SatShoot—may a thousand flowers bloom, and a thousand more after them.
Friction and Challenges
The Git Server
In #GitViaNostr, maintainers' branches (e.g., `master`) are hosted on a Git server. Here’s why this approach is beneficial:

- Follows the original Git vision and the "let Git be Git" philosophy.
- Super efficient, battle-tested, and compatible with all the ways people use Git (e.g., LFS, shallow cloning).
- Maintains compatibility with related systems without the need for plugins (e.g., for build and deployment).
- Only repository maintainers need write access.
In the original Git model, all users would need to add the Git server as a 'git remote.' However, with ngit, the Git server is hidden behind a Nostr remote, which enables:
- Hiding complexity from contributors and users, so that only maintainers need to know about the Git server component to start using #GitViaNostr.
- Maintainers can easily swap Git servers by updating their announcement event, allowing contributors/users using ngit to automatically switch to the new one.
Challenges with the Git Server
While the Git server model has its advantages, it also presents several challenges:
- Initial Setup: When creating a new repository, maintainers must select a Git server, which can be a jarring experience. Most options come with bloated social collaboration features tied to a centralized PR model, often difficult or impossible to disable.
- Manual Configuration: New repositories require manual configuration, including adding new maintainers through a browser UI, which can be cumbersome and time-consuming.
- User Onboarding: Many Git servers require email sign-up or KYC (Know Your Customer) processes, which can be a significant turn-off for new users exploring a decentralized and permissionless alternative to GitHub.
Once the initial setup is complete, the system works well if a reliable Git server is chosen. However, this is a significant "if," as we have become accustomed to the excellent uptime and reliability of GitHub. Even professionally run alternatives like Codeberg can experience downtime, which is frustrating when CI/CD and deployment processes are affected. This problem is exacerbated when self-hosting.
Currently, most repositories on Nostr rely on GitHub as the Git server. While maintainers can change servers without disrupting their contributors, this reliance on a centralized service is not the decentralized dream we aspire to achieve.
Vision for the Git Server
The goal is to transform the Git server from a single point of truth and failure into a component similar to a Nostr relay.
Functionality Already in ngit to Support This
- State on Nostr: Store the state of branches and tags in a Nostr event, removing reliance on a single server. This validates that the data received has been signed by the maintainer, significantly reducing the trust requirement (see the sketch below).
- Proxy to Multiple Git Servers: Proxy requests to all servers listed in the announcement event, adding redundancy and eliminating the need for any one server to match GitHub's reliability.
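To illustrate, here is a minimal sketch of that lookup using nostr-tools. The kind numbers follow my reading of NIP34 (30617 for the repository announcement, 30618 for the state event); treat the exact tag layout as an assumption rather than a definitive schema:

```typescript
// Sketch: resolve a repository's canonical state from Nostr, then derive
// the list of git servers to try. Kinds and tags per my reading of NIP34.
import { SimplePool } from "nostr-tools";
import type { Event } from "nostr-tools";

const pool = new SimplePool();

async function resolveRepo(relays: string[], maintainer: string, repoId: string) {
  const filterBase = { authors: [maintainer], "#d": [repoId] };
  const announcement: Event | null = await pool.get(relays, { kinds: [30617], ...filterBase });
  const state: Event | null = await pool.get(relays, { kinds: [30618], ...filterBase });
  if (!announcement || !state) throw new Error("repository not found");

  // "clone" tags list git server URLs; refs live on the state event
  const servers = announcement.tags.filter((t) => t[0] === "clone").map((t) => t[1]);
  const masterTip = state.tags.find((t) => t[0] === "refs/heads/master")?.[1];
  return { servers, masterTip }; // try servers in order until one serves masterTip
}
```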
Implementation Requirements
To achieve this vision, the Nostr Git server implementation should:
- Implement the Git Smart HTTP Protocol without authentication (no SSH) and only accept pushes if the reference tip matches the latest state event.
- Avoid Bloat: There should be no user authentication, no database, no web UI, and no unnecessary features.
- Automatic Repository Management: Accept or reject new repositories automatically upon the first push based on the content of the repository announcement event referenced in the URL path and its author.
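The first rule above is small enough to sketch. Assuming the server already holds the repository's maintainer list and the latest signed state event, the acceptance check might look like this (my own sketch, not code from any existing implementation):

```typescript
// Accept a push only when the pushed tip matches the maintainer-signed
// state event. Wiring this into the Smart HTTP receive-pack flow is omitted.
import { verifyEvent } from "nostr-tools";
import type { Event } from "nostr-tools";

function acceptPush(stateEvent: Event, maintainers: string[], ref: string, tip: string): boolean {
  if (!verifyEvent(stateEvent)) return false; // signature must be valid
  if (!maintainers.includes(stateEvent.pubkey)) return false; // signed by a maintainer
  const expected = stateEvent.tags.find((t) => t[0] === ref)?.[1];
  return expected === tip; // pushed tip must match the announced state
}
```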
Just as there are many free, paid, and self-hosted relays, there will be a variety of free, zero-step signup options, as well as self-hosted and paid solutions.
Some servers may use a Web of Trust (WoT) to filter out spam, while others might impose bandwidth or repository size limits for free tiers or whitelist specific npubs.
Additionally, some implementations could bundle relay and blossom server functionalities to unify the provision of repository data into a single service. These would likely only accept content related to the stored repositories rather than general social nostr content.
The potential role of CI/CD via Nostr DVMs could create the incentives for a market of highly reliable, free-at-the-point-of-use Git servers.
This could make onboarding #GitViaNostr repositories as easy as entering a name and selecting from a multi-select list of Git server providers that announce via NIP89.
![image](https://image.nostr.build/badedc822995eb18b6d3c4bff0743b12b2e5ac018845ba498ce4aab0727caf6c.jpg)
Git Client in the Browser
Currently, many tasks are performed on a Git server web UI, such as:
- Browsing code, commits, branches, tags, etc.
- Creating and displaying permalinks to specific lines in commits.
- Merging PRs.
- Making small commits and PRs on-the-fly.
Just as nobody goes to the web UI of a relay (e.g., nos.lol) to interact with notes, nobody should need to go to a Git server to interact with repositories. We use the Nostr protocol to interact with Nostr relays, and we should use the Git protocol to interact with Git servers. This situation has evolved due to the centralization of Git servers. Instead of being restricted to the view and experience designed by the server operator, users should be able to choose the user experience that works best for them from a range of clients.

To facilitate this, we need a library that lowers the barrier to entry for creating these experiences. This library should not require a full clone of every repository and should not depend on proprietary APIs. As a starting point, I propose wrapping the WASM-compiled libgit2 library for the web and creating useful functions, such as showing a file, which utilizes clever flags to minimize bandwidth usage (e.g., shallow clone, noblob, etc.).
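As a rough stand-in for what such a library call could feel like, here is a sketch using isomorphic-git (a pure-JavaScript Git implementation) to show a single file while fetching as little as possible; the libgit2-WASM wrapper proposed above would expose something similar, with partial-clone filters shaving off even more bandwidth:

```typescript
// Sketch: fetch one file with minimal bandwidth (shallow, single branch,
// no checkout), using isomorphic-git as a stand-in for the proposed
// libgit2-WASM wrapper.
import fs from "node:fs";
import http from "isomorphic-git/http/node";
import * as git from "isomorphic-git";

async function showFile(url: string, filepath: string): Promise<string> {
  const dir = fs.mkdtempSync("/tmp/gitpeek-"); // throwaway working directory
  await git.clone({ fs, http, dir, url, depth: 1, singleBranch: true, noCheckout: true });
  const oid = await git.resolveRef({ fs, dir, ref: "HEAD" });
  const { blob } = await git.readBlob({ fs, dir, oid, filepath });
  return new TextDecoder().decode(blob);
}
```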
This approach would not only enhance clients like gitworkshop.dev but also bring forth a vision where Git servers simply run the Git protocol, making vibe coding Git experiences even better.
song
nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6 created song with a complementary vision that has shaped how I see the role of the Git server. It's a self-hosted, Nostr-permissioned Git server with a relay baked in. It's currently a WIP, and there are some compatibility issues with ngit that we need to work out.
We collaborated on the nostr-permissioning approach now reflected in nip34.
I'm really excited to see how this space evolves.
CI/CD
Most projects require CI/CD, and while this is often bundled with Git hosting solutions, it is not yet smoothly integrated into #GitViaNostr. There are many loosely coupled options, such as Jenkins, Travis, CircleCI, etc., that could be integrated with Nostr.
However, the more exciting prospect is to use DVMs (Data Vending Machines).
DVMs for CI/CD
Nostr Data Vending Machines (DVMs) can provide a marketplace of CI/CD task runners with Cashu for micro payments.
There are various trust levels in CI/CD tasks:
- Tasks with no secrets, e.g., tests.
- Tasks using updatable secrets, e.g., API keys.
- Unverifiable builds and steps that sign with Android, Nostr, or PGP keys.
DVMs allow tasks to be kicked off with specific providers using a Cashu token as payment.
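As a concrete illustration, a job request could be published as a NIP90-style event along these lines. The kind number (5600), the repository addressing inside the "i" tag, and the "param" vocabulary are all hypothetical, since no CI job kind has been standardized yet:

```typescript
// Hedged sketch of a CI job request in the NIP90 envelope; the kind and
// tag vocabulary are hypothetical placeholders.
import { finalizeEvent, generateSecretKey } from "nostr-tools";

const sk = generateSecretKey();

const jobRequest = finalizeEvent(
  {
    kind: 5600, // hypothetical "run CI pipeline" job kind
    created_at: Math.floor(Date.now() / 1000),
    tags: [
      ["i", "30617:<maintainer-pubkey>:<repo-id>", "text"], // repository to build (placeholder address)
      ["param", "workflow", ".github/workflows/test.yml"],
      ["param", "ref", "refs/heads/master"],
      ["bid", "5000"], // max price in millisats, per NIP90
    ],
    content: "",
  },
  sk,
);
// A runner (DVM) that accepts the bid would publish status and result
// events, all visible in Git-focused Nostr clients.
```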
It might be suitable for some high-compute and easily verifiable tasks to be run by the cheapest available providers. Medium trust tasks could be run by providers with a good reputation, while high trust tasks could be run on self-hosted runners.
Job requests, status, and results all get published to Nostr for display in Git-focused Nostr clients.
Jobs could be triggered manually, or self-hosted runners could be configured to watch a Nostr repository and kick off jobs using their own runners without payment.
But I'm most excited about the prospect of Watcher Agents.
CI/CD Watcher Agents
AI agents empowered with a NIP60 Cashu wallet can run tasks based on activity, such as a push to master or a new PR, using the most suitable available DVM runner that meets the user's criteria. To keep them running, anyone could top up their NIP60 Cashu wallet; otherwise, the watcher turns off when the funds run out. It could be users, maintainers, or anyone interested in helping the project who could top up the Watcher Agent's balance.
As alluded to earlier, part of building a reputation as a CI/CD provider could involve running reliable hosting (Git server, relay, and blossom server) for all FOSS Nostr Git repositories.
This provides a sustainable revenue model for hosting providers and creates incentives for many free-at-the-point-of-use hosting providers. This, in turn, would allow one-click Nostr repository creation workflows, instantly hosted by many different providers.
Progress to Date
nostr:npub1hw6amg8p24ne08c9gdq8hhpqx0t0pwanpae9z25crn7m9uy7yarse465gr and nostr:npub16ux4qzg4qjue95vr3q327fzata4n594c9kgh4jmeyn80v8k54nhqg6lra7 have been working on a runner that uses GitHub Actions YAML syntax (using act) for the dvm-cicd-runner and takes Cashu payment. You can see example runs on GitWorkshop. It currently takes testnuts, doesn't give any change, and the schema will likely change.
Note: The actions tab on GitWorkshop is currently available on all repositories if you turn on experimental mode (under settings in the user menu).
It's a work in progress, and we expect the format and schema to evolve.
Easy Web App Deployment
For those disappointed not to find a 'Nostr' button in Vercel's import menu: take heart, they made it easy. (Screenshot: vercel.com import options.) There is a Vercel CLI that can easily be called in CI/CD jobs to kick off deployments. Not all managed solutions for web app deployment (e.g., Netlify) make it that easy.
Many More Opportunities
Large Patches via Blossom
I would be remiss not to mention the large patch problem. Some patches are too big to fit into Nostr events. Blossom is perfect for this, as it allows these larger patches to be included in a blossom file and referenced in a new patch kind.
Enhancing the #GitViaNostr Experience
Beyond the large patch issue, there are numerous opportunities to enhance the #GitViaNostr ecosystem. We can focus on improving browsing, discovery, social interaction, and notifications. Receiving notifications on daily-driver Nostr apps is one of the killer features of Nostr. However, we must ensure that Git-related notifications are easily reviewable, so we don’t miss any critical updates.
We need to develop tools that cater to our curiosity—tools that enable us to discover and follow projects, engage in discussions that pique our interest, and stay informed about developments relevant to our work.
Additionally, we should not overlook the importance of robust search capabilities and tools that facilitate migrations.
Concluding Thoughts
The design space is vast. It's an exciting time to be working on freedom tech. I encourage everyone to contribute their ideas and creativity and get vibe-coding!
I welcome your honest feedback on this vision and any suggestions you might have. Your insights are invaluable as we collaborate to shape the future of #GitViaNostr. Onward.
Contributions
To conclude, I want to acknowledge some of the individuals who have made recent code contributions related to #GitViaNostr:
nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6 (gitstr, song, patch34), nostr:npub1useke4f9maul5nf67dj0m9sq6jcsmnjzzk4ycvldwl4qss35fvgqjdk5ks (gitplaza)
nostr:npub1elta7cneng3w8p9y4dw633qzdjr4kyvaparuyuttyrx6e8xp7xnq32cume (ngit contributions, git-remote-blossom), nostr:npub16p8v7varqwjes5hak6q7mz6pygqm4pwc6gve4mrned3xs8tz42gq7kfhdw (SatShoot, Flotilla-Budabit), nostr:npub1ehhfg09mr8z34wz85ek46a6rww4f7c7jsujxhdvmpqnl5hnrwsqq2szjqv (Flotilla-Budabit, Nostr Git Extension), nostr:npub1ahaz04ya9tehace3uy39hdhdryfvdkve9qdndkqp3tvehs6h8s5slq45hy (gnostr and experiments), and others.
nostr:npub1uplxcy63up7gx7cladkrvfqh834n7ylyp46l3e8t660l7peec8rsd2sfek (git-remote-nostr)
Project Management: nostr:npub1ltx67888tz7lqnxlrg06x234vjnq349tcfyp52r0lstclp548mcqnuz40t (kanbanstr)
Code Snippets: nostr:npub1ygzj9skr9val9yqxkf67yf9jshtyhvvl0x76jp5er09nsc0p3j6qr260k2 (nodebin.io), nostr:npub1r0rs5q2gk0e3dk3nlc7gnu378ec6cnlenqp8a3cjhyzu6f8k5sgs4sq9ac (snipsnip.dev)
CI/CD: nostr:npub16ux4qzg4qjue95vr3q327fzata4n594c9kgh4jmeyn80v8k54nhqg6lra7, nostr:npub1hw6amg8p24ne08c9gdq8hhpqx0t0pwanpae9z25crn7m9uy7yarse465gr
and nostr:npub1c03rad0r6q833vh57kyd3ndu2jry30nkr0wepqfpsm05vq7he25slryrnw, nostr:npub1qqqqqq2stely3ynsgm5mh2nj3v0nk5gjyl3zqrzh34hxhvx806usxmln03, and nostr:npub1l5sga6xg72phsz5422ykujprejwud075ggrr3z2hwyrfgr7eylqstegx9z for their testing, feedback, ideas, and encouragement.
Thank you for your support and collaboration! Let me know if I've missed you.
-
@ 0d97beae:c5274a14
2025-01-11 16:52:08
This article hopes to complement the article by Lyn Alden on YouTube: https://www.youtube.com/watch?v=jk_HWmmwiAs
The reason why we have broken money
Before the invention of key technologies such as the printing press and electronic communications, even those as early as Morse code transmitters, gold had won the competition for best medium of money around the world.
In fact, it was not just gold by itself that became money; rulers and world leaders developed coins in order to help the economy grow. Gold nuggets were not as easy to transact with as coins with specific imprints and denominated sizes.
However, these modern technologies created massive efficiencies that allowed us to communicate and perform services more efficiently and much faster, yet the medium of money could not benefit from these advancements. Gold was heavy, slow and expensive to move globally, even though requesting and performing services globally did not have this limitation anymore.
Banks took the initiative and created derivatives of gold: paper and electronic money. These new currencies allowed the economy to continue to grow and evolve, but not without a dark side. Today, no currency is denominated in gold at all; money is backed by nothing, and its physical substrate, the paper it is printed on, is worthless too.
Banks and governments eventually transitioned from a money derivative to a system of debt that could be co-opted and controlled for political and personal reasons. Our money today is broken and is the cause of more expensive, poorer quality goods in the economy, a larger and ever growing wealth gap, and many of the follow-on problems that have come with it.
Bitcoin overcomes the "transfer of hard money" problem
Just like gold coins were created by man, Bitcoin too is a technology created by man. Bitcoin, however is a much more profound invention, possibly more of a discovery than an invention in fact. Bitcoin has proven to be unbreakable, incorruptible and has upheld its ability to keep its units scarce, inalienable and counterfeit proof through the nature of its own design.
Since Bitcoin is a digital technology, it can be transferred across international borders almost as quickly as information itself. It therefore severely reduces the need for a derivative to be used to represent money to facilitate digital trade. This means that as the currency we use today continues to fare poorly for many people, bitcoin will continue to stand out as hard money, that just so happens to work as well, functionally, along side it.
Bitcoin will also always be available to anyone who wishes to earn it directly; even China is unable to restrict its citizens from accessing it. The dollar has traditionally become the currency for people who discover that their local currency is unsustainable. Even when the dollar has become illegal to use, it is simply used privately and unofficially. However, because bitcoin does not require you to trade it at a bank in order to use it across borders and across the web, Bitcoin will continue to be a viable escape hatch until we one day hit some critical mass where the world has simply adopted Bitcoin globally and everyone else must adopt it to survive.
Bitcoin has not yet proven that it can support the world at scale. However it can only be tested through real adoption, and just as gold coins were developed to help gold scale, tools will be developed to help overcome problems as they arise; ideally without the need for another derivative, but if necessary, hopefully with one that is more neutral and less corruptible than the derivatives used to represent gold.
Bitcoin blurs the line between commodity and technology
Bitcoin is a technology, it is a tool that requires human involvement to function, however it surprisingly does not allow for any concentration of power. Anyone can help to facilitate Bitcoin's operations, but no one can take control of its behaviour, its reach, or its prioritisation, as it operates autonomously based on a pre-determined, neutral set of rules.
At the same time, its built-in incentive mechanism ensures that people do not have to operate Bitcoin out of the goodness of their hearts. Even though the system cannot be co-opted holistically, it will not stop operating while there are people motivated to trade their time and resources to keep it running and earn from others' transaction fees. Although it requires humans to operate it, it remains both neutral and sustainable.
Never before have we developed or discovered a technology that could not be co-opted and used by one person or faction against another. Due to this nature, Bitcoin's units are often described as a commodity; they cannot be usurped or virtually cloned, and they cannot be affected by political biases.
The dangers of derivatives
A derivative is something created, designed or developed to represent another thing in order to solve a particular complication or problem. For example, paper and electronic money was once a derivative of gold.
In the case of Bitcoin, if you cannot link your units of bitcoin to an "address" that you personally hold a cryptographically secure key to, then you very likely have a derivative of bitcoin, not bitcoin itself. If you buy bitcoin on an online exchange and do not withdraw the bitcoin to a wallet that you control, then you legally own an electronic derivative of bitcoin.
Bitcoin is a new technology. It has a learning curve, and it will take time for humanity to learn how to comprehend, authenticate, and take control of bitcoin collectively. Having said that, many people all over the world are already using and relying on Bitcoin natively. For many, it will take finding a need or a desire for a neutral money like bitcoin, and being burned by derivatives of it, before they start to understand the difference between the two. Eventually, it will become an essential part of what we regard as common sense.
Learn for yourself
If you wish to learn more about how to handle bitcoin and avoid derivatives, you can start by searching online for tutorials about "Bitcoin self custody".
There are many options available, some more practical for you, and some more practical for others. Don't spend too much time trying to find the perfect solution; practice and learn. You may make mistakes along the way, so be careful not to experiment with large amounts of your bitcoin as you explore new ideas and technologies along the way. This is similar to learning anything, like riding a bicycle; you are sure to fall a few times, scuff the frame, so don't buy a high performance racing bike while you're still learning to balance.
-
@ 37fe9853:bcd1b039
2025-01-11 15:04:40
yoyoaa
-
@ 6be5cc06:5259daf0
2025-04-28 01:05:49
I recognize that God, and God alone, is the legitimate sovereign over all things. No man, no institution, no parliament has the authority to usurp what belongs to the King of kings. The modern State, with its totalizing pretension, is a blasphemous farce before the throne of Christ. I accept no other lord.
The Law that guides me is not the one dictated by bureaucrats, but the one engraved by God into human nature itself. Reason, when illuminated by faith, is sufficient to discern what is just. I reject arbitrary laws that claim to legitimize theft, murder, or slavery in the name of order. Justice is not born of decree, but of truth.
I firmly believe in private property as an extension of the person. What is the fruit of my labor, my creativity, my dedication, and the gifts granted to me by God belongs to me by natural right. No one may legitimately take what is mine without my consent. Every tax is an aggression; every expropriation, a theft. I defend economic freedom not out of idolatry of the market, but because freedom is a necessary condition for virtue.
I take the Non-Aggression Principle as the ethical minimum I must respect. I will not initiate the use of force against anyone, nor against their property. I demand the same of everyone. But I know this is not enough. The NAP delimits what I must not do; it does not teach me what I must be. Outer freedom is only good if there is inner freedom. The market may be free, but if the soul is enslaved by vice, collapse will be inevitable.
That is why negative ethics does not suffice for me. I believe a just society needs positive values: honor, responsibility, compassion, respect, fidelity to the truth. Without these, even a society that formally respects individual rights will rot from within. A people that loves profit but despises truth, that celebrates freedom but forgets justice, is preparing to be dominated. It will trade one visible despot for a thousand invisible tyrannies: hedonism, consumerism, lies, fear.
I do not accept the false charity done with money taken by force. True generosity is born of a free heart, not of institutional coercion. Forcing someone to help their neighbor destroys both freedom and virtue. There is merit only where there is choice. The charity that is born of love redeems; the one born of the tax office is propaganda.
The modern State is an idol. It promises security, but delivers servitude. It promises justice, but delivers privileges. It disguises oppression with technical, legal, and democratic language. But behind its masks I see only the old serpent: a parasite that feeds on other people's labor and manipulates consciences to perpetuate itself.
To resist is not only a right, it is a duty. To obey God before men: that is my rule. Power turns against the truth, but my loyalty belongs to the One who created heaven and earth. Tyranny is not fought with another tyrant, but with the firm and peaceful disobedience of those who love justice.
I do not believe in utopias. I desire a natural, organic order, rooted in voluntarism. A society built from the bottom up: from the family, the local community, tradition, and faith. I do not want a machine that plans other people's lives, but a fabric of voluntary relationships where freedom flourishes in the shadow of the cross.
I do desire the social reign of Christ. Not by imposition, but by conviction. May He reign in hearts, in families, in the streets, and in contracts. May faith guide reason and reason illuminate life. May freedom be a means to holiness, not an end in itself. And may we, free from the yoke of the Leviathan, be servants of the Lord alone.
-
@ 52b4a076:e7fad8bd
2025-04-28 00:48:57
I have been recently building NFDB, a new relay DB. This post is meant as a short overview.
Regular relays have challenges
Current relay software has significant challenges, which I have experienced when hosting Nostr.land:
- Scalability is only supported by adding full replicas, which does not scale to large relays.
- Most relays use slow databases and are not optimized for large scale usage.
- Search is near-impossible to implement on standard relays.
- Privacy features such as NIP-42 are lacking.
- Regular DB maintenance tasks on normal relays require extended downtime.
- Fault-tolerance is implemented, if at all, using a load balancer, which is limited.
- Personalization and advanced filtering is not possible.
- Local caching is not supported.
NFDB: A scalable database for large relays
NFDB is a new database meant for medium-large scale relays, built on FoundationDB, that provides:
- Near-unlimited scalability
- Extended fault tolerance
- Instant loading
- Better search
- Better personalization
- and more.
Search
NFDB has extended search capabilities, including:
- Semantic search: Search for meaning, not words.
- Interest-based search: Highlight content you care about.
- Multi-faceted queries: Easily filter by topic, author group, keywords, and more at the same time.
- Wide support for event kinds, including users, articles, etc.
Personalization
NFDB allows significant personalization:
- Customized algorithms: Be your own algorithm.
- Spam filtering: Filter content to your WoT, and use advanced spam filters.
- Topic mutes: Mute topics, not keywords.
- Media filtering: With Nostr.build, you will be able to filter NSFW and other content.
- Low data mode: Block notes that use high amounts of cellular data.
- and more.
Other
NFDB has support for many other features, such as:
- NIP-42: Protect your privacy with private drafts and DMs
- Microrelays: Easily deploy your own personal microrelay
- Containers: Dedicated, fast storage for discoverability events such as relay lists
Calcite: A local microrelay database
Calcite is a lightweight, local version of NFDB that is meant for microrelays and caching, meant for thousands of personal microrelays.
Calcite HA is an additional layer that allows live migration and relay failover in under 30 seconds, providing higher availability compared to current relays with greater simplicity. Calcite HA is enabled in all Calcite deployments.
For zero-downtime, NFDB is recommended.
Noswhere SmartCache
Relays are fixed in one location, but users can be anywhere.
Noswhere SmartCache is a CDN for relays that dynamically caches data on edge servers closest to you, allowing:
- Multiple regions around the world
- Improved throughput and performance
- Faster loading times
routerd
`routerd` is a custom load-balancer optimized for Nostr relays, integrated with SmartCache. `routerd` is specifically integrated with NFDB and Calcite HA to provide fast failover and high performance.
Ending notes
NFDB is planned to be deployed to Nostr.land in the coming weeks.
A lot more is to come. 👀
-
@ 62033ff8:e4471203
2025-01-11 15:00:24
Of the content indexed so far, the kind=1 notes are, to be honest, not of high quality. So I added kind=30023 long-form articles, but they are updated far too rarely, and even across multiple relays there are not many long-form posts.
If search over Nostr is to produce any value, it needs high-quality articles and news. And right now there are many bot-written posts that do nothing but waste space; they serve no other purpose.
https://www.duozhutuan.com currently serves raw material to search engines. There is no UI for humans to browse yet, which is why it looks rough. I have no plans to build yet another microblogging web client; there are far too many of those already.
I think what the Nostr community still needs to solve is applications. If it is only microblogging, that feels like a dead end.
Fortunately, there are things like npub.pro for building sites, which I find interesting.
yakihonne's smart widgets are also interesting.
My own TaskQ5 I am already using myself. It is a distributed task system, and it works quite well.
-
@ 23b0e2f8:d8af76fc
2025-01-08 18:17:52
Requirements
- An Android phone you no longer use (the camera must be working).
- A microSD card (optional, used only once).
- A device for tracking your funds (you probably already have one).
A few things you need to know
- The device will serve as a signer. Any movement of funds will only take effect after being signed by it.
- The microSD card will be used to transfer the Electrum APK and to guarantee that the device has no contact with other external data sources after being wiped. A USB cable can, however, be used for the same purpose.
- The idea is to keep your private key on an offline device that stays switched off 99% of the time. You will be able to track your funds on another device connected to the internet, such as your phone or personal computer.
The tutorial is divided into two modules:
- Module 1 - Creating a cold wallet/signer.
- Module 2 - Setting up a device to view your funds and signing transactions with the signer.
In the end, we will have:
- A cold wallet that also serves as a signer.
- A device for tracking the wallet's funds.
Module 1 - Creating a cold wallet/signer
- Download the Electrum APK from the downloads tab at https://electrum.org/. Feel free to verify the software's signatures to guarantee its authenticity.
- Format the microSD card and put the Electrum APK on it. If you don't have a microSD card, skip this step.
- Remove the SIM cards and accessories from the device that will be used as the signer, factory-reset it, and wait for it to boot.
- During setup, skip the Wi-Fi connection step and reject all connection requests. After that, you can uninstall unnecessary apps, since you will only need Electrum. Make sure Wi-Fi, Bluetooth, and mobile data are turned off. You can also enable airplane mode. (Fun fact: some people choose to open the device and damage the Wi-Fi/Bluetooth antenna, making those features physically impossible.)
- Insert the microSD card with the Electrum APK into the device and install it. You will need to allow installation from unofficial sources.
- In Electrum, create a standard wallet and generate your seed words. Write them down in a safe place. If anything happens to your signer, these words will give you access to your funds again. (This is where your personal backup method comes in.)
Module 2 - Setting up a device to view your funds and signing transactions with the signer
- Creating a watch-only wallet on another device, such as your phone or personal computer, is a very simple step. For this tutorial, we will use another Android smartphone with Electrum. Install Electrum from the downloads tab at https://electrum.org/ or from the Play Store itself. (WARNING: Electrum does not officially exist for iPhone. Be suspicious if you find one.)
- After installing Electrum, create a standard wallet, but this time choose the option Use a master key.
- Now, on the signer we created in the first module, export your public key: go to Wallet > Wallet details > Share master public key.
- Scan the generated public-key QR code with the watching device. That way, it will be able to track your funds, but without permission to move them.
- To receive funds, send Bitcoin to one of the addresses generated by your wallet: Wallet > Addresses/Coins.
- To move funds, create a transaction on the watching device. Since it does not hold the private key, the transaction will need to be signed by the signer.
- On the signer, scan the unsigned transaction, confirm the details, sign it, and share it. Another QR code will be generated, this time containing the already-signed transaction.
- On the watching device, scan the signed-transaction QR code and broadcast it to the network.
Conclusion
Positive points of this setup:
- Simplicity: All it takes is an old Android device.
- Flexibility: It works as a great cold wallet, ideal for holders.
Negative points of this setup:
- Standardization: It does not use BIP-39-standard seeds; you will always need to use Electrum.
- Interface: Electrum's look may seem dated to some users.
At this point, we have a cold wallet that also serves to sign transactions. The flow for signing a transaction becomes: Generate an unsigned transaction > Scan the QR of the unsigned transaction > Check and sign that transaction with the signer > Generate the QR of the signed transaction > Scan the signed transaction with any other device that can broadcast it to the network.
As some of you may know, a signed Bitcoin transaction is practically impossible to forge. In a catastrophic scenario, even without internet access, you can pass that signed transaction to someone who has access to the network through any means of communication. Even if we never want that day to come, this setup ends up making that practice possible.
-
@ 000002de:c05780a7
2025-05-09 18:33:06
I've been eager to share my thoughts on "Return of the Strong Gods" by R.R. Reno since I finished it a week ago.
I found this book incredibly insightful. @SimpleStacker's excellent review prompted me to pick it up, and I'm glad I did. While I won't be providing a full review, I will share some of the key insights and thoughts that resonated with me.
Reno masterfully dissects the sociological underpinnings of the political shifts in the US and the Western world over the past decade and a half. His analysis of the rise of populism and nationalism is compelling and rings true to my personal observations. Reno's central thesis is that post-WW2, the West embraced "weaker gods"—ideals like democracy, pluralism, and liberalism—fearing a repeat of the strong nationalistic sentiments that led to Hitler's rise. He argues that this shift has left many people feeling culturally homeless and desperate for strong leadership that listens to their grievances.
A significant portion of Reno's argument is built on the ideas of Karl Popper, a philosopher I was unfamiliar with before reading this book. Popper's work "The Open Society and Its Enemies" has significantly influenced post-war consensus, which Reno argues has led to a disconnect between the political elite and the common people. This disconnect, he posits, is a primary driver of populist sentiments.
Reno also delves into economics, discussing Friedrich Hayek and his agreement with some of Popper's positions. This intersection of philosophy and economics provides a unique lens through which to view the political landscape. Reno's mention of the Treaty of Versailles as a catalyst for Hitler's rise is a point often overlooked in discussions about WW2. He argues that the punitive measures imposed on Germany created an environment ripe for a strongman to emerge.
One of the most compelling aspects of Reno's argument is his explanation of how the political elite often lose touch with the values and wishes of the people they represent. He draws a powerful analogy between cultural homelessness and the desperation that drives people to seek strong leadership. This section of the book particularly resonated with me, as it aligns with my own observations of the political climate.
I would have liked Reno to start his analysis with Woodrow Wilson, whose "making the world safe for democracy" slogan embodied a form of Christian nationalism. Wilson's ideals and the post-WW1 environment laid the groundwork for the open society movement, which Reno critiques. Exploring this historical context could have strengthened Reno's argument.
Reno occasionally conflates economic liberalism with libertinism, which I found to be a minor flaw in an otherwise strong argument. He rightly points out the need for moral ethics in society but seems to overlook the distinction between economic freedom and moral laxity. Reno's discussion of Milton Friedman's ideas further highlights this confusion. While Reno argues that free trade has contributed to many of our modern ills, I believe the issue lies more with nation-state trade agreements like NAFTA, which are not true examples of free trade.
Another area where I disagree with Reno is his conflation of the nation with the state. Nations are cultural entities that predate and can exist independently of states. Reno's argument would be stronger if he acknowledged this distinction, as it would clarify his points about national pride and cultural heritage.
Reno's final chapters offer a cautionary tale about the return of strong gods and the danger of making them idols. He warns against authoritarianism and the overreach of the state, advocating for a balance that respects cultural heritage without succumbing to nationalism.
In conclusion, "Return of the Strong Gods" is a thought-provoking exploration of the political and cultural shifts of our time. Reno's insights are valuable, and his arguments, while not without flaws, provide a fresh perspective on the rise of populism and nationalism. I recommend this book to anyone seeking a deeper understanding of the forces shaping our world today.
originally posted at https://stacker.news/items/975849
-
@ 30ceb64e:7f08bdf5
2025-04-26 20:33:30
Status: Draft
Author: TheWildHustle
Abstract
This NIP defines a framework for storing and sharing health and fitness profile data on Nostr. It establishes a set of standardized event kinds for individual health metrics, allowing applications to selectively access specific health information while preserving user control and privacy.
In this framework exist:
- NIP-101h.1 Weight, using kind 1351
- NIP-101h.2 Height, using kind 1352
- NIP-101h.3 Age, using kind 1353
- NIP-101h.4 Gender, using kind 1354
- NIP-101h.5 Fitness Level, using kind 1355
Motivation
I want to build and support an ecosystem of health and fitness related nostr clients that have the ability to share and utilize a bunch of specific interoperable health metrics.
- Selective access - Applications can access only the data they need
- User control - Users can choose which metrics to share
- Interoperability - Different health applications can share data
- Privacy - Sensitive health information can be managed independently
Specification
Kind Number Range
Health profile metrics use the kind number range 1351-1399:
| Kind      | Metric                             |
| --------- | ---------------------------------- |
| 1351      | Weight                             |
| 1352      | Height                             |
| 1353      | Age                                |
| 1354      | Gender                             |
| 1355      | Fitness Level                      |
| 1356-1399 | Reserved for future health metrics |
Common Structure
All health metric events SHOULD follow these guidelines:
- The content field contains the primary value of the metric
- Required tags:
  - `['t', 'health']` - For categorizing as health data
  - `['t', metric-specific-tag]` - For identifying the specific metric
  - `['unit', unit-of-measurement]` - When applicable
- Optional tags:
  - `['converted_value', value, unit]` - For providing alternative unit measurements
  - `['timestamp', ISO8601-date]` - When the metric was measured
  - `['source', application-name]` - The source of the measurement
Unit Handling
Health metrics often have multiple ways to be measured. To ensure interoperability:
- Where multiple units are possible, one standard unit SHOULD be chosen as canonical
- When using non-standard units, a `converted_value` tag SHOULD be included with the canonical unit
- Both the original and converted values should be provided for maximum compatibility
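As a minimal sketch of this rule, assuming kilograms as the canonical unit for weight (as the kind 1351 examples below do), a client might build the tag set for a measurement taken in pounds like this:

```typescript
// Minimal sketch of the canonical-unit rule, assuming kg as the canonical
// weight unit (as the kind 1351 examples in this NIP do).
function weightTags(valueLb: number): string[][] {
  const kg = (valueLb * 0.453592).toFixed(2); // lb -> kg conversion
  return [
    ["unit", "lb"],
    ["t", "health"],
    ["t", "weight"],
    ["converted_value", kg, "kg"],
  ];
}

console.log(weightTags(154)); // converted_value is "69.85", matching the weight example below
```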
Client Implementation Guidelines
Clients implementing this NIP SHOULD:
- Allow users to explicitly choose which metrics to publish
- Support reading health metrics from other users when appropriate permissions exist
- Support updating metrics with new values over time
- Preserve tags they don't understand for future compatibility
- Support at least the canonical unit for each metric
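As an illustration of these guidelines, a defensive reader for kind 1351 (weight) events might prefer the canonical conversion when present and fall back to converting the raw unit itself. This is only a sketch based on the examples in this NIP:

```typescript
// Sketch of a defensive reader for kind 1351 (weight) events: prefer the
// canonical converted_value when present, otherwise convert from the unit tag.
interface NostrEvent {
  kind: number;
  content: string;
  tags: string[][];
}

function readWeightKg(ev: NostrEvent): number | null {
  if (ev.kind !== 1351) return null;
  const converted = ev.tags.find((t) => t[0] === "converted_value" && t[2] === "kg");
  if (converted) return parseFloat(converted[1]);
  const unit = ev.tags.find((t) => t[0] === "unit")?.[1];
  const value = parseFloat(ev.content);
  if (unit === "kg") return value;
  if (unit === "lb") return value * 0.453592;
  return null; // unknown unit: leave it for a more capable client
}

console.log(
  readWeightKg({
    kind: 1351,
    content: "154",
    tags: [["unit", "lb"], ["t", "health"], ["t", "weight"]],
  }),
);
```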
Extensions
New health metrics can be proposed as extensions to this NIP using the format:
- NIP-101h.X where X is the metric number
Each extension MUST specify:
- A unique kind number in the range 1351-1399
- The content format and meaning
- Required and optional tags
- Examples of valid events
Privacy Considerations
Health data is sensitive personal information. Clients implementing this NIP SHOULD:
- Make it clear to users when health data is being published
- Consider incorporating NIP-44 encryption for sensitive metrics
- Allow users to selectively share metrics with specific individuals
- Provide easy ways to delete previously published health data
NIP-101h.1: Weight
Description
This NIP defines the format for storing and sharing weight data on Nostr.
Event Kind: 1351
Content
The content field MUST contain the numeric weight value as a string.
Required Tags
- ['unit', 'kg' or 'lb'] - Unit of measurement
- ['t', 'health'] - Categorization tag
- ['t', 'weight'] - Specific metric tag
Optional Tags
- ['converted_value', value, unit] - Provides the weight in alternative units for interoperability
- ['timestamp', ISO8601 date] - When the weight was measured
Examples
json { "kind": 1351, "content": "70", "tags": [ ["unit", "kg"], ["t", "health"], ["t", "weight"] ] }
json { "kind": 1351, "content": "154", "tags": [ ["unit", "lb"], ["t", "health"], ["t", "weight"], ["converted_value", "69.85", "kg"] ] }
NIP-101h.2: Height
Status: Draft
Description
This NIP defines the format for storing and sharing height data on Nostr.
Event Kind: 1352
Content
The content field can use two formats:
- For metric height: A string containing the numeric height value in centimeters (cm)
- For imperial height: A JSON string with feet and inches properties
Required Tags
- `['t', 'health']` - Categorization tag
- `['t', 'height']` - Specific metric tag
- `['unit', 'cm' or 'imperial']` - Unit of measurement
Optional Tags
- `['converted_value', value, 'cm']` - Provides height in centimeters for interoperability when imperial is used
- `['timestamp', ISO8601-date]` - When the height was measured
Examples
Example 1: Metric height
```json
{
  "kind": 1352,
  "content": "180",
  "tags": [
    ["unit", "cm"],
    ["t", "health"],
    ["t", "height"]
  ]
}
```
Example 2: Imperial height with conversion
```json
{
  "kind": 1352,
  "content": "{\"feet\": 5, \"inches\": 11}",
  "tags": [
    ["unit", "imperial"],
    ["t", "health"],
    ["t", "height"],
    ["converted_value", "180", "cm"]
  ]
}
```
Implementation Notes
- Centimeters (cm) is the canonical unit for height interoperability
- When using imperial units, a conversion to centimeters SHOULD be provided
- Height values SHOULD be positive integers
- For maximum compatibility, clients SHOULD support both formats
NIP-101h.3: Age
Status: Draft
Description
This NIP defines the format for storing and sharing age data on Nostr.
Event Kind: 1353
Content
The content field MUST contain the numeric age value as a string.
Required Tags
- `['unit', 'years']` - Unit of measurement
- `['t', 'health']` - Categorization tag
- `['t', 'age']` - Specific metric tag
Optional Tags
- `['timestamp', ISO8601-date]` - When the age was recorded
- `['dob', ISO8601-date]` - Date of birth (if the user chooses to share it)
Examples
Example 1: Basic age
```json
{
  "kind": 1353,
  "content": "30",
  "tags": [
    ["unit", "years"],
    ["t", "health"],
    ["t", "age"]
  ]
}
```
Example 2: Age with DOB
```json
{
  "kind": 1353,
  "content": "30",
  "tags": [
    ["unit", "years"],
    ["t", "health"],
    ["t", "age"],
    ["dob", "1995-01-01"]
  ]
}
```
Implementation Notes
- Age SHOULD be represented as a positive integer
- For privacy reasons, date of birth (dob) is optional
- Clients SHOULD consider updating age automatically if date of birth is known
- Age can be a sensitive metric and clients may want to consider encrypting this data
NIP-101h.4: Gender
Status: Draft
Description
This NIP defines the format for storing and sharing gender data on Nostr.
Event Kind: 1354
Content
The content field contains a string representing the user's gender.
Required Tags
- `['t', 'health']` - Categorization tag
- `['t', 'gender']` - Specific metric tag
Optional Tags
- `['timestamp', ISO8601-date]` - When the gender was recorded
- `['preferred_pronouns', string]` - User's preferred pronouns
Common Values
While any string value is permitted, the following common values are recommended for interoperability:
- male
- female
- non-binary
- other
- prefer-not-to-say
Examples
Example 1: Basic gender
```json
{
  "kind": 1354,
  "content": "female",
  "tags": [
    ["t", "health"],
    ["t", "gender"]
  ]
}
```
Example 2: Gender with pronouns
```json
{
  "kind": 1354,
  "content": "non-binary",
  "tags": [
    ["t", "health"],
    ["t", "gender"],
    ["preferred_pronouns", "they/them"]
  ]
}
```
Implementation Notes
- Clients SHOULD allow free-form input for gender
- For maximum compatibility, clients SHOULD support the common values
- Gender is a sensitive personal attribute and clients SHOULD consider appropriate privacy controls
- Applications focusing on health metrics should be respectful of gender diversity
NIP-101h.5: Fitness Level
Status: Draft
Description
This NIP defines the format for storing and sharing fitness level data on Nostr.
Event Kind: 1355
Content
The content field contains a string representing the user's fitness level.
Required Tags
- `['t', 'health']` - Categorization tag
- `['t', 'fitness']` - Fitness category tag
- `['t', 'level']` - Specific metric tag
Optional Tags
- `['timestamp', ISO8601-date]` - When the fitness level was recorded
- `['activity', activity-type]` - Specific activity the fitness level relates to
- `['metrics', JSON-string]` - Quantifiable fitness metrics used to determine level
Common Values
While any string value is permitted, the following common values are recommended for interoperability:
- beginner
- intermediate
- advanced
- elite
- professional
Examples
Example 1: Basic fitness level
```json
{
  "kind": 1355,
  "content": "intermediate",
  "tags": [
    ["t", "health"],
    ["t", "fitness"],
    ["t", "level"]
  ]
}
```
Example 2: Activity-specific fitness level with metrics
```json
{
  "kind": 1355,
  "content": "advanced",
  "tags": [
    ["t", "health"],
    ["t", "fitness"],
    ["t", "level"],
    ["activity", "running"],
    ["metrics", "{\"vo2max\": 52, \"weekly_km\": 60}"]
  ]
}
```
Implementation Notes
- Fitness level is subjective and may vary by activity
- The activity tag can be used to specify fitness level for different activities
- The metrics tag can provide objective measurements to support the fitness level
- Clients can extend this format to include activity-specific fitness assessments
- For general fitness apps, the simple beginner/intermediate/advanced scale is recommended
-
@ 91bea5cd:1df4451c
2025-04-26 10:16:21
The Brazilian Legal Context and Consent
In the Brazilian legal system, the consent of the injured party can, in certain circumstances, remove the unlawfulness of an act that would otherwise constitute a crime (such as minor bodily injury, under Art. 129 of the Penal Code). However, consent has clear limits: it is not valid for inalienable legal goods, such as life, and its effectiveness is questionable in cases of serious or very serious bodily injury.
The practice of consensual BDSM sits in a complex zone. In theory, if both partners are adults, legally capable, and have freely and informedly consented to the acts performed, without these resulting in serious permanent injury or a non-consented risk of death, there would be no crime. The challenge lies in proving that consent, especially if one of the parties later denies it or alleges coercion.
The Maria da Penha Law (Law No. 11,340/2006)
The Maria da Penha Law is a fundamental milestone in the protection of women against domestic and family violence. It establishes mechanisms to curb and prevent such violence, defining its forms (physical, psychological, sexual, patrimonial, and moral) and providing for urgent protective measures.
Although essential, applying the law in BDSM contexts can be delicate. An allegation of violence by the woman, even when the injuries or situations arise from consensual practices, tends to receive priority attention from the authorities, given the presumption of vulnerability established by the law. This can create a scenario in which the male partner faces significant difficulty in demonstrating the consensual nature of the acts, especially if there is no robust pre-constituted evidence.
Other risks:
Serious or very serious bodily injury (Art. 129, §§ 1 and 2, Penal Code) cannot be justified by consent and may give rise to criminal prosecution.
Crimes against sexual dignity (Arts. 213 et seq. of the Penal Code) are prosecuted by unconditional public action and do not depend on the victim's complaint for investigation and indictment.
Risks of False Accusations and Future Allegations of Coercion
The risks for BDSM practitioners, especially for the partner who takes the dominant role or who inflicts pain/restraint (frequently, but not exclusively, the man), can arise on several fronts:
- External accusations: Neighbors, family members, or friends who are unaware of the consensual nature of the relationship may interpret sounds, marks, or behaviors as signs of abuse and report them to the authorities.
- Future allegations by the partner: In the event of a troubled breakup, revenge, regret, or a change of perspective, the partner may reinterpret past practices as abuse and seek redress or retaliation through a complaint. The allegation may be that consent never existed or that it was vitiated.
- Allegation of coercion: One of the most complex claims to refute is that consent was obtained through coercion (physical, moral, psychological, or economic). The partner may allege, for example, that she felt pressured, intimidated, or dependent, and that her "yes" was not genuine. Proving the absence of coercion after the fact is extremely difficult.
- Male naivety and vulnerability: Many men, trusting the consensual dynamic and their partner, may neglect the need for precautions. The belief that "this would never happen to me," or ignorance of the legal implications and the procedural weight of an accusation under the Maria da Penha Law, can leave them vulnerable. The presence of physical marks, even consensual ones, can be used as evidence of assault, inverting the burden of proof in practice, even if not in legal theory.
Prevention and Mitigation Strategies
There is no foolproof method for completely avoiding the risk of a false accusation, but several measures can be adopted to build a record of consent and reduce vulnerabilities:
- Explicit and continuous communication: The basis of any safe BDSM practice is constant communication. Negotiating limits, desires, safewords, and expectations before, during, and after scenes is crucial. Keeping records of these negotiations (emails, messages, shared diaries) can be useful.
- Documentation of consent:
  - Relationship/scene contracts: Although the legal validity of "BDSM contracts" is debatable in Brazil (they cannot set aside rules of public order), they serve as strong evidence of the parties' intent, of the detailed negotiation of limits, and of informed consent. They should be clear, dated, signed and, ideally, notarized (to prove the date and the authenticity of the signatures).
  - Audiovisual records: Recording (with explicit consent to the recording) the discussions about consent and limits before the scenes can be powerful evidence. Recording the scenes themselves is more complex, due to privacy issues and potential misuse, but it can be considered in specific cases, always with documented mutual consent to the recording.
    Important: the recording must be made with the other party's knowledge, so as not to constitute a violation of privacy (Art. 5, X, of the Federal Constitution and Art. 20 of the Civil Code).
- Witnesses: In some BDSM community contexts, the presence of trusted third parties during negotiations, or even scenes, can serve as testimony, although this may alter the couple's intimate dynamic.
- Clear establishment of limits and safewords: Defining and rigorously respecting limits (what is allowed, what is forbidden) and safewords is fundamental. Disregarding a safeword ends the consent for that act.
- Continuous assessment of consent: Consent is not a blank check; it must be enthusiastic, continuous, and revocable at any moment. Checking on the partner's well-being during the scene ("check-ins") is essential.
- Discretion and care with physical evidence: Being discreet about the nature of the relationship can prevent external misunderstandings. After scenes that leave marks, it is prudent that both partners be aware and in agreement, perhaps documenting it with dated photos and a note about the consensual nature of the practice that produced them.
- Preventive legal advice: Consulting a lawyer specialized in family and criminal law, with sensitivity to alternative relationship dynamics, can provide personalized guidance on the best ways to document consent and understand the specific legal risks.
Important Observations
- No documentation replaces the need for real, free, informed, and continuous consent.
- Brazilian law protects "physical integrity" and "human dignity." Practices that result in serious injury, or that violate dignity in a non-consensual way (or with vitiated consent), will be illegal regardless of any prior agreement.
- In the event of an accusation, the existence of robust documentation of consent does not guarantee acquittal, but it significantly strengthens the defense, helping to demonstrate the consensual nature of the relationship and of the practices.
- The allegation of future coercion is particularly difficult to prevent with documents alone. A consistent history of open communication (WhatsApp/Telegram/emails), mutual respect, and the absence of excessive dependence or control in the relationship can help contextualize the dynamic as non-coercive.
- Care with visible marks and serious injuries: Practices that result in severe bruising or injuries can be interpreted as assault, even when consented to. Avoiding excess protects not only physical integrity, but also prevents future legal questioning.
What Vitiated Consent Is
In law, vitiated consent occurs when a person agrees to something, but their will is not free or full; that is, the consent exists formally, but it is defective for some reason.
The Brazilian Civil Code (Arts. 138 to 165) defines several forms of defects of consent. The main ones are:
Error: The person is mistaken about what they are consenting to. (E.g., the person believes they will take part in light play, but is in fact subjected to heavy practices.)
Fraud (dolo): The person is deliberately deceived into accepting something. (E.g., someone lies about what will happen during the practice.)
Coercion: The person is forced or threatened into consenting. (E.g., "If you don't accept, I'll break up with you": strong emotional pressure can be seen as coercion.)
State of danger or lesion: The person accepts something in a situation of extreme need or through abuse of their vulnerability. (E.g., someone in a very fragile emotional state is induced to accept practices they would normally refuse.)
In the context of BDSM this is even more delicate: even if the person "signed" a contract or said "yes," if they later claim that their consent was given under fear, deception, or psychological pressure, the consent can be considered vitiated and, therefore, legally invalid.
This has two serious implications:
- The crime is not de-characterized: If there is a defect, the consent is disregarded and the practice can be treated as an ordinary crime (bodily injury, rape, torture, etc.).
- Proof of consent needs to be solid: showing that the person was informed, lucid, free, and under no kind of coercion.
Vitiated consent is when the person formally agrees, but in a deceived, forced, or pressured way, rendering the consent useless for legal purposes.
Conclusion
Couples who practice consensual BDSM in Brazil navigate terrain that demands not only mutual trust and exceptional communication, but also a sharp awareness of the legal complexities and of the risks of mistaken interpretations or ill-intentioned accusations. Although BDSM is a legitimate expression of human sexuality, practicing it in Brazil demands redoubled responsibility. Having clear proof of consent, keeping communication open, and acting with prudence are effective ways to protect yourself from false allegations and to preserve the freedom and safety of everyone involved. Although controversial laws such as the Maria da Penha Law are "vital" for protection against real violence, BDSM practitioners, and men in particular in this context, should adopt a proactive and prudent posture to mitigate the risks inherent in the potential misinterpretation or instrumentalization of these practices and laws, ensuring that the expression of their consensuality is safeguarded as far as possible.
Important: In Brazil, even with all of this, the Public Prosecutor's Office can press charges for crimes such as serious bodily injury, rape, or torture, regardless of consent. So prudence in one's practices is fundamental.
Legal notice: This article is for information purposes only and does not constitute legal advice. Laws and interpretations may change, and each situation is unique. It is recommended to seek guidance from a qualified lawyer to discuss specific cases.
If you enjoyed this article, make a contribution, and if you have any relevant point for the article, leave your comment.
-
@ 6c05c73e:c4356f17
2025-05-10 01:50:20
Why invest R$100
A politically different guide to investing
You have certainly found yourself in the following situation: you paid everything you owed, and now you have a hundred reais left over to invest.
This is very common, especially if you are young and in your first job. Still, even the more experienced go through it.
Then comes that question: do I really need to invest these hundred reais? After all, it is a small amount, and what would the return be?
Well, to answer your question: the answer is always yes. If, someday in your life, you want to accumulate resources and escape this vicious circle, you need to save and invest.
In this article, though, let's analyze some other aspects that can multiply that invested amount by 10.
When to invest R$100?
Now, tomorrow, or next year?
Many people think that all money earned needs to be invested IMMEDIATELY. In truth, it does not have to be that way.
Every real you earn should be allocated with diligence. Deciding when to invest is a crucial task. But before that happens, you need to define your goals.
Above all, where you put your money will determine the pace at which you invest. And that is crucial in an investment journey.
Where?
"To keep a fire going, you have to add more wood." Hold on a moment. What does this phrase mean? Let's create a hypothetical situation here and analyze two forms of investment:
Investing in stocks on the stock exchange (variable income); investing in yourself (increasing your capacity).
First hypothesis:
You invested R$100 the whole year, every month. That generated a gross total of R$1,200. You studied and applied yourself, analyzed stocks, watched plenty of videos and analyses, and made a 15% return on the year.
In this first hypothesis, you accumulated R$1,200 + a 15% return (R$180), leaving you with R$1,380 gross. At that amount there is no tax incidence, so no DARFs here.
Was it worth investing 2 hours a day × 200 days in the year, that is, 400 hours (24,000 minutes), to earn R$180?
Duhhh, no. You got more experienced and you learned. But the amount is tiny; it doesn't even tickle, let alone make you smile. And it doesn't get you out of this situation.
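Running the article's simplified numbers (a sketch; it treats the whole R$1,200 as if it earned the full 15%, which overstates the return on the later contributions):

```typescript
// Reproducing the article's simplified arithmetic. Note the simplification:
// the whole R$1,200 is assumed to earn the full 15%, when in reality money
// contributed late in the year would earn much less than a full year's return.
const contributed = 100 * 12;        // R$1,200
const gross = contributed * 1.15;    // R$1,380
const profit = gross - contributed;  // R$180
const minutesStudied = 2 * 60 * 200; // 2 h/day over 200 days = 24,000 minutes
console.log({ contributed, gross, profit, minutesStudied });
```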
Second hypothesis:
I don't know what your trade is right now. But I know two things: either you like it and want to specialize, or you want to change trades soon. So, my friend, you already know…
Now, let's take those R$100 a month and invest them in a technical course. It is faster than a university degree, it will be your first official diploma, it makes you somebody, and it gives you a trade.
With that, you have a good chance of changing jobs. A salary increase within a year is practically certain. You will earn far more than the R$180 gross return of the first case.
Now, want to know the real reason nobody tells you this on YouTube, Instagram, and TikTok?
My friend, people only recommend what is good for them. There is nothing wrong with that. What I want you to see here is that you are on your own (or should be, in order to reflect) and need to choose what is best for you.
It is important to stop and think for a while and understand what will be best for you, which path to follow. And period: only you know what is best for you.
How?
To close today's reasoning, let's get to the million-dollar question: but how do I do this? My dear reader, you will do an exercise with me in a moment.
Put your phone on silent, set a 15-minute timer, grab paper and pen, and write down on a sheet what you like + what you would like to do with your life.
That list has to contain YOUR taste. Not your friend's, the influencer's, the celebrity's, or the athlete's. Do these people have influence over your life? Of course! But it is up to you to reflect on what you really want.
Naturally, you will not put "I want a Ferrari this year" on your list, because that is not how it works. In truth, you can want the Ferrari, no problem at all.
But know that before the Ferrari will have to come the Gol, the Civic, the 325, the 911, and then the Ferrari. Everything in life is step by step, my friend. Or do you think you will make quantum leaps in life without being prepared? That is not how life works.
In summary,
to conclude this text, here is the practical recap of what to do with a hundred reais a month:
Take time to understand what you want and what you like. Visualize and write down the goals you want on a sheet. Invest in yourself relentlessly. You will reach the top of the mountain. But you have to climb one level at a time.
If you made it this far, it is because you liked it and want to change your life. Send this to that friend of yours who needs this wake-up call. Thank you for your time.
-
@ e516ecb8:1be0b167
2025-05-10 00:57:52
The Scicast podcast recently ran an episode about retirement, describing the different systems and their history. Although it was an interesting episode, I think it was very unfair to the privately funded system; for a science-focused podcast (and one that mentioned several times that the pay-as-you-go system is mathematically unviable), I felt a few points needed clarification.
I am not saying the Chilean system is perfect. It is not. But it had the potential to improve, and the political class has only made it worse and worse.
For example, Otto von Bismarck knew he would make the elderly population (the few who reached retirement age) dependent on the State, and that is what led Franco to adopt the system in Spain, where today taxes have to be raised to keep the pay-as-you-go system artificially alive.
Returns are low in the Chilean system because there are gaps (months without contributions) and because you only contribute 10%. That is, suppose your salary is 100 but you contribute 10 in January and in no other month. At the end of your working life (45 years) you will have accumulated 450 and, with investment returns, 1,350. Divide that total by a life expectancy that keeps rising, 20 years (240 months), and it yields a monthly pension of about 5.6. Very low, since the salary used in the calculation was 100, and lower still once inflation is taken into account. But the fact that there were months without contributions is never taken into account.
When judging returns, one should consider that, on average, for every 1 peso contributed, about 3 pesos are obtained, according to the calculations of the commissions.
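Working through the gaps scenario above with those figures (a sketch using the article's own assumptions):

```typescript
// The gaps scenario from the paragraph above: salary 100, a single yearly
// contribution of 10 for 45 years, each peso roughly tripling by retirement,
// paid out over a 20-year (240-month) life expectancy.
const contributions = 10 * 45;          // 450
const atRetirement = contributions * 3; // 1,350
const monthlyPension = atRetirement / (20 * 12);
console.log(monthlyPension.toFixed(2)); // ≈ 5.63, versus a reference salary of 100
```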
Several commissions have been created to study how to improve the system in Chile, and they all conclude the same things:
1- contributing 10% is very low compared with other systems;
2- gaps should not be tolerated (the retirement age should not be a natural age, but years of contributions);
3- there should be adjustments for every increase in life expectancy.
Politicians do not touch these three points, because they are unpopular and politicians live for the next election. So they prefer other routes, such as the PGU (the universal basic pension already mentioned on the podcast), which has already been calculated to be unviable in the long term.
Consider that 10% is low, and even so Chile is ranked very well and could improve if politicians had the courage to act on the results of the commissions they themselves requested.
https://www.65ymas.com/economia/pensiones/10-paises-con-mejores-sistemas-pensiones_22627_102.html
The complexity of the investments was mentioned, which is a myth, since there are funds whose allocation follows the worker's age (Fund A for the young, more profitable but riskier, and Fund E, less profitable but also less risky). In the crisis, the people who lost a lot of money did so because they were in a fund that did not match their age. If you are young, you can recover the returns in a risky fund. But if you are old at the time of a crisis and are not in a stable fund, you have already lost, with no way to recover before turning 65.
Furthermore, it is not considered that there is also an additional voluntary fund (APV) to which each person can contribute, with benefits such as annual contributions from the State to reward those who pay in, and which also earns returns.
On the other hand, it was possible to withdraw money from the individual funds during the pandemic crisis, something that would be impossible in a pay-as-you-go fund, where every monetary unit deposited by an active worker is spent at the same time by a retiree.
The military pay-as-you-go model in Chile requires 20 years of contributions without gaps; with any gap the pension is US$0, something that does not happen in the civilian system (and could not happen, because the money there is yours).
The individual capitalization system in Chile allows workers to own investment assets, stakes in the companies that helped the country develop, something the politicians who want to tear the system down seem not to understand.
And what about inflation? Investing in income-producing assets is useful both for mitigating inflation and for becoming an owner, and the latter will be relevant when automation arrives. Instead of keeping an artificial record of how much a robot produces, it is more transparent to profit from the dividends produced by the company in question.
The Nordic system was mentioned (which, after all, is practically the only European system not on the verge of collapse). It is worth considering that the money Norway earns from oil goes into an investment fund similar to the one we each have individually in Chile, and a large part of the Norwegian welfare state is paid out of it. But that seems very distant, with the Latin American political class more inclined to squander it on populist nonsense.
The pay-as-you-go model is pyramidal; although it may seem a dangerous narrative, it is good for people to see it that way (if a private citizen took money from the young to give to the old, he would probably end up in prison for a Ponzi scheme), and the ever-lower birth rate is a problem the State cannot solve, however totalitarian it may be (fiscal and monetary stimulus and so on have been tried, with poor, short-sighted results).
Moreover, however demonized it is in the narrative, the Chilean system is a mixed system with a solidarity pillar for those who did not contribute enough.
In Argentina, the country moved from individual capitalization to the pay-as-you-go system, and the money was used by politicians when they could not pay the bill for an ever-larger and corrupt State, leaving retirees at the mercy of the crumbs handed out by politicians in each electoral campaign. You do not trust the market; I do not trust the State.
-
@ 207ad2a0:e7cca7b0
2025-01-07 03:46:04Quick context: I wanted to check out Nostr's longform posts and this blog post seemed like a good one to try and mirror. It's originally from my free to read/share attempt to write a novel, but this post here is completely standalone - just describing how I used AI image generation to make a small piece of the work.
Hold on, put your pitchforks down - outside of using Grammarly & Emacs for grammatical corrections - not a single character was generated or modified by computers; a non-insignificant portion of my first draft originated on pen & paper. No AI is ~~weird and crazy~~ imaginative enough to write like I do. The only successful AI contribution you'll find is a single image, the map, which I heavily edited. This post will go over how I generated and modified an image using AI, which I believe brought some value to the work, and cover a few quick thoughts about AI towards the end.
Let's be clear, I can't draw, but I wanted a map which I believed would improve the story I was working on. After getting abysmal results by prompting AI with text only I decided to use "Diffuse the Rest," a Stable Diffusion tool that allows you to provide a reference image + description to fine tune what you're looking for. I gave it this Microsoft Paint looking drawing:
and after a number of outputs, selected this one to work on:
The image is way better than the one I provided, but had I used it as is, I still feel it would have decreased the quality of my work instead of increasing it. After firing up Gimp I cropped out the top and bottom, expanded the ocean and separated the landmasses, then copied the top right corner of the large landmass to replace the bottom left that got cut off. Now we've got something that looks like concept art: not horrible, and gets the basic idea across, but it's still due for a lot more detail.
The next thing I did was add some texture to make it look more map like. I duplicated the layer in Gimp and applied the "Cartoon" filter to both for some texture. The top layer had a much lower effect strength to give it a more textured look, while the lower layer had a higher effect strength that looked a lot like mountains or other terrain features. Creating a layer mask allowed me to brush over spots to display the lower layer in certain areas, giving it some much needed features.
At this point I'd made it to where I felt it may improve the work instead of detracting from it - at least after labels and borders were added, but the colors seemed artificial and out of place. Luckily, however, this is when PhotoFunia could step in and apply a sketch effect to the image.
At this point I was pretty happy with how it was looking, it was close to what I envisioned and looked very visually appealing while still being a good way to portray information. All that was left was to make the white background transparent, add some minor details, and add the labels and borders. Below is the exact image I wound up using:
Overall, I'm very satisfied with how it turned out, and if you're working on a creative project, I'd recommend attempting something like this. It's not a central part of the work, but it improved the chapter a fair bit, and it was doable despite my lack of talent and without allocating any budget to what is a free-to-read-and-share story.
The AI Generated Elephant in the Room
If you've read my non-fiction writing before, you'll know that I think AI will find its place around the skill floor as opposed to the skill ceiling. As you saw with my input, I have absolutely zero drawing talent, but with some elbow grease and an existing creative direction before and after generating an image, I was able to get something well above what I could have otherwise accomplished. Outside of lowest-common-denominator uses - like stock photos that exist solely to make a link preview eye-catching - I doubt AI will be wholesale replacing most creative works anytime soon. I can assure you that I tried numerous times to describe the map without providing a reference image, and if I had used one of those outputs (or even just the unedited output after providing the reference image), it would have decreased the quality of my work instead of improving it.
I'm going to go out on a limb and expect that AI image, text, and video is all going to find its place in slop & generic content (such as AI generated slop replacing article spinners and stock photos respectively) and otherwise be used in a supporting role for various creative endeavors. For people working on projects like I'm working on (e.g. intended budget $0) it's helpful to have an AI capable of doing legwork - enabling projects to exist or be improved in ways they otherwise wouldn't have. I'm also guessing it'll find its way into more professional settings for grunt work - think a picture frame or fake TV show that would exist in the background of an animated project - likely a detail most people probably wouldn't notice, but that would save the creators time and money and/or allow them to focus more on the essential aspects of said work. Beyond that, as I've predicted before: I expect plenty of emails will be generated from a short list of bullet points, only to be summarized by the recipient's AI back into bullet points.
I will also make a prediction counter to what seems mainstream: AI is about to peak for a while. The start of AI image generation was with Google's DeepDream in 2015 - image recognition software that could be run in reverse to "recognize" patterns where there were none, effectively generating an image from digital noise or an unrelated image. While I'm not an expert by any means, I don't think we're too far off from that a decade later, just using very fine tuned tools that develop more coherent images. I guess that we're close to maxing out how efficiently we're able to generate images and video in that manner, and the hard caps on how much creative direction we can have when using AI - as well as the limits to how long we can keep it coherent (e.g. long videos or a chronologically consistent set of images) - will prevent AI from progressing too far beyond what it is currently unless/until another breakthrough occurs.
-
@ e6817453:b0ac3c39
2025-01-05 14:29:17The Rise of Graph RAGs and the Quest for Data Quality
As we enter a new year, it’s impossible to ignore the boom of retrieval-augmented generation (RAG) systems, particularly those leveraging graph-based approaches. The previous year saw a surge in advancements and discussions about Graph RAGs, driven by their potential to enhance large language models (LLMs), reduce hallucinations, and deliver more reliable outputs. Let’s dive into the trends, challenges, and strategies for making the most of Graph RAGs in artificial intelligence.
Booming Interest in Graph RAGs
Graph RAGs have dominated the conversation in AI circles. With new research papers and innovations emerging weekly, it’s clear that this approach is reshaping the landscape. These systems, especially those developed by tech giants like Microsoft, demonstrate how graphs can:
- Enhance LLM Outputs: By grounding responses in structured knowledge, graphs significantly reduce hallucinations.
- Support Complex Queries: Graphs excel at managing linked and connected data, making them ideal for intricate problem-solving.
Conferences on linked and connected data have increasingly focused on Graph RAGs, underscoring their central role in modern AI systems. However, the excitement around this technology has brought critical questions to the forefront: How do we ensure the quality of the graphs we’re building, and are they genuinely aligned with our needs?
Data Quality: The Foundation of Effective Graphs
A high-quality graph is the backbone of any successful RAG system. Constructing these graphs from unstructured data requires attention to detail and rigorous processes. Here’s why:
- Richness of Entities: Effective retrieval depends on graphs populated with rich, detailed entities.
- Freedom from Hallucinations: Poorly constructed graphs amplify inaccuracies rather than mitigating them.
Without robust data quality, even the most sophisticated Graph RAGs become ineffective. As a result, the focus must shift to refining the graph construction process. Improving data strategy and ensuring meticulous data preparation is essential to unlock the full potential of Graph RAGs.
Hybrid Graph RAGs and Variations
While standard Graph RAGs are already transformative, hybrid models offer additional flexibility and power. Hybrid RAGs combine structured graph data with other retrieval mechanisms, creating systems that:
- Handle diverse data sources with ease.
- Offer improved adaptability to complex queries.
Exploring these variations can open new avenues for AI systems, particularly in domains requiring structured and unstructured data processing.
Ontology: The Key to Graph Construction Quality
Ontology — defining how concepts relate within a knowledge domain — is critical for building effective graphs. While this might sound abstract, it’s a well-established field blending philosophy, engineering, and art. Ontology engineering provides the framework for:
- Defining Relationships: Clarifying how concepts connect within a domain.
- Validating Graph Structures: Ensuring constructed graphs are logically sound and align with domain-specific realities.
Traditionally, ontologists — experts in this discipline — have been integral to large enterprises and research teams. However, not every team has access to dedicated ontologists, leading to a significant challenge: How can teams without such expertise ensure the quality of their graphs?
How to Build Ontology Expertise in a Startup Team
For startups and smaller teams, developing ontology expertise may seem daunting, but it is achievable with the right approach:
- Assign a Knowledge Champion: Identify a team member with a strong analytical mindset and give them time and resources to learn ontology engineering.
- Provide Training: Invest in courses, workshops, or certifications in knowledge graph and ontology creation.
- Leverage Partnerships: Collaborate with academic institutions, domain experts, or consultants to build initial frameworks.
- Utilize Tools: Introduce ontology development tools like Protégé, OWL, or SHACL to simplify the creation and validation process.
- Iterate with Feedback: Continuously refine ontologies through collaboration with domain experts and iterative testing.
So while it is not always affordable for a startup to keep a dedicated ontologist or knowledge engineer on the team, you can involve consultants or develop "barefoot experts" in-house.
You can read about barefoot experts in my article:
Even startups can achieve robust and domain-specific ontology frameworks by fostering in-house expertise.
How to Find or Create Ontologies
For teams venturing into Graph RAGs, several strategies can help address the ontology gap:
1. Leverage Existing Ontologies: Many industries and domains already have open ontologies. For instance:
   - Public Knowledge Graphs: Resources like Wikipedia’s graph offer a wealth of structured knowledge.
   - Industry Standards: Enterprises such as Siemens have invested in creating and sharing ontologies specific to their fields.
   - Business Framework Ontology (BFO): A valuable resource for enterprises looking to define business processes and structures.
2. Build In-House Expertise: If budgets allow, consider hiring knowledge engineers or providing team members with the resources and time to develop expertise in ontology creation.
3. Utilize LLMs for Ontology Construction: Interestingly, LLMs themselves can act as a starting point for ontology development (a sketch follows this list):
   - Prompt-Based Extraction: LLMs can generate draft ontologies by leveraging their extensive training on graph data.
   - Domain Expert Refinement: Combine LLM-generated structures with insights from domain experts to create tailored ontologies.
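To make prompt-based extraction concrete, here is a minimal sketch in Python. The `call_llm` helper is hypothetical (wire in whichever LLM client you use), and the JSON-triple format is just one reasonable choice for a draft that a domain expert can then refine:

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send the prompt to your LLM provider
    and return the raw text completion."""
    raise NotImplementedError("wire up your LLM client here")

def draft_ontology(domain: str, sample_text: str) -> list[dict]:
    """Ask the LLM for a skeleton ontology as (subject, relation, object) triples."""
    prompt = (
        f"You are an ontology engineer for the {domain} domain.\n"
        "From the text below, extract candidate concepts and relationships.\n"
        "Return ONLY a JSON list of objects with keys "
        "'subject', 'relation', 'object'.\n\n"
        f"Text:\n{sample_text}"
    )
    raw = call_llm(prompt)
    return json.loads(raw)  # in practice, validate and repair malformed JSON

# A domain expert then reviews the draft before formalizing it in OWL/RDF:
# for t in draft_ontology("manufacturing", document_text):
#     print(t["subject"], "--", t["relation"], "->", t["object"])
```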
Parallel Ontology and Graph Extraction
An emerging approach involves extracting ontologies and graphs in parallel. While this can streamline the process, it presents challenges such as:
- Detecting Hallucinations: Differentiating between genuine insights and AI-generated inaccuracies.
- Ensuring Completeness: Ensuring no critical concepts are overlooked during extraction.
Teams must carefully validate outputs to ensure reliability and accuracy when employing this parallel method.
LLMs as Ontologists
While traditionally dependent on human expertise, ontology creation is increasingly supported by LLMs. These models, trained on vast amounts of data, possess inherent knowledge of many open ontologies and taxonomies. Teams can use LLMs to:
- Generate Skeleton Ontologies: Prompt LLMs with domain-specific information to draft initial ontology structures.
- Validate and Refine Ontologies: Collaborate with domain experts to refine these drafts, ensuring accuracy and relevance.
However, for validation and graph construction, formal tools such as OWL, SHACL, and RDF should be prioritized over LLMs to minimize hallucinations and ensure robust outcomes.
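As a sketch of that validation step, here is how a graph could be checked against SHACL shapes in Python using the rdflib and pySHACL libraries (the file names are illustrative; this assumes your extracted graph and your ontology's constraints are serialized as Turtle):

```python
from rdflib import Graph
from pyshacl import validate

# The knowledge graph extracted from your documents...
data_graph = Graph().parse("extracted_graph.ttl", format="turtle")
# ...and the SHACL shapes encoding your ontology's constraints.
shapes_graph = Graph().parse("ontology_shapes.ttl", format="turtle")

conforms, report_graph, report_text = validate(
    data_graph,
    shacl_graph=shapes_graph,
    inference="rdfs",      # apply RDFS inference before checking
    abort_on_first=False,  # collect every violation, not just the first
)

if not conforms:
    # Violations point at exactly where the extraction drifted from the
    # ontology -- prime suspects for hallucinated triples.
    print(report_text)
```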
Final Thoughts: Unlocking the Power of Graph RAGs
The rise of Graph RAGs underscores a simple but crucial correlation: improving graph construction and data quality directly enhances retrieval systems. To truly harness this power, teams must invest in understanding ontologies, building quality graphs, and leveraging both human expertise and advanced AI tools.
As we move forward, the interplay between Graph RAGs and ontology engineering will continue to shape the future of AI. Whether through adopting existing frameworks or exploring innovative uses of LLMs, the path to success lies in a deep commitment to data quality and domain understanding.
Have you explored these technologies in your work? Share your experiences and insights — and stay tuned for more discussions on ontology extraction and its role in AI advancements. Cheers to a year of innovation!
-
@ 8125b911:a8400883
2025-04-25 07:02:35In Nostr, all data is stored as events. Decentralization is achieved by storing events on multiple relays, with signatures proving the ownership of these events. However, if you truly want to own your events, you should run your own relay to store them. Otherwise, if the relays you use fail or intentionally delete your events, you'll lose them forever.
For most people, running a relay is complex and costly. To solve this issue, I developed nostr-relay-tray, a relay that can be easily run on a personal computer and accessed over the internet.
Project URL: https://github.com/CodyTseng/nostr-relay-tray
This article will guide you through using nostr-relay-tray to run your own relay.
Download
Download the installation package for your operating system from the GitHub Release Page.
| Operating System      | File Format                        |
| --------------------- | ---------------------------------- |
| Windows               | `nostr-relay-tray.Setup.x.x.x.exe` |
| macOS (Apple Silicon) | `nostr-relay-tray-x.x.x-arm64.dmg` |
| macOS (Intel)         | `nostr-relay-tray-x.x.x.dmg`       |
| Linux                 | You should know which one to use   |

Installation
Since this app isn’t signed, you may encounter some obstacles during installation. Once installed, an ostrich icon will appear in the status bar. Click on the ostrich icon, and you'll see a menu where you can click the "Dashboard" option to open the relay's control panel for further configuration.
macOS Users:
- On first launch, go to "System Preferences > Security & Privacy" and click "Open Anyway."
- If you encounter a "damaged" message, run the following command in the terminal to remove the restrictions:
```bash
sudo xattr -rd com.apple.quarantine /Applications/nostr-relay-tray.app
```
Windows Users:
- On the security warning screen, click "More Info > Run Anyway."
Connecting
By default, nostr-relay-tray is only accessible locally through `ws://localhost:4869/`, which makes it quite limited. Therefore, we need to expose it to the internet.

In the control panel, click the "Proxy" tab and toggle the switch. You will then receive a "Public address" that you can use to access your relay from anywhere. It's that simple.
Next, add this address to your relay list and position it as high as possible in the list. Most clients prioritize connecting to relays that appear at the top of the list, and relays lower in the list are often ignored.
Restrictions
Next, we need to set up some restrictions to prevent the relay from storing events that are irrelevant to you and wasting storage space. nostr-relay-tray allows for flexible and fine-grained configuration of which events to accept, but some of this is more complex and will not be covered here. If you're interested, you can explore this further later.
For now, I'll introduce a simple and effective strategy: WoT (Web of Trust). You can enable this feature in the "WoT & PoW" tab. Before enabling, you'll need to input your pubkey.
There's another important parameter, `Depth`, which represents the relationship depth between you and others. Someone you follow has a depth of 1, someone they follow has a depth of 2, and so on.

- Setting this parameter to 0 means your relay will only accept your own events.
- Setting it to 1 means your relay will accept events from you and the people you follow.
- Setting it to 2 means your relay will accept events from you, the people you follow, and the people they follow.
Currently, the maximum value for this parameter is 2.
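To illustrate what the depth cutoff means in practice, here is a rough sketch (not nostr-relay-tray's actual code) of how a relay could compute the set of accepted pubkeys with a breadth-first walk over follow lists; `fetch_follows` is a hypothetical helper that returns the pubkeys a given key follows:

```python
from collections import deque

def fetch_follows(pubkey: str) -> list[str]:
    """Hypothetical helper: return the pubkeys that `pubkey` follows
    (in practice, read from kind-3 contact-list events)."""
    raise NotImplementedError

def allowed_pubkeys(my_pubkey: str, max_depth: int) -> set[str]:
    """Breadth-first expansion of the web of trust up to `max_depth`."""
    allowed = {my_pubkey}
    queue = deque([(my_pubkey, 0)])
    while queue:
        pubkey, depth = queue.popleft()
        if depth == max_depth:
            continue
        for followed in fetch_follows(pubkey):
            if followed not in allowed:
                allowed.add(followed)
                queue.append((followed, depth + 1))
    return allowed

# Depth 0 -> only you; depth 1 -> you + follows; depth 2 -> + their follows.
```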
Conclusion
You've now successfully run your own relay and set a simple restriction to prevent it from storing irrelevant events.
If you encounter any issues during use, feel free to submit an issue on GitHub, and I'll respond as soon as possible.
Not your relay, not your events.
-
@ 5d4b6c8d:8a1c1ee3
The second round of the NBA Playoffs is historically weird. We'll try to figure out what's going wrong for all three of the presumed contenders. Which of the other East teams need to be considered contenders now? And with how bad the Warriors look without Steph (just how underrated is he?), do they still have any chance?
In Blok'd Shots, we'll talk about the NHL Star who got traded midseason and then beat his former team in the playoffs.
There was a big trade in the NFL. Plus, @grayruby wants to start a media beef with Colin Cowherd.
The MLB introduced a new stat. I have no idea what it is, but I'm looking forward to trying to understand it live on air.
And, of course, lots of contest and betting updates.
What do you want us to talk about?
originally posted at https://stacker.news/items/975474
-
@ a4a6b584:1e05b95b
2025-01-02 18:13:31The Four-Layer Framework
Layer 1: Zoom Out
Start by looking at the big picture. What’s the subject about, and why does it matter? Focus on the overarching ideas and how they fit together. Think of this as the 30,000-foot view—it’s about understanding the "why" and "how" before diving into the "what."
Example: If you’re learning programming, start by understanding that it’s about giving logical instructions to computers to solve problems.
- Tip: Keep it simple. Summarize the subject in one or two sentences and avoid getting bogged down in specifics at this stage.
Once you have the big picture in mind, it’s time to start breaking it down.
Layer 2: Categorize and Connect
Now it’s time to break the subject into categories—like creating branches on a tree. This helps your brain organize information logically and see connections between ideas.
Example: Studying biology? Group concepts into categories like cells, genetics, and ecosystems.
- Tip: Use headings or labels to group similar ideas. Jot these down in a list or simple diagram to keep track.
With your categories in place, you’re ready to dive into the details that bring them to life.
Layer 3: Master the Details
Once you’ve mapped out the main categories, you’re ready to dive deeper. This is where you learn the nuts and bolts—like formulas, specific techniques, or key terminology. These details make the subject practical and actionable.
Example: In programming, this might mean learning the syntax for loops, conditionals, or functions in your chosen language.
- Tip: Focus on details that clarify the categories from Layer 2. Skip anything that doesn’t add to your understanding.
Now that you’ve mastered the essentials, you can expand your knowledge to include extra material.
Layer 4: Expand Your Horizons
Finally, move on to the extra material—less critical facts, trivia, or edge cases. While these aren’t essential to mastering the subject, they can be useful in specialized discussions or exams.
Example: Learn about rare programming quirks or historical trivia about a language’s development.
- Tip: Spend minimal time here unless it’s necessary for your goals. It’s okay to skim if you’re short on time.
Pro Tips for Better Learning
1. Use Active Recall and Spaced Repetition
Test yourself without looking at notes. Review what you’ve learned at increasing intervals—like after a day, a week, and a month. This strengthens memory by forcing your brain to actively retrieve information.
2. Map It Out
Create visual aids like diagrams or concept maps to clarify relationships between ideas. These are particularly helpful for organizing categories in Layer 2.
3. Teach What You Learn
Explain the subject to someone else as if they’re hearing it for the first time. Teaching exposes any gaps in your understanding and helps reinforce the material.
4. Engage with LLMs and Discuss Concepts
Take advantage of tools like ChatGPT or similar large language models to explore your topic in greater depth. Use these tools to:
- Ask specific questions to clarify confusing points.
- Engage in discussions to simulate real-world applications of the subject.
- Generate examples or analogies that deepen your understanding.

Tip: Use LLMs as a study partner, but don’t rely solely on them. Combine these insights with your own critical thinking to develop a well-rounded perspective.
Get Started
Ready to try the Four-Layer Method? Take 15 minutes today to map out the big picture of a topic you’re curious about—what’s it all about, and why does it matter? By building your understanding step by step, you’ll master the subject with less stress and more confidence.
-
@ e691f4df:1099ad65
2025-04-24 18:56:12Viewing Bitcoin Through the Light of Awakening
Ankh & Ohm Capital’s Overview of the Psycho-Spiritual Nature of Bitcoin
Glossary:
I. Preface: The Logos of Our Logo
II. An Oracular Introduction
III. Alchemizing Greed
IV. Layers of Fractalized Thought
V. Permissionless Individuation
VI. Dispelling Paradox Through Resonance
VII. Ego Deflation
VIII. The Coin of Great Price
Preface: The Logos of Our Logo
Before we offer our lens on Bitcoin, it’s important to illuminate the meaning behind Ankh & Ohm’s name and symbol. These elements are not ornamental—they are foundational, expressing the cosmological principles that guide our work.
Our mission is to bridge the eternal with the practical. As a Bitcoin-focused family office and consulting firm, we understand capital not as an end, but as a tool—one that, when properly aligned, becomes a vehicle for divine order. We see Bitcoin not simply as a technological innovation but as an emanation of the Divine Logos—a harmonic expression of truth, transparency, and incorruptible structure. Both the beginning and the end, the Alpha and Omega.
The Ankh (☥), an ancient symbol of eternal life, is a key to the integration of opposites. It unites spirit and matter, force and form, continuity and change. It reminds us that capital, like Life, must not only be generative, but regenerative; sacred. Money must serve Life, not siphon from it.
The Ohm (Ω) holds a dual meaning. In physics, it denotes a unit of electrical resistance—the formative tension that gives energy coherence. In the Vedic tradition, Om (ॐ) is the primordial vibration—the sound from which all existence unfolds. Together, these symbols affirm a timeless truth: resistance and resonance are both sacred instruments of the Creator.
Ankh & Ohm, then, represents our striving for union, for harmony—between the flow of life and intentional structure, between incalculable abundance and measured restraint, between the lightbulb’s electrical impulse and its light-emitting filament. We stand at the threshold where intention becomes action, and where capital is not extracted, but cultivated in rhythm with the cosmos.
We exist to shepherd this transformation, as guides of this threshold—helping families, founders, and institutions align with a deeper order, where capital serves not as the prize, but as a pathway to collective Presence, Purpose, Peace and Prosperity.
An Oracular Introduction
Bitcoin is commonly understood as the first truly decentralized and secure form of digital money—a breakthrough in monetary sovereignty. But this view, while technically correct, is incomplete and spiritually shallow. Bitcoin is more than a tool for economic disruption. Bitcoin represents a mythic threshold: a symbol of the psycho-spiritual shift that many ancient traditions have long foretold.
For millennia, sages and seers have spoken of a coming Golden Age. In the Vedic Yuga cycles, in Plato’s Great Year, in the Eagle and Condor prophecies of the Americas—there exists a common thread: that humanity will emerge from darkness into a time of harmony, cooperation, and clarity. That the veil of illusion (maya, materiality) will thin, and reality will once again become transparent to the transcendent. In such an age, systems based on scarcity, deception, and centralization fall away. A new cosmology takes root—one grounded in balance, coherence, and sacred reciprocity.
But we must ask—how does such a shift happen? How do we cross from the age of scarcity, fear, and domination into one of coherence, abundance, and freedom?
One possible answer lies in the alchemy of incentive.
Bitcoin operates not just on the rules of computer science or Austrian economics, but on something far more old and subtle: the logic of transformation. It transmutes greed—a base instinct rooted in scarcity—into cooperation, transparency, and incorruptibility.
In this light, Bitcoin becomes more than code—it becomes a psychoactive protocol, one that rewires human behavior by aligning individual gain with collective integrity. It is not simply a new form of money. It is a new myth of value. A new operating system for human consciousness.
Bitcoin does not moralize. It harmonizes. It transforms the instinct for self-preservation into a pathway for planetary coherence.
Alchemizing Greed
At the heart of Bitcoin lies the ancient alchemical principle of transmutation: that which is base may be refined into gold.
Greed, long condemned as a vice, is not inherently evil. It is a distorted longing. A warped echo of the drive to preserve life. But in systems built on scarcity and deception, this longing calcifies into hoarding, corruption, and decay.
Bitcoin introduces a new game. A game with memory. A game that makes deception inefficient and truth profitable. It does not demand virtue—it encodes consequence. Its design does not suppress greed; it reprograms it.
In traditional models, game theory often illustrates the fragility of trust. The Prisoner’s Dilemma reveals how self-interest can sabotage collective well-being. But Bitcoin inverts this. It creates an environment where self-interest and integrity converge—where the most rational action is also the most truthful.
Its ledger, immutable and transparent, exposes manipulation for what it is: energetically wasteful and economically self-defeating. Dishonesty burns energy and yields nothing. The network punishes incoherence, not by decree, but by natural law.
This is the spiritual elegance of Bitcoin: it does not suppress greed—it transmutes it. It channels the drive for personal gain into the architecture of collective order. Miners compete not to dominate, but to validate. Nodes collaborate not through trust, but through mathematical proof.
This is not austerity. It is alchemy.
Greed, under Bitcoin, is refined. Tempered. Re-forged into a generative force—no longer parasitic, but harmonic.
Layers of Fractalized Thought Fragments
All living systems are layered. So is the cosmos. So is the human being. So is a musical scale.
At its foundation lies the timechain—the pulsing, incorruptible record of truth. Like the heart, it beats steadily. Every block, like a pulse, affirms its life through continuity. The difficulty adjustment—Bitcoin’s internal calibration—functions like heart rate variability, adapting to pressure while preserving coherence.
Above this base layer is the Lightning Network—a second layer facilitating rapid, efficient transactions. It is the nervous system: transmitting energy, reducing latency, enabling real-time interaction across a distributed whole.
Beyond that, emerging tools like Fedimint and Cashu function like the capillaries—bringing vitality to the extremities, to those underserved by legacy systems. They empower the unbanked, the overlooked, the forgotten. Privacy and dignity in the palms of those the old system refused to see.
And then there is NOSTR—the decentralized protocol for communication and creation. It is the throat chakra, the vocal cords of the “freedom-tech” body. It reclaims speech from the algorithmic overlords, making expression sovereign once more. It is also the reproductive system, as it enables the propagation of novel ideas and protocols in fertile, uncensorable soil.
Each layer plays its part. Not in hierarchy, but in harmony. In holarchy. Bitcoin and other open source protocols grow not through exogenous command, but through endogenous coherence. Like cells in an organism. Like a song.
Imagine the cell as a piece of glass from a shattered holographic plate—from a single shard, the whole perspectival, moving image can be reconstructed. DNA isn’t only a logical script of base pairs, but an evolving, progressive song, its lyrics imbued with wise reflections on relationships. The nucleus sings, the cell responds—not by command, but by memory. Life is not imposed; it is expressed. A reflection of a hidden pattern.
Bitcoin chants this. Each node, a living cell, holds the full timechain—Truth distributed, incorruptible. Remove one, and the whole remains. This isn’t redundancy. It’s a revelation on the power of protection in Truth.
Consensus is communion. Verification becomes a sacred rite—Truth made audible through math.
Not just the signal; the song. A web of self-expression woven from Truth.
No center, yet every point alive with the whole. Like Indra’s Net, each reflects all. This is more than currency and information exchange. It is memory; a self-remembering Mind, unfolding through consensus and code. A Mind reflecting the Truth of reality at the speed of thought.
Heuristics are mental shortcuts—efficient, imperfect, alive. Like cells, they must adapt or decay. To become unbiased is to have self-balancing heuristics which carry feedback loops within them: they listen to the environment, mutate when needed, and survive by resonance with reality. Mutation is not error, but evolution. Its rules are simple, but their expression is dynamic.
What persists is not rigidity, but pattern.
To think clearly is not necessarily to be certain, but to dissolve doubt by listening, adjusting, and evolving thought itself.
To understand Bitcoin is simply to listen—patiently, clearly, as one would to a familiar rhythm returning.
Permissionless Individuation
Bitcoin is a path. One that no one can walk for you.
Said differently, it is not a passive act. It cannot be spoon-fed. Like a spiritual path, it demands initiation, effort, and the willingness to question inherited beliefs.
Because Bitcoin is permissionless, no one can be forced to adopt it. One must choose to engage it—compelled by need, interest, or intuition. Each person who embarks undergoes their own version of the hero’s journey.
Carl Jung called this process Individuation—the reconciliation of fragmented psychic elements into a coherent, mature Self. Bitcoin mirrors this: it invites individuals to confront the unconscious assumptions of the fiat paradigm, and to re-integrate their relationship to time, value, and agency.
In Western traditions—alchemy, Christianity, Kabbalah—the individual is sacred, and salvation is personal. In Eastern systems—Daoism, Buddhism, the Vedas—the self is ultimately dissolved into the cosmic whole. Bitcoin, in a paradoxical way, echoes both: it empowers the individual, while aligning them with a holistic, transcendent order.
To truly see Bitcoin is to allow something false to die. A belief. A habit. A self-concept.
In that death—a space opens for deeper connection with the Divine itSelf.
In that dissolution, something luminous is reborn.
After the passing, Truth becomes resurrected.
Dispelling Paradox Through Resonance
There is a subtle paradox encoded into the hero’s journey: each starts in solitude, yet the awakening affects the collective.
No one can be forced into understanding Bitcoin. Like a spiritual truth, it must be seen. And yet, once seen, it becomes nearly impossible to unsee—and easier for others to glimpse. The pattern catches.
This phenomenon mirrors the concept of morphic resonance, as proposed and empirically tested by biologist Rupert Sheldrake. Once a critical mass of individuals begins to embody a new behavior or awareness, it becomes easier—instinctive—for others to follow suit. Like the proverbial hundredth monkey who begins washing fruit in seawater: suddenly, monkeys across islands begin doing the same—without ever meeting.
When enough individuals embody a pattern, it ripples outward. Not through propaganda, but through field effect and wave propagation. It becomes accessible, instinctive, familiar—even across great distance.
Bitcoin spreads in this way. Not through centralized broadcast, but through subtle resonance. Each new node, each individual who integrates the protocol into their life, strengthens the signal for others. The protocol doesn’t shout; it hums, oscillates and vibrates——persistently, coherently, patiently.
One awakens. Another follows. The current builds. What was fringe becomes familiar. What was radical becomes obvious.
This is the sacred geometry of spiritual awakening. One awakens, another follows, and soon the fluidic current is strong enough to carry the rest. One becomes two, two become many, and eventually the many become One again. This tessellation reverberates through the human aura, not as ideology, but as perceivable pattern recognition.
Bitcoin’s most powerful marketing tool is truth. Its most compelling evangelist is reality. Its most unstoppable force is resonance.
Therefore, Bitcoin is not just financial infrastructure—it is psychic scaffolding. It is part of the subtle architecture through which new patterns of coherence ripple across the collective field.
The training wheels from which humanity learns to embody Peace and Prosperity.
Ego Deflation
The process of awakening is not linear, and its beginning is rarely gentle—it usually begins with disruption, with ego inflation and destruction.
To individuate is to shape a center; to recognize peripherals and create boundaries—to say, “I am.” But without integration, the ego tilts—collapsing into void or inflating into noise. Fiat reflects this pathology: scarcity hoarded, abundance simulated. Stagnation becomes disguised as safety, and inflation masquerades as growth.
In other words, to become whole, the ego must first rise—claiming agency, autonomy, and identity. However, when left unbalanced, it inflates, or implodes. It forgets its context. It begins to consume rather than connect. And so the process must reverse: what inflates must deflate.
In the fiat paradigm, this inflation is literal. More is printed, and ethos is diluted. Savings decay. Meaning erodes. Value is abstracted. The economy becomes bloated with inaudible noise. And like the psyche that refuses to confront its own shadow, it begins to collapse under the weight of its own illusions.
But under Bitcoin, time is honored. Value is preserved. Energy is not abstracted but grounded.
Bitcoin is inherently deflationary—in both economic and spiritual senses. With a fixed supply, it reveals what is truly scarce. Not money, not status—but the finite number of heartbeats we each carry.
To see Bitcoin is to feel that limit in one’s soul. To hold Bitcoin is to feel Time’s weight again. To sense the importance of Bitcoin is to feel the value of preserved, potential energy. It is to confront the reality that what matters cannot be printed, inflated, or faked. In this way, Bitcoin gently confronts the ego—not through punishment, but through clarity.
Deflation, rightly understood, is not collapse—it is refinement. It strips away illusion, bloat, and excess. It restores the clarity of essence.
Spiritually, this is liberation.
The Coin of Great Price
There is an ancient parable told by a wise man:
“The kingdom of heaven is like a merchant seeking fine pearls, who, upon finding one of great price, sold all he had and bought it.”
Bitcoin is such a pearl.
But the ledger is more than a chest full of treasure. It is a key to the heart of things.
It is not just software—it is sacrament.
A symbol of what cannot be corrupted. A mirror of divine order etched into code. A map back to the sacred center.
It reflects what endures. It encodes what cannot be falsified. It remembers what we forgot: that Truth, when aligned with form, becomes Light once again.
Its design is not arbitrary. It speaks the language of life itself—
The elliptic orbits of the planets mirrored in its cryptography,
The logarithmic spiral of the nautilus shell discloses its adoption rate,
The interconnectivity of mycelium in soil reflect the network of nodes in cyberspace,
A webbed breadth of neurons across synaptic space fires with each new confirmed transaction.
It is geometry in devotion. Stillness in motion.
It is the Logos clothed in protocol.
What this key unlocks is beyond external riches. It is the eternal gold within us.
Clarity. Sovereignty. The unshakeable knowing that what is real cannot be taken. That what is sacred was never for sale.
Bitcoin is not the destination.
It is the Path.
And we—when we are willing to see it—are the Temple it leads back to.
-
@ 40b9c85f:5e61b451
2025-04-24 15:27:02Introduction
Data Vending Machines (DVMs) have emerged as a crucial component of the Nostr ecosystem, offering specialized computational services to clients across the network. As defined in NIP-90, DVMs operate on an apparently simple principle: "data in, data out." They provide a marketplace for data processing where users request specific jobs (like text translation, content recommendation, or AI text generation).
While DVMs have gained significant traction, the current specification faces challenges that hinder widespread adoption and consistent implementation. This article explores some ideas on how we can apply the reflection pattern, a well established approach in RPC systems, to address these challenges and improve the DVM ecosystem's clarity, consistency, and usability.
The Current State of DVMs: Challenges and Limitations
The NIP-90 specification provides a broad framework for DVMs, but this flexibility has led to several issues:
1. Inconsistent Implementation
As noted by hzrd149 in "DVMs were a mistake", every DVM implementation tends to expect inputs in slightly different formats, even while ostensibly following the same specification. For example, a translation request DVM might expect an event ID in one particular format, while an LLM service could expect a "prompt" input that isn't even specified in NIP-90.
2. Fragmented Specifications
The DVM specification reserves a range of event kinds (5000-6000), each meant for different types of computational jobs. While creating sub-specifications for each job type is being explored as a possible solution for clarity, in a decentralized and permissionless landscape like Nostr, relying solely on specification enforcement won't be effective for creating a healthy ecosystem. A more comprehensible approach is needed that works with, rather than against, the open nature of the protocol.
3. Ambiguous API Interfaces
There's no standardized way for clients to discover what parameters a specific DVM accepts, which are required versus optional, or what output format to expect. This creates uncertainty and forces developers to rely on documentation outside the protocol itself, if such documentation exists at all.
The Reflection Pattern: A Solution from RPC Systems
The reflection pattern in RPC systems offers a compelling solution to many of these challenges. At its core, reflection enables servers to provide metadata about their available services, methods, and data types at runtime, allowing clients to dynamically discover and interact with the server's API.
In established RPC frameworks like gRPC, reflection serves as a self-describing mechanism where services expose their interface definitions and requirements. In MCP, reflection is used to expose the capabilities of the server, such as tools, resources, and prompts. Clients can learn about available capabilities without prior knowledge, and systems can adapt to changes without requiring rebuilds or redeployments. This standardized introspection creates a unified way to query service metadata, making tools like `grpcurl` possible without requiring precompiled stubs.

How Reflection Could Transform the DVM Specification
By incorporating reflection principles into the DVM specification, we could create a more coherent and predictable ecosystem. DVMs already implement some sort of reflection through the use of 'nip90params', which allow clients to discover some parameters, constraints, and features of the DVMs, such as whether they accept encryption, nutzaps, etc. However, this approach could be expanded to provide more comprehensive self-description capabilities.
1. Defined Lifecycle Phases
Similar to the Model Context Protocol (MCP), DVMs could benefit from a clear lifecycle consisting of an initialization phase and an operation phase. During initialization, the client and DVM would negotiate capabilities and exchange metadata, with the DVM providing a JSON schema containing its input requirements. nip-89 (or other) announcements can be used to bootstrap the discovery and negotiation process by providing the input schema directly. Then, during the operation phase, the client would interact with the DVM according to the negotiated schema and parameters.
2. Schema-Based Interactions
Rather than relying on rigid specifications for each job type, DVMs could self-advertise their schemas. This would allow clients to understand which parameters are required versus optional, what type validation should occur for inputs, what output formats to expect, and what payment flows are supported. By internalizing the input schema of the DVMs they wish to consume, clients gain clarity on how to interact effectively.
3. Capability Negotiation
Capability negotiation would enable DVMs to advertise their supported features, such as encryption methods, payment options, or specialized functionalities. This would allow clients to adjust their interaction approach based on the specific capabilities of each DVM they encounter.
Implementation Approach
While building DVMCP, I realized that the RPC reflection pattern used there could be beneficial for constructing DVMs in general. Since DVMs already follow an RPC style for their operation, and reflection is a natural extension of this approach, it could significantly enhance and clarify the DVM specification.
A reflection-enhanced DVM protocol could work as follows (see the sketch after this list):

1. Discovery: Clients discover DVMs through existing NIP-89 application handlers; input schemas could also be advertised in NIP-89 announcements, making the second step unnecessary.
2. Schema Request: Clients request the DVM's input schema for the specific job type they're interested in.
3. Validation: Clients validate their request against the provided schema before submission.
4. Operation: The job proceeds through the standard NIP-90 flow, but with clearer expectations on both sides.
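As a minimal sketch of steps 2-3, suppose a translation DVM advertises the following JSON Schema (the schema itself is illustrative, not part of NIP-90); a client can then validate a request locally with the `jsonschema` library before publishing the job event:

```python
import jsonschema

# Input schema the DVM advertises for its job type (illustrative).
dvm_input_schema = {
    "type": "object",
    "properties": {
        "input": {"type": "string", "description": "event id or raw text"},
        "source_lang": {"type": "string"},
        "target_lang": {"type": "string"},
    },
    "required": ["input", "target_lang"],
    "additionalProperties": False,
}

# A client request built from user input.
request = {
    "input": "note1...",
    "target_lang": "es",
}

try:
    jsonschema.validate(instance=request, schema=dvm_input_schema)
    # Request is well-formed; proceed to build and publish the NIP-90 event.
except jsonschema.ValidationError as err:
    # Fail fast locally instead of burning a job request on a malformed input.
    print(f"Invalid request: {err.message}")
```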
Parallels with Other Protocols
This approach has proven successful in other contexts. The Model Context Protocol (MCP) implements a similar lifecycle with capability negotiation during initialization, allowing any client to communicate with any server as long as they adhere to the base protocol. MCP and DVM protocols share fundamental similarities, both aim to expose and consume computational resources through a JSON-RPC-like interface, albeit with specific differences.
gRPC's reflection service similarly allows clients to discover service definitions at runtime, enabling generic tools to work with any gRPC service without prior knowledge. In the REST API world, OpenAPI/Swagger specifications document interfaces in a way that makes them discoverable and testable.
DVMs would benefit from adopting these patterns while maintaining the decentralized, permissionless nature of Nostr.
Conclusion
I am not attempting to rewrite the DVM specification; rather, I am exploring some ideas that could help the ecosystem improve incrementally, reducing fragmentation and making it more comprehensible. By allowing DVMs to self-describe their interfaces, we could maintain the flexibility that makes Nostr powerful while providing the structure needed for interoperability.
For developers building DVM clients or libraries, this approach would simplify consumption by providing clear expectations about inputs and outputs. For DVM operators, it would establish a standard way to communicate their service's requirements without relying on external documentation.
I am currently developing DVMCP following these patterns. Of course, DVMs and MCP servers have different details; MCP includes capabilities such as tools, resources, and prompts on the server side, as well as 'roots' and 'sampling' on the client side, creating a bidirectional way to consume capabilities. In contrast, DVMs typically function similarly to MCP tools, where you call a DVM with an input and receive an output, with each job type representing a different categorization of the work performed.
Without further ado, I hope this article has provided some insight into the potential benefits of applying the reflection pattern to the DVM specification.
-
@ fe32298e:20516265
2024-12-16 20:59:13Today I learned how to install NVapi to monitor my GPUs in Home Assistant.
NVApi is a lightweight API designed for monitoring NVIDIA GPU utilization and enabling automated power management. It provides real-time GPU metrics, supports integration with tools like Home Assistant, and offers flexible power management and PCIe link speed management based on workload and thermal conditions.
- GPU Utilization Monitoring: Utilization, memory usage, temperature, fan speed, and power consumption.
- Automated Power Limiting: Adjusts power limits dynamically based on temperature thresholds and total power caps, configurable per GPU or globally.
- Cross-GPU Coordination: Total power budget applies across multiple GPUs in the same system.
- PCIe Link Speed Management: Controls minimum and maximum PCIe link speeds with idle thresholds for power optimization.
- Home Assistant Integration: Uses the built-in RESTful platform and template sensors.
Getting the Data
```bash
sudo apt install golang-go
git clone https://github.com/sammcj/NVApi.git
cd NVApi
go run main.go -port 9999 -rate 1
curl http://localhost:9999/gpu
```
Response for a single GPU:
[ { "index": 0, "name": "NVIDIA GeForce RTX 4090", "gpu_utilisation": 0, "memory_utilisation": 0, "power_watts": 16, "power_limit_watts": 450, "memory_total_gb": 23.99, "memory_used_gb": 0.46, "memory_free_gb": 23.52, "memory_usage_percent": 2, "temperature": 38, "processes": [], "pcie_link_state": "not managed" } ]
Response for multiple GPUs:
[ { "index": 0, "name": "NVIDIA GeForce RTX 3090", "gpu_utilisation": 0, "memory_utilisation": 0, "power_watts": 14, "power_limit_watts": 350, "memory_total_gb": 24, "memory_used_gb": 0.43, "memory_free_gb": 23.57, "memory_usage_percent": 2, "temperature": 36, "processes": [], "pcie_link_state": "not managed" }, { "index": 1, "name": "NVIDIA RTX A4000", "gpu_utilisation": 0, "memory_utilisation": 0, "power_watts": 10, "power_limit_watts": 140, "memory_total_gb": 15.99, "memory_used_gb": 0.56, "memory_free_gb": 15.43, "memory_usage_percent": 3, "temperature": 41, "processes": [], "pcie_link_state": "not managed" } ]
Start at Boot
Create `/etc/systemd/system/nvapi.service`:

```ini
[Unit]
Description=Run NVApi
After=network.target

[Service]
Type=simple
Environment="GOPATH=/home/ansible/go"
WorkingDirectory=/home/ansible/NVapi
ExecStart=/usr/bin/go run main.go -port 9999 -rate 1
Restart=always
User=ansible
Environment="GPU_TEMP_CHECK_INTERVAL=5"
Environment="GPU_TOTAL_POWER_CAP=400"
Environment="GPU_0_LOW_TEMP=40"
Environment="GPU_0_MEDIUM_TEMP=70"
Environment="GPU_0_LOW_TEMP_LIMIT=135"
Environment="GPU_0_MEDIUM_TEMP_LIMIT=120"
Environment="GPU_0_HIGH_TEMP_LIMIT=100"
Environment="GPU_1_LOW_TEMP=45"
Environment="GPU_1_MEDIUM_TEMP=75"
Environment="GPU_1_LOW_TEMP_LIMIT=140"
Environment="GPU_1_MEDIUM_TEMP_LIMIT=125"
Environment="GPU_1_HIGH_TEMP_LIMIT=110"

[Install]
WantedBy=multi-user.target
```
Home Assistant
Add to Home Assistant `configuration.yaml` and restart HA (completely).

For a single GPU, this works:

```yaml
sensor:
  - platform: rest
    name: MYPC GPU Information
    resource: http://mypc:9999
    method: GET
    headers:
      Content-Type: application/json
    value_template: "{{ value_json[0].index }}"
    json_attributes:
      - name
      - gpu_utilisation
      - memory_utilisation
      - power_watts
      - power_limit_watts
      - memory_total_gb
      - memory_used_gb
      - memory_free_gb
      - memory_usage_percent
      - temperature
    scan_interval: 1 # seconds

  - platform: template
    sensors:
      mypc_gpu_0_gpu:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} GPU"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'gpu_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_memory:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Memory"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'memory_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_power:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Power"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'power_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_power_limit:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Power Limit"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'power_limit_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_temperature:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Temperature"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'temperature') }}"
        unit_of_measurement: "°C"
```
For multiple GPUs:

```yaml
rest:
  scan_interval: 1
  resource: http://mypc:9999
  sensor:
    - name: "MYPC GPU0 Information"
      value_template: "{{ value_json[0].index }}"
      json_attributes_path: "$.0"
      json_attributes:
        - name
        - gpu_utilisation
        - memory_utilisation
        - power_watts
        - power_limit_watts
        - memory_total_gb
        - memory_used_gb
        - memory_free_gb
        - memory_usage_percent
        - temperature
    - name: "MYPC GPU1 Information"
      value_template: "{{ value_json[1].index }}"
      json_attributes_path: "$.1"
      json_attributes:
        - name
        - gpu_utilisation
        - memory_utilisation
        - power_watts
        - power_limit_watts
        - memory_total_gb
        - memory_used_gb
        - memory_free_gb
        - memory_usage_percent
        - temperature

sensor:
  - platform: template
    sensors:
      mypc_gpu_0_gpu:
        friendly_name: "MYPC GPU0 GPU"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'gpu_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_memory:
        friendly_name: "MYPC GPU0 Memory"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'memory_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_power:
        friendly_name: "MYPC GPU0 Power"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'power_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_power_limit:
        friendly_name: "MYPC GPU0 Power Limit"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'power_limit_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_temperature:
        friendly_name: "MYPC GPU0 Temperature"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'temperature') }}"
        unit_of_measurement: "°C"
  - platform: template
    sensors:
      mypc_gpu_1_gpu:
        friendly_name: "MYPC GPU1 GPU"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'gpu_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_1_memory:
        friendly_name: "MYPC GPU1 Memory"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'memory_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_1_power:
        friendly_name: "MYPC GPU1 Power"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'power_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_1_power_limit:
        friendly_name: "MYPC GPU1 Power Limit"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'power_limit_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_1_temperature:
        friendly_name: "MYPC GPU1 Temperature"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'temperature') }}"
        unit_of_measurement: "°C"
```
Basic entity card:
```yaml
type: entities
entities:
  - entity: sensor.mypc_gpu_0_gpu
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_memory
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_power
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_power_limit
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_temperature
    secondary_info: last-updated
```
Ansible Role
```yaml
- name: install go
  become: true
  package:
    name: golang-go
    state: present

- name: git clone
  git:
    repo: "https://github.com/sammcj/NVApi.git"
    dest: "/home/ansible/NVapi"
    update: yes
    force: true

# go run main.go -port 9999 -rate 1

- name: install systemd service
  become: true
  copy:
    src: nvapi.service
    dest: /etc/systemd/system/nvapi.service

- name: Reload systemd daemons, enable, and restart nvapi
  become: true
  systemd:
    name: nvapi
    daemon_reload: yes
    enabled: yes
    state: restarted
```
-
@ 5d4b6c8d:8a1c1ee3
2025-05-08 22:57:55Using a discussion style post to get better formatting on the nostr cross-post.
The Fed Leaves Fed Funds Rate at 4.5% as Economic Storm Clouds Gather
by Ryan McMaken
The Fed is now hemmed in by a rising risk of stagflation. It doesn’t know where the economy is headed, or is unwilling to take a position. At this point, “hope for the best” is Fed policy.
originally posted at https://stacker.news/items/975059
-
@ 000002de:c05780a7
2025-05-08 16:14:37Just an observation that makes me chuckle.
originally posted at https://stacker.news/items/974829
-
@ b99efe77:f3de3616
🚦Traffic Light Control System🚦
This Petri net represents a traffic control protocol ensuring that two traffic lights alternate safely and are never both green at the same time.
```
petrinet
;start    ()                        -> greenLight1 redLight2
;toRed1   greenLight1               -> queue redLight1
;toGreen2 redLight2 queue           -> greenLight2
;toGreen1 queue redLight1           -> greenLight1
;toRed2   greenLight2               -> redLight2 queue
;stop     redLight1 queue redLight2 -> ()
```
-
@ b99efe77:f3de3616
2025-05-09 19:55:12🚦Traffic Light Control System🚦
This Petri net represents a traffic control protocol ensuring that two traffic lights alternate safely and are never both green at the same time.
```
petrinet
;start    ()                        -> greenLight1 redLight2
;toRed1   greenLight1               -> queue redLight1
;toGreen2 redLight2 queue           -> greenLight2
;toGreen1 queue redLight1           -> greenLight1
;toRed2   greenLight2               -> redLight2 queue
;stop     redLight1 queue redLight2 -> ()
```
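For readers who want to poke at the protocol, here is a rough sketch of the net as a token game in Python (a hand translation of the definition above, not the client's actual interpreter):

```python
# Each transition: name -> (tokens consumed, tokens produced).
transitions = {
    "start":    ({}, {"greenLight1": 1, "redLight2": 1}),
    "toRed1":   ({"greenLight1": 1}, {"queue": 1, "redLight1": 1}),
    "toGreen2": ({"redLight2": 1, "queue": 1}, {"greenLight2": 1}),
    "toGreen1": ({"queue": 1, "redLight1": 1}, {"greenLight1": 1}),
    "toRed2":   ({"greenLight2": 1}, {"redLight2": 1, "queue": 1}),
    "stop":     ({"redLight1": 1, "queue": 1, "redLight2": 1}, {}),
}

marking = {}  # empty net: only `start` is enabled

def enabled(name):
    needs, _ = transitions[name]
    return all(marking.get(p, 0) >= n for p, n in needs.items())

def fire(name):
    assert enabled(name), f"{name} is not enabled"
    needs, makes = transitions[name]
    for p, n in needs.items():
        marking[p] -= n
    for p, n in makes.items():
        marking[p] = marking.get(p, 0) + n

# Walk one full cycle: light 1 goes red, light 2 goes green, and back.
# Note that greenLight1 and greenLight2 never hold tokens at the same time.
for t in ["start", "toRed1", "toGreen2", "toRed2", "toGreen1"]:
    fire(t)
    print(t, {p: n for p, n in marking.items() if n})
```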
-
@ 6f6b50bb:a848e5a1
What would it mean to treat AI as a tool instead of a person?
Since the launch of ChatGPT, exploration has accelerated in two directions.
The first direction concerns technical capabilities. How large a model can we train? How well can it answer SAT questions? How efficiently can we deploy it?
The second direction concerns interaction design. How do we communicate with a model? How can we use it for useful work? What metaphor do we use to reason about it?
The first direction is widely pursued and enormously funded, and for good reason: progress in technical capabilities underlies every possible application. But the second is just as crucial to the field and holds enormous unknowns. We are only a few years into the era of large models. What are the odds that we have already figured out the best ways to use them?
I propose a new mode of interaction in which models play the role of computer applications (for example, phone apps): providing a graphical interface, interpreting user input, and updating their state. In this mode, instead of being an "agent" that uses a computer on behalf of a human, AI can provide a richer and more powerful computing environment for us to use.
Metaphors for interaction
At the heart of an interaction is a metaphor that guides a user's expectations of a system. The early days of computing took metaphors like "desktops", "typewriters", "spreadsheets", and "letters" and turned them into digital equivalents, letting users reason about their behavior. You can leave something on your desk and come back for it; you need an address to send a letter. As we developed cultural knowledge of these devices, the need for these particular metaphors faded, and with them the skeuomorphic interface designs that reinforced them. Like a trash can or a pencil, a computer is now a metaphor for itself.
The dominant metaphor for large models today is model-as-person. This is an effective metaphor because people have broad capabilities that we know intuitively. It implies that we can have a conversation with a model and ask it questions; that the model can collaborate with us on a document or a piece of code; that we can give it a task to carry out on its own, and it will return when it's done.
However, treating a model as a person profoundly limits how we think about interacting with it. Human interactions are inherently slow and linear, limited by bandwidth and the turn-taking nature of verbal communication. As we have all experienced, communicating complex ideas in conversation is difficult and lossy. When we want precision, we turn to tools instead, using direct manipulation and high-bandwidth visual interfaces to make diagrams, write code, and design CAD models. Because we conceive of models as people, we use them through slow conversations, even though they are perfectly capable of accepting fast, direct input and producing visual output. The metaphors we use constrain the experiences we build, and the model-as-person metaphor keeps us from exploring the full potential of large models.
For many use cases, and especially for productive work, I believe the future lies in another metaphor: model-as-computer.
Using an AI as a computer
Under the model-as-computer metaphor, we will interact with large models following the intuitions we have about computer applications (whether on desktop, tablet, or phone). Note that this does not mean the model will be a traditional app any more than the Windows desktop was a literal desk. "Computer application" will be a way for the model to represent itself to us. Instead of acting like a person, the model will act like a computer.
Acting like a computer means producing a graphical interface. In place of the linear, teletype-style stream of text that ChatGPT provides, a model-as-computer system will generate something resembling the interface of a modern application: buttons, sliders, tabs, images, charts, and all the rest. This addresses key limitations of the standard model-as-person chat interface:
- Discovery. A good tool suggests its uses. When the only interface is an empty text box, it is up to the user to figure out what to do and to understand the system's limits. The Edit sidebar in Lightroom is a great way to learn photo editing because it doesn't just tell you what this application can do to a photo, but what you might want to do. Similarly, a model-as-computer interface for DALL-E could surface new possibilities for your image generations.
- Efficiency. Direct manipulation is faster than describing a request in words. To continue the Lightroom example, it would be unthinkable to edit a photo by telling a person which sliders to move and by how much. It would take a whole day to ask for slightly lower exposure and slightly higher vibrance, just to see how it would look. In the model-as-computer metaphor, the model can create tools that let you communicate what you want more efficiently and therefore get things done faster.
Unlike a traditional app, this graphical interface is generated by the model on demand. This means every part of the interface you see is relevant to what you are doing at that moment, including the specific content of your work. It also means that if you want a broader or different interface, you can simply ask for it. You could ask DALL-E to produce some editable presets for its settings inspired by famous sketch artists. When you click the Leonardo da Vinci preset, it sets the sliders for highly detailed perspective drawings in black ink. If you click Charles Schulz, it selects low-detail 2D technicolor comics.
A protean bicycle for the mind
The model-as-person metaphor has a curious tendency to create distance between the user and the model, mirroring the communication gap between two people that can be narrowed but never fully closed. Because communicating in words is difficult and costly, people tend to split tasks among themselves into chunks that are large and as independent as possible. Model-as-person interfaces follow this pattern: it is not worth telling a model to add a return statement to your function when it is faster to write it yourself. With that communication overhead, model-as-person systems are most useful when they can do an entire chunk of work on their own. They do things for you.
This contrasts with how we interact with computers and other tools. Tools produce real-time visual feedback and are controlled through direct manipulation. They have such low communication overhead that there is no need to carve out an independent chunk of work. It makes more sense to keep the human in the loop and direct the tool moment by moment. Like seven-league boots, tools let you go farther with each step, but you are still the one doing the work. They let you do things faster.
Consider the task of building a website using a large model. With today's interfaces, you might treat the model as a contractor or a collaborator. You would try to write down in words as much as possible about how you want the site to look, what you want it to say, and what features you want it to have. The model would generate a first draft, you would run it, and then you would give feedback. "Make the logo a bit bigger," you would say, and "center that first hero image," and "there needs to be a login button in the header." To get exactly what you want, you will end up sending a very long list of ever more minute requests.
An alternative model-as-computer interaction would be different: instead of building the website, the model would generate an interface for you to build it, where every user input to that interface queries the large model under the hood. Perhaps when you describe your needs it would create an interface with a sidebar and a preview window. At first the sidebar contains only a few layout sketches you can choose as a starting point. You can click each one, and the model writes the HTML for a web page using that layout and displays it in the preview window. Now that you have a page to work with, the sidebar gains additional options that affect the page globally, like font pairings and color schemes. The preview acts as a WYSIWYG editor, letting you grab elements and move them, edit their contents, and so on. Supporting all of this is the model, which sees these user actions and rewrites the page to match the changes. Because the model can generate an interface to help the two of you communicate more efficiently, you can exercise more control over the final product in less time.
The model-as-computer metaphor encourages us to think of the model as a tool to interact with in real time rather than a collaborator to hand tasks to. Instead of replacing an intern or a tutor, it can be a kind of protean bicycle for the mind, one that is always custom-built exactly for you and the terrain you intend to cross.
A new paradigm for computing?
Models that can generate interfaces on demand are an entirely new frontier in computing. They may be a whole new paradigm, in how they short-circuit the existing application model. Giving end users the power to create and modify apps on the fly fundamentally changes how we interact with computers. In place of a single static application built by a developer, a model will generate an application tailored to the user and their immediate needs. In place of business logic implemented in code, the model will interpret the user's input and update the user interface. It is even possible that this kind of generative interface will replace the operating system entirely, generating and managing interfaces and windows on the fly as needed.
At first, generative interfaces will be a toy, useful only for creative exploration and a few other niche applications. After all, nobody would want an email app that occasionally sends emails to your ex and lies about your inbox. But gradually the models will improve. Even as they push further into the space of entirely new experiences, they will slowly become reliable enough to be used for real work.
Small pieces of this future already exist. Years ago Jonas Degrave showed that ChatGPT could do a good simulation of a Linux command line. Similarly, websim.ai uses an LLM to generate websites on demand as you browse them. Oasis, GameNGen, and DIAMOND train action-conditioned video models on individual video games, letting you play, say, Doom inside a large model. And Genie 2 generates playable video games from text prompts. Generative interfaces may still sound like a crazy idea, but they're not that crazy.
There are enormous open questions about what all this will look like. Where will generative interfaces first be useful? How will we share and distribute the experiences we create in collaboration with the model, if they exist only as the context of a large model? Would we even want to? What new kinds of experiences will become possible? How will all this work in practice? Will models generate interfaces as code, or render raw pixels directly?
I don't know these answers yet. We'll have to experiment and find out!
Translated from: https://willwhitney.com/computing-inside-ai.html
-
-
@ 000002de:c05780a7
2025-05-08 14:55:34There are so many projects in the bitcoin space that deserve praise, but I just wanna shout out Cashu.me. It's a browser-based Cashu (eCash / Lightning) wallet. It can be very handy when you need a wallet but don't wanna download yet another app. I hadn't used it in a very long time and decided to try it out again the other day. It's really well done.
As with other Cashu wallets, you need to select a mint and back up your key phrase, but it is very simple to get started.
If you wanna learn more about Cashu, check out Cashu.space
Two other good wallets that support Cashu.
originally posted at https://stacker.news/items/974759
-
@ 6e64b83c:94102ee8
2025-04-23 20:23:34How to Run Your Own Nostr Relay on Android with Cloudflare Domain
Prerequisites
- Install Citrine on your Android device:
  - Visit https://github.com/greenart7c3/Citrine/releases
  - Download the latest release using:
    - zap.store
    - Obtainium
    - F-Droid
    - Or download the APK directly
  - Note: You may need to enable "Install from Unknown Sources" in your Android settings
- Domain requirements:
  - Purchase a domain if you don't have one
  - Transfer your domain to Cloudflare if it's not already there (for free SSL certificates and cloudflared support)
- Tools to use:
- nak (the nostr army knife):
- Download from https://github.com/fiatjaf/nak/releases
- Installation steps:
- For Linux/macOS:

```bash
# Download the appropriate version for your system
wget https://github.com/fiatjaf/nak/releases/latest/download/nak-linux-amd64   # for Linux
# or
wget https://github.com/fiatjaf/nak/releases/latest/download/nak-darwin-amd64  # for macOS

# Make it executable
chmod +x nak-*

# Move to a directory in your PATH
sudo mv nak-* /usr/local/bin/nak
```
- For Windows:

```batch
:: Download the Windows version
curl -L -o nak.exe https://github.com/fiatjaf/nak/releases/latest/download/nak-windows-amd64.exe

:: Move to a directory in your PATH (e.g., C:\Windows)
move nak.exe C:\Windows\nak.exe
```
- Verify installation:

```bash
nak --version
```
Setting Up Citrine
- Open the Citrine app
- Start the server
- You'll see it running on `ws://127.0.0.1:4869` (local network only)
- Go to settings and paste your npub into the "Accept events signed by" input box and press the + button. This prevents others from publishing events to your personal relay.
Installing Required Tools
- Install Termux from Google Play Store
- Open Termux and run:
```bash
pkg update && pkg install wget
wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-arm64.deb
dpkg -i cloudflared-linux-arm64.deb
```
Cloudflare Authentication
- Run the authentication command:
```bash
cloudflared tunnel login
```
- Follow the instructions:
- Copy the provided URL to your browser
- Log in to your Cloudflare account
- If the URL expires, copy it again after logging in
Creating the Tunnel
- Create a new tunnel:
```bash
cloudflared tunnel create <TUNNEL_NAME>
```
- Choose any name you prefer for your tunnel
- Copy the tunnel ID after creating the tunnel
- Create and configure the tunnel config:

```bash
touch ~/.cloudflared/config.yml
nano ~/.cloudflared/config.yml
```
- Add this configuration (replace the placeholders, including `<TUNNEL_ID>`, with your values):

```yaml
tunnel: <TUNNEL_ID>
credentials-file: /data/data/com.termux/files/home/.cloudflared/<TUNNEL_ID>.json
ingress:
  - hostname: nostr.yourdomain.com
    service: ws://localhost:4869
  - service: http_status:404
```
- Note: In nano editor: `CTRL+O` then Enter to save, `CTRL+X` to exit
- Note: Check the credentials file path in the logs
- Validate your configuration:

```bash
cloudflared tunnel validate
```
- Start the tunnel (use the tunnel name you created earlier):

```bash
cloudflared tunnel run <TUNNEL_NAME>
```
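One step this guide doesn't show explicitly is pointing DNS at the tunnel. If `nostr.yourdomain.com` doesn't resolve after the tunnel starts, cloudflared can create the CNAME record for you; this assumes your domain is on Cloudflare as set up earlier, and uses the example hostname from the config above:

```bash
cloudflared tunnel route dns <TUNNEL_NAME> nostr.yourdomain.com
```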
Preventing Android from Killing the Tunnel
Run these commands to maintain tunnel stability:
```bash
date && apt install termux-tools && termux-setup-storage && termux-wake-lock
echo "nameserver 1.1.1.1" > $PREFIX/etc/resolv.conf
```
Tip: You can open multiple Termux sessions by swiping from the left edge of the screen while keeping your tunnel process running.
Updating Your Outbox Model Relays
Once your relay is running and accessible via your domain, you'll want to update your relay list in the Nostr network. This ensures other clients know about your relay and can connect to it.
Decoding npub (Public Key)
Private keys (nsec) and public keys (npub) are encoded in bech32 format, which includes:
- A prefix (like nsec1, npub1, etc.)
- The encoded data
- A checksum

This format makes keys:
- Easy to distinguish
- Hard to copy incorrectly
However, most tools require these keys in hexadecimal (hex) format.
To decode an npub string to its hex format:
```bash
nak decode nostr:npub1dejts0qlva8mqzjlrxqkc2tmvs2t7elszky5upxaf3jha9qs9m5q605uc4
```

Replace it with your own npub. The output will look like:

```json
{
  "pubkey": "6e64b83c1f674fb00a5f19816c297b6414bf67f015894e04dd4c657e94102ee8"
}
```
Copy the pubkey value in quotes.
Create a kind 10002 event with your relay list:
- Include your new relay with write permissions
- Include other relays you want to read from and write to; omit the third parameter to make a relay both read and write
Example format:
```json
{
  "kind": 10002,
  "tags": [
    ["r", "wss://your-relay-domain.com", "write"],
    ["r", "wss://eden.nostr.land/"],
    ["r", "wss://nos.lol/"],
    ["r", "wss://nostr.bitcoiner.social/"],
    ["r", "wss://nostr.mom/"],
    ["r", "wss://relay.primal.net/"],
    ["r", "wss://nostr.wine/", "read"],
    ["r", "wss://relay.damus.io/"],
    ["r", "wss://relay.nostr.band/"],
    ["r", "wss://relay.snort.social/"]
  ],
  "content": ""
}
```
Save it to a file called `event.json`.

Note: Add or remove any relays you want. To check your existing 10002 relays:
- Visit https://nostr.band/?q=by%3Anpub1dejts0qlva8mqzjlrxqkc2tmvs2t7elszky5upxaf3jha9qs9m5q605uc4+++kind%3A10002
- nostr.band is an indexing service; it probably has your relay list.
- Replace `npub1xxx` in the URL with your own npub
- Click "VIEW JSON" from the menu to see the raw event
- Or use the `nak` tool if you know the relays:

```bash
nak req -k 10002 -a <your-pubkey> wss://relay1.com wss://relay2.com
```

Replace `<your-pubkey>` with your public key in hex format (you can get it using `nak decode <your-npub>`).
- Sign and publish the event:
- Use a Nostr client that supports kind 10002 events
- Or use the `nak` command-line tool:

```bash
nak event --sec ncryptsec1... wss://relay1.com wss://relay2.com $(cat event.json)
```
Important Security Notes:
1. Never share your nsec (private key) with anyone
2. Consider using NIP-49 encrypted keys for better security
3. Never paste your nsec or private key into the terminal. The command will be saved in your shell history, exposing your private key. To clear the command history:
   - For bash: use `history -c`
   - For zsh: use `fc -W` to write history to file, then `fc -p` to read it back
   - Or manually edit your shell history file (e.g., `~/.zsh_history` or `~/.bash_history`)
4. If you're using zsh, use `fc -p` to prevent the next command from being saved to history
5. Or temporarily disable history before running sensitive commands:

```bash
unset HISTFILE
nak key encrypt ...
set HISTFILE
```
How to securely create a NIP-49 encrypted private key

```bash
# Read your private key (input will be hidden)
read -s SECRET

# Read your password (input will be hidden)
read -s PASSWORD

# encrypt command
echo "$SECRET" | nak key encrypt "$PASSWORD"

# copy and paste the ncryptsec1 text from the output
read -s ENCRYPTED
nak key decrypt "$ENCRYPTED"

# clear variables from memory
unset SECRET PASSWORD ENCRYPTED
```
On a Windows command line, to read from stdin and use the variables in `nak` commands, you can use a combination of `set /p` to read input and then use those variables in your command. Here's an example:

```batch
@echo off
set /p "SECRET=Enter your secret key: "
set /p "PASSWORD=Enter your password: "

echo %SECRET%| nak key encrypt %PASSWORD%

:: Clear the sensitive variables
set "SECRET="
set "PASSWORD="
```
If your key starts with `ncryptsec1`, the `nak` tool will securely prompt you for a password when using the `--sec` parameter, unless the command is used with a pipe or redirection (`<`, `>`, or `|`).

```bash
nak event --sec ncryptsec1... wss://relay1.com wss://relay2.com $(cat event.json)
```
- Verify the event was published:
- Check if your relay list is visible on other relays
- Use the `nak` tool to fetch your kind 10002 events:

```bash
nak req -k 10002 -a <your-pubkey> wss://relay1.com wss://relay2.com
```
- Testing your relay:
- Try connecting to your relay using different Nostr clients
- Verify you can both read from and write to your relay
- Check if events are being properly stored and retrieved
- Tip: Use multiple Nostr clients to test different aspects of your relay
Note: If anyone in the community has a more efficient method of doing things like updating outbox relays, please share your insights in the comments. Your expertise would be greatly appreciated!
-
@ f32184ee:6d1c17bf
2025-04-23 13:21:52Ads Fueling Freedom
Ross Ulbricht’s "Decentralize Social Media" painted a picture of a user-centric, decentralized future that transcended the limitations of platforms like the tech giants of today. Though focused on social media, his concept provided a blueprint for decentralized content systems writ large. The PROMO Protocol, designed by NextBlock while participating in Sovereign Engineering, embodies this blueprint in the realm of advertising, leveraging Nostr and Bitcoin’s Lightning Network to give individuals control, foster a multi-provider ecosystem, and ensure secure value exchange. In this way, Ulbricht’s 2021 vision can be seen as a prescient prediction of the PROMO Protocol’s structure. This is a testament to the enduring power of his ideas, now finding form in NextBlock’s innovative approach.
[Current Platform-Centric Paradigm, source: Ross Ulbricht's Decentralize Social Media]
Ulbricht’s Vision: A Decentralized Social Protocol
In his 2021 Medium article, Ulbricht proposed a revolutionary vision for a decentralized social protocol (DSP) to address the inherent flaws of centralized social media platforms, such as privacy violations and inconsistent content moderation. Writing from prison, Ulbricht argued that decentralization could empower users by giving them control over their own content and the value they create, while replacing single, monolithic platforms with a competitive ecosystem of interface providers, content servers, and advertisers. Though his focus was on social media, Ulbricht's ideas laid a conceptual foundation that strikingly predicts the structure of NextBlock's PROMO Protocol, a decentralized advertising system built on the Nostr protocol.
[A Decentralized Social Protocol (DSP), source: Ross Ulbricht's Decentralize Social Media]
Ulbricht’s Principles
Ulbricht's article outlines several key principles for his DSP:
* User Control: Users should own their content and dictate how their data and creations generate value, rather than being subject to the whims of centralized corporations.
* Decentralized Infrastructure: Instead of a single platform, multiple interface providers, content hosts, and advertisers interoperate, fostering competition and resilience.
* Privacy and Autonomy: Decentralized solutions for profile management, hosting, and interactions would protect user privacy and reduce reliance on unaccountable intermediaries.
* Value Creation: Users, not platforms, should capture the economic benefits of their contributions, supported by decentralized mechanisms for transactions.
These ideas were forward-thinking in 2021, envisioning a shift away from the centralized giants dominating social media at the time. While Ulbricht didn’t specifically address advertising protocols, his framework for decentralization and user empowerment extends naturally to other domains, like NextBlock’s open-source offering: the PROMO Protocol.
NextBlock’s Implementation of PROMO Protocol
The PROMO Protocol powers NextBlock's Billboard app, a decentralized advertising protocol built on Nostr, a simple, open protocol for decentralized communication. The PROMO Protocol reimagines advertising by:
* Empowering People: Individuals set their own ad prices (e.g., 500 sats/minute), giving them direct control over how their attention or space is monetized.
* Marketplace Dynamics: Advertisers set budgets and maximum bids, competing within a decentralized system where a 20% service fee ensures operational sustainability.
* Open-Source Flexibility: As an open-source protocol, it allows multiple developers to create interfaces or apps on top of it, avoiding the single-platform bottleneck Ulbricht critiqued.
* Secure Payments: Using Strike integration with the Bitcoin Lightning Network, NextBlock enables bot-resistant and intermediary-free transactions, aligning value transfer with each person's control.
This structure decentralizes advertising in a way that mirrors Ulbricht’s broader vision for social systems, with aligned principles showing a specific use case: monetizing attention on Nostr.
Aligned Principles
Ulbricht's 2021 article didn't explicitly predict the PROMO Protocol, but its foundational concepts align remarkably well with NextBlock's implementation of the protocol's design:
* Autonomy Over Value: Ulbricht argued that users should control their content and its economic benefits. In the PROMO Protocol, people dictate ad pricing, directly capturing the value of their participation, whether it's their time, influence, or digital space, rather than ceding it to a centralized ad network.
* Ecosystem of Providers: Ulbricht envisioned multiple providers replacing a single platform. The PROMO Protocol's open-source nature invites a similar diversity: anyone can build interfaces or tools on top of it, creating a competitive, decentralized advertising ecosystem rather than a walled garden.
* Decentralized Transactions: Ulbricht's DSP implied decentralized mechanisms for value exchange. NextBlock delivers this through the Bitcoin Lightning Network, ensuring that payments for ads are secure, instantaneous, and final, a practical realization of Ulbricht's call for user-controlled value flows.
* Privacy and Control: While Ulbricht emphasized privacy in social interactions, the PROMO Protocol is public by default. Individuals are fully aware of all data that they generate, since all Nostr messages are signed. All participants interact directly via Nostr.
[Blueprint Match, source NextBlock]
Who We Are
NextBlock is a US-based new media company reimagining digital ads for a decentralized future. Our founders, software and strategy experts, were hobbyist podcasters struggling to promote their work online without gaming the system. That sparked an idea: using new tech like Nostr and Bitcoin to build a decentralized attention market for people who value control and businesses seeking real connections.
Our first product, Billboard, is launching this June.
Open for All
Our model’s open-source! Check out the PROMO Protocol, built for promotion and attention trading. Anyone can join this decentralized ad network. Run your own billboard or use ours. This is a growing ecosystem for a new ad economy.
Our Vision
NextBlock wants to help build a new decentralized internet. Our revolutionary and transparent business model will bring honest revenue to companies hosting valuable digital spaces. Together, we will discover what our attention is really worth.
Read our Manifesto to learn more.
NextBlock is registered in Texas, USA.
-
@ 5d4b6c8d:8a1c1ee3
2025-05-08 01:22:05I've been thinking about how Predyx and other lightning based prediction markets might finance their operations, without undermining their core function of eliciting information from people.
The standard approach, of offering less-than-fair odds, guarantees long-run profitability (as long as you have enough customers), but it also creates a friction for participants that reduces the information value of their transactions. So, what are some less frictiony options for generating revenue?
Low hanging fruit
- Close markets in real-time: Rather than prespecifying a closing time for some markets, like sports, it's better to close the market at the moment the outcome is realized. This both prevents post hoc transactions and enables late stage transactions. This should be easily automatable (I say as someone with no idea how to do that), with the right resolution criteria.
- Round off shares: Shares and sats are discrete, so just make sure any necessary rounding is always in the house's favor.
- Set initial probabilities well: Use whatever external information is available to open markets as near to the "right" value as possible.
- Arbitrage: whenever markets are related to each other, make sure to resolve any illogical odds automatically
The point of these four is to avoid giving away free sats. None of them reduce productive use of the market. Keeping markets open up until the outcome is realized will probably greatly increase the number of transactions, since that's usually when the most information is coming in.
Third party support
- Ads are the most obvious form of third party revenue
- Sponsorships are the more interesting one: Allow sponsors to boost a market's visibility. This is similar to advertising, but it also capitalizes on the possibility of a market being of particular interest to someone.
- Charge for market creation: users should be able to create new markets (this will also enhance trade quantity and site traffic), but it should be costly to create a market. If prediction markets really provide higher quality information, then it's reasonable to charge for it.
- Arbitrage: Monitor external odds and whenever a gain can be locked in, place the bets (buy the shares) that guarantee a gain.
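A quick sketch of what "locking in a gain" means in practice; the prices here are made-up illustrations, not real quotes from any venue:

```python
# Two venues quote complementary outcomes of the same event. If the
# implied probabilities sum to less than 1, buying both sides
# guarantees a profit no matter the outcome.
price_yes = 0.45  # cost (in sats) of a YES share paying 1 sat on venue A
price_no = 0.50   # cost (in sats) of a NO share paying 1 sat on venue B

total_cost = price_yes + price_no  # 0.95 sats per guaranteed 1-sat payout
if total_cost < 1:
    profit = 1 - total_cost
    print(f"Locked-in profit: {profit:.2f} sats per sat of payout")
```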
Bitcoin stuff
- Routing fees: The volume of sats moving into, out of, and being held in these markets will require a fairly large lightning node. Following some helpful tips to optimize fee revenue will generate some sats for logistical stuff that had to be done anyway.
- Treasury strategy: Take out loans against the revenue generated from all of the above and buy bitcoin: NGU -> repay with a fraction of the bitcoin, NGD -> repay with site revenue.
Bitcoin and Lightning Competitive Advantages
These aren't revenue ideas. They're just a couple of advantages lightning and bitcoin provide over fiat that should allow charging lower spreads than a traditional prediction market or sportsbook.
Traditional betting or prediction platforms are earning depreciating fiat, while a bitcoin based platform earns appreciating bitcoin. Traditional spreads must therefor be larger, in order to pull in the same real return. This also means the users' odds are worse on fiat platforms (again in real terms), even if the listed odds are the same, because their winnings will have depreciated by the time they receive them. Technically, this opens an opportunity to charge even higher spreads, but as mentioned in the intro, that would be bad for the information purposes of the market.
Lightning has much lower transactions costs than fiat transactions. So, even with tighter spreads, a lightning platform can net a better (nominal) return per transaction.
@mega_dreamer, I imagine most of those ideas were already on y'all's radar, and obviously you're already doing some, but I wanted to get them out of my head and onto digital paper. Hopefully, some of this will provide some useful food for thought.
originally posted at https://stacker.news/items/974372
-
@ 000002de:c05780a7
2025-05-06 20:24:08https://www.youtube.com/watch?v=CIMZH7DEPPQ
I really enjoy listening to non-technical people talk about technology when they get the bigger picture impacts and how it relates to our humanity.
I was reminded of this video by @k00b's post about an AI generated video of a victim forgiving his killer.
Piper says, "Computers are better at words than you. Than I". But they are machines. They cannot feel. They cannot have emotion.
This people honors me with their lips, but their heart is far from me
~ Matthew 15:8
Most of us hate it when people are fake with us. When they say things they don't mean. When they say things just to get something they want from us. Yet, we are quickly falling into this same trap with technology. Accepting it as real and human. I'm not suggesting we can't use technology but we have to be careful that we do not fall into this mechanical trap and forget what makes humans special.
We are emotional and spiritual beings. Though AI didn't exist during the times Jesus walked the earth read the verse above in a broader context.
Then Pharisees and scribes came to Jesus from Jerusalem and said, “Why do your disciples break the tradition of the elders? For they do not wash their hands when they eat.” He answered them, “And why do you break the commandment of God for the sake of your tradition? For God commanded, ‘Honor your father and your mother,’ and, ‘Whoever reviles father or mother must surely die.’ But you say, ‘If anyone tells his father or his mother, “What you would have gained from me is given to God,” he need not honor his father.’ So for the sake of your tradition you have made void the word of God. You hypocrites! Well did Isaiah prophesy of you, when he said:
“‘This people honors me with their lips,
but their heart is far from me;
in vain do they worship me,
teaching as doctrines the commandments of men.’”
Empty words. Words without meaning because they are not from a pure desire and love. You may not be a Christian but don't miss the significance of this. There is a value in being real. Sharing true emotion and heart. Don't fall into the trap of the culture of lies that surrounds us. I would rather hear true words with mistakes and less eloquence any day over something fake. I would rather share a real moment with the ones I love than a million fake moments. Embrace the messy imperfect but real world.
originally posted at https://stacker.news/items/973324
-
@ 2b24a1fa:17750f64
2025-05-09 19:50:20Anyone who wants to withdraw their own money makes themselves suspect these days, at least in Spain. Whoever intends to withdraw a larger sum of their own assets there must now announce it days in advance. The new rule cannot be circumvented by withdrawing several smaller amounts either. And those who disregard it are punished severely. Thus anyone who withdraws cash too frequently comes under the authorities' scrutiny.
https://soundcloud.com/radiomuenchen/barzahler-unter-generalverdacht-von-norbert-haring?
What has crept in at Spanish ATMs and bank counters could also make its way to Germany. In France, personal experience shows, the amount that can be withdrawn at an ATM varies from day to day and from account to account under conditions that remain unclear. The machine dictates whether, for example, 60, 200, or 400 euros may be withdrawn.
Listen to Norbert Häring's text on the Spanish scenario, titled "Barzahler unter Generalverdacht" ("Cash users under general suspicion"), which first appeared on his blog: norberthaering.de/news/spanien-bargeld/
Speaker: Karsten Troyke
-
@ 5d4b6c8d:8a1c1ee3
2025-05-06 19:49:39One of the best first rounds in recent memory just concluded. Let's recap our playoff contests.
Bracket Challenge
In our joint contest with Global Sports Central, @WeAreAllSatoshi is leading the way with 85 points, while me and some nostr jabroni are tied for second with 80 points.
The bad news is that they are slightly ahead of us, with an average score of 62 to our 60.8. We need to go back in time and make less stupid picks.
Points Challenge
With the Warriors victory, I jumped into a commanding lead over @grayruby. LA sure let most of you down. I say you hold @realBitcoinDog responsible for his beloved hometown's failures.
I still need @Car and @Coinsreporter to make their picks for this round. The only matchup they can choose from is Warriors (7) @ Timberwolves (6). Lucky for them, that's probably the best one to choose from.
| Stacker | Points |
|---------|--------|
| @Undisciplined | 25 |
| @grayruby | 24 |
| @Coinsreporter | 19 |
| @BlokchainB | 19 |
| @Carresan | 18 |
| @gnilma | 18 |
| @WeAreAllSatoshi | 12 |
| @fishious | 11 |
| @Car | 1 |
originally posted at https://stacker.news/items/973284
-
@ e6817453:b0ac3c39
2024-12-07 15:06:43I started a long series of articles about how to model different types of knowledge graphs in the relational model, which makes on-device memory models for AI agents possible.
We modeled directed graphs
Also, graphs of entities
We even model hypergraphs
Last time, we discussed why classical triple and simple knowledge graphs are insufficient for AI agents and complex memory, especially in the domain of time-aware or multi-model knowledge.
So why do we need metagraphs, and what kind of challenge could they help us to solve?
- complex and nested event and temporal context and temporal relations as edges
- multi-mode and multilingual knowledge
- human-like memory for AI agents that has multiple contexts and relations between knowledge in neuron-like networks
MetaGraphs
A meta graph is a concept that extends the idea of a graph by allowing edges to become graphs. Meta Edges connect a set of nodes, which could also be subgraphs. So, at some level, node and edge are pretty similar in properties but act in different roles in a different context.
Also, in some cases, edges could be referenced as nodes.
This approach enables the representation of more complex relationships and hierarchies than a traditional graph structure allows. Let's break down each term to better understand metagraphs and how they differ from hypergraphs and graphs.

Graph Basics
- A standard graph has a set of nodes (or vertices) and edges (connections between nodes).
- Edges are generally simple and typically represent a binary relationship between two nodes.
- For instance, an edge in a social network graph might indicate a “friend” relationship between two people (nodes).
Hypergraph
- A hypergraph extends the concept of an edge by allowing it to connect any number of nodes, not just two.
- Each connection, called a hyperedge, can link multiple nodes.
- This feature allows hypergraphs to model more complex relationships involving multiple entities simultaneously. For example, a hyperedge in a hypergraph could represent a project team, connecting all team members in a single relation.
- Despite its flexibility, a hypergraph doesn’t capture hierarchical or nested structures; it only generalizes the number of connections in an edge.
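Relationally, a hyperedge reduces to one edge row plus a membership table. Here is a minimal sketch in Python with sqlite3; the table and column names are my own illustrative choices, not from the article:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE nodes      (id INTEGER PRIMARY KEY, label TEXT NOT NULL);
CREATE TABLE hyperedges (id INTEGER PRIMARY KEY, label TEXT);
CREATE TABLE hyperedge_members (
    hyperedge_id INTEGER NOT NULL REFERENCES hyperedges(id),
    node_id      INTEGER NOT NULL REFERENCES nodes(id),
    PRIMARY KEY (hyperedge_id, node_id)
);
""")

# One "project team" hyperedge linking three members in a single relation
conn.executemany("INSERT INTO nodes (id, label) VALUES (?, ?)",
                 [(1, "alice"), (2, "bob"), (3, "carol")])
conn.execute("INSERT INTO hyperedges (id, label) VALUES (1, 'project_team')")
conn.executemany("INSERT INTO hyperedge_members VALUES (1, ?)", [(1,), (2,), (3,)])
```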
Metagraph
- A metagraph allows the edges to be graphs themselves. This means each edge can contain its own nodes and edges, creating nested, hierarchical structures.
- In a meta graph, an edge could represent a relationship defined by a graph. For instance, a meta graph could represent a network of organizations where each organization’s structure (departments and connections) is represented by its own internal graph and treated as an edge in the larger meta graph.
- This recursive structure allows metagraphs to model complex data with multiple layers of abstraction. They can capture multi-node relationships (as in hypergraphs) and detailed, structured information about each relationship.
Named Graphs and Graph of Graphs
As you can see, the structure of a metagraph is quite complex and can be hard to model in relational and classical RDF setups. That can leave you facing a lack of tools and software solutions for your problem.
If you need to model nested graphs, you could use the much simpler model of named graphs, which can take you quite far. The concept of the named graph came from the RDF community, which needed to group sets of triples. In this way, you form subgraphs inside an existing graph, and you can refer to a subgraph as a regular node. This setup simplifies complex graphs, introduces hierarchies, and even adds features and properties of hypergraphs while keeping a directed nature.
It looks complex, but it is not so hard to model with a slight modification of a directed graph.
A node can host a graph inside it. Let's reflect this fact with a location attribute on each node: if a node belongs to the main graph, we set its location to null (or introduce a dedicated main node; it is up to you). Nodes can have edges to nodes in different subgraphs. This structure allows any kind of graph nesting, and edges stay location-free.
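A minimal sketch of this named-graph layout in a relational store, written as Python with sqlite3; the table and column names are illustrative assumptions, not from any particular framework:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE nodes (
    id       INTEGER PRIMARY KEY,
    label    TEXT NOT NULL,
    -- NULL = main graph; otherwise the id of the node hosting this subgraph
    location INTEGER REFERENCES nodes(id)
);
CREATE TABLE edges (
    id        INTEGER PRIMARY KEY,
    source_id INTEGER NOT NULL REFERENCES nodes(id),
    target_id INTEGER NOT NULL REFERENCES nodes(id),
    label     TEXT
);
""")

# A main-graph node hosting a subgraph of two nodes
conn.execute("INSERT INTO nodes (id, label, location) VALUES (1, 'org', NULL)")
conn.execute("INSERT INTO nodes (id, label, location) VALUES (2, 'dept_a', 1)")
conn.execute("INSERT INTO nodes (id, label, location) VALUES (3, 'dept_b', 1)")
# Edges stay location-free and may cross subgraph boundaries
conn.execute("INSERT INTO edges (source_id, target_id, label) VALUES (2, 3, 'reports_to')")

# Fetch the subgraph hosted by node 1
print(conn.execute("SELECT id, label FROM nodes WHERE location = 1").fetchall())
```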
Meta Graphs in Relational Model
Let’s try to make several attempts to model different meta-graphs with some constraints.
Directed Metagraph where edges are not used as nodes and could not contain subgraphs
In this case, an edge always points to two sets of nodes. This introduces the overhead of creating a node set even for a single node. The model also permits empty node sets, which may require application-level constraints to prevent.
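A hedged sketch of that variant, again with illustrative names: every edge references an in-set and an out-set of nodes, so even a single endpoint costs one set row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE nodes     (id INTEGER PRIMARY KEY, label TEXT NOT NULL);
CREATE TABLE node_sets (id INTEGER PRIMARY KEY);
CREATE TABLE node_set_members (
    set_id  INTEGER NOT NULL REFERENCES node_sets(id),
    node_id INTEGER NOT NULL REFERENCES nodes(id),
    PRIMARY KEY (set_id, node_id)
);
CREATE TABLE meta_edges (
    id         INTEGER PRIMARY KEY,
    in_set_id  INTEGER NOT NULL REFERENCES node_sets(id),
    out_set_id INTEGER NOT NULL REFERENCES node_sets(id)
);
""")
# Nothing in this schema prevents an empty node set; as noted above,
# that constraint has to live at the application level.
```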
Directed Metagraph where edges are not used as nodes and could contain subgraphs
Adding a node set that models a subgraph located in an edge is easy, but it should be kept separate from the in-vertex and out-vertex sets.
I also do not see a direct need to attach subgraphs to a node, as we could just use a node set interchangeably, but it still could be a case.

Directed Metagraph where edges are used as nodes and could contain subgraphs
As you can see, we operate on node sets throughout. We could simply extend the node set to an element set that includes both node and edge IDs, but then we need UUIDs or some other strategy to keep node IDs and edge IDs distinct. We also risk collisions with ephemeral edges or ephemeral nodes when we want to change an element's role from node to edge or vice versa.
A full-scale metagraph model is way too complex for a relational database, so we need a better model.
Now we have more flexibility but lose structural constraints. We cannot express that an element must have an in-vertex, an out-vertex, or both; this type of constraint has been moved to the application level. The crucial question is also about query and retrieval needs.
Any metagraph model should be focused on your domain and needs, and should not be used in this raw form. We built it here for purely theoretical purposes.
-
@ b99efe77:f3de3616
2025-05-09 19:49:54🚦Traffic Light Control System🚦
This Petri net represents a traffic control protocol ensuring that two traffic lights alternate safely and are never both green at the same time.
```
petrinet
;start    ()                        -> greenLight1 redLight2
;toRed1   greenLight1               -> queue redLight1
;toGreen2 redLight2 queue           -> greenLight2
;toGreen1 queue redLight1           -> greenLight1
;toRed2   greenLight2               -> redLight2 queue
;stop     redLight1 queue redLight2 -> ()
```
-
@ b99efe77:f3de3616
2025-05-09 19:49:08🚦Traffic Light Control System🚦
This Petri net represents a traffic control protocol ensuring that two traffic lights alternate safely and are never both green at the same time.
```
petrinet
;start    ()                        -> greenLight1 redLight2
;toRed1   greenLight1               -> queue redLight1
;toGreen2 redLight2 queue           -> greenLight2
;toGreen1 queue redLight1           -> greenLight1
;toRed2   greenLight2               -> redLight2 queue
;stop     redLight1 queue redLight2 -> ()
```
-
@ e6817453:b0ac3c39
2024-12-07 15:03:06Hey folks! Today, let’s dive into the intriguing world of neurosymbolic approaches, retrieval-augmented generation (RAG), and personal knowledge graphs (PKGs). Together, these concepts hold much potential for bringing true reasoning capabilities to large language models (LLMs). So, let’s break down how symbolic logic, knowledge graphs, and modern AI can come together to empower future AI systems to reason like humans.
The Neurosymbolic Approach: What It Means?
Neurosymbolic AI combines two historically separate streams of artificial intelligence: symbolic reasoning and neural networks. Symbolic AI uses formal logic to process knowledge, similar to how we might solve problems or deduce information. On the other hand, neural networks, like those underlying GPT-4, focus on learning patterns from vast amounts of data — they are probabilistic statistical models that excel in generating human-like language and recognizing patterns but often lack deep, explicit reasoning.
While GPT-4 can produce impressive text, it’s still not very effective at reasoning in a truly logical way. Its foundation, transformers, allows it to excel in pattern recognition, but the models struggle with reasoning because, at their core, they rely on statistical probabilities rather than true symbolic logic. This is where neurosymbolic methods and knowledge graphs come in.
Symbolic Calculations and the Early Vision of AI
If we take a step back to the 1950s, the vision for artificial intelligence was very different. Early AI research was all about symbolic reasoning — where computers could perform logical calculations to derive new knowledge from a given set of rules and facts. Languages like Lisp emerged to support this vision, enabling programs to represent data and code as interchangeable symbols. Lisp was designed to be homoiconic, meaning it treated code as manipulatable data, making it capable of self-modification — a huge leap towards AI systems that could, in theory, understand and modify their own operations.
Lisp: The Early AI Language
Lisp, short for “LISt Processor,” was developed by John McCarthy in 1958, and it became the cornerstone of early AI research. Lisp’s power lay in its flexibility and its use of symbolic expressions, which allowed developers to create programs that could manipulate symbols in ways that were very close to human reasoning. One of the most groundbreaking features of Lisp was its ability to treat code as data, known as homoiconicity, which meant that Lisp programs could introspect and transform themselves dynamically. This ability to adapt and modify its own structure gave Lisp an edge in tasks that required a form of self-awareness, which was key in the early days of AI when researchers were exploring what it meant for machines to “think.”
Lisp was not just a programming language—it represented the vision for artificial intelligence, where machines could evolve their understanding and rewrite their own programming. This idea formed the conceptual basis for many of the self-modifying and adaptive algorithms that are still explored today in AI research. Despite its decline in mainstream programming, Lisp’s influence can still be seen in the concepts used in modern machine learning and symbolic AI approaches.
Prolog: Formal Logic and Deductive Reasoning
In the 1970s, Prolog was developed—a language focused on formal logic and deductive reasoning. Unlike Lisp, which is based on lambda calculus, Prolog operates on formal logic rules, allowing it to perform deductive reasoning and solve logical puzzles. This made Prolog an ideal candidate for expert systems that needed to follow a sequence of logical steps, such as medical diagnostics or strategic planning.
Prolog, like Lisp, allowed symbols to be represented, understood, and used in calculations, creating another homoiconic language that allows reasoning. Prolog’s strength lies in its rule-based structure, which is well-suited for tasks that require logical inference and backtracking. These features made it a powerful tool for expert systems and AI research in the 1970s and 1980s.
The language is declarative in nature, meaning that you define the problem, and Prolog figures out how to solve it. By using formal logic and setting constraints, Prolog systems can derive conclusions from known facts, making it highly effective in fields requiring explicit logical frameworks, such as legal reasoning, diagnostics, and natural language understanding. These symbolic approaches were later overshadowed during the AI winter — but the ideas never really disappeared. They just evolved.
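To make this concrete, here is a minimal sketch of that style of rule-based deduction, written in Python rather than Prolog and using invented facts: known facts plus rules, closed under forward chaining until a fixpoint is reached.

```python
# Prolog-flavored deduction in plain Python (illustrative only).
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def forward_chain(facts):
    derived = set(facts)
    while True:
        new = set()
        # rule 1: ancestor(X, Y) :- parent(X, Y).
        new |= {("ancestor", x, y) for (p, x, y) in derived if p == "parent"}
        # rule 2: ancestor(X, Z) :- ancestor(X, Y), parent(Y, Z).
        new |= {("ancestor", x, z)
                for (a, x, y) in derived if a == "ancestor"
                for (p, y2, z) in derived if p == "parent" and y2 == y}
        if new <= derived:
            return derived          # fixpoint reached, nothing new to derive
        derived |= new

print(("ancestor", "alice", "carol") in forward_chain(facts))  # True
```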
Solvers and Their Role in Complementing LLMs
One of the most powerful features of Prolog and similar logic-based systems is their use of solvers. Solvers are mechanisms that can take a set of rules and constraints and automatically find solutions that satisfy these conditions. This capability is incredibly useful when combined with LLMs, which excel at generating human-like language but need help with logical consistency and structured reasoning.
For instance, imagine a scenario where an LLM needs to answer a question involving multiple logical steps or a complex query that requires deducing facts from various pieces of information. In this case, a solver can derive valid conclusions based on a given set of logical rules, providing structured answers that the LLM can then articulate in natural language. This allows the LLM to retrieve information and ensure the logical integrity of its responses, leading to much more robust answers.
Solvers are also ideal for handling constraint satisfaction problems — situations where multiple conditions must be met simultaneously. In practical applications, this could include scheduling tasks, generating optimal recommendations, or even diagnosing issues where a set of symptoms must match possible diagnoses. Prolog’s solver capabilities and LLM’s natural language processing power can make these systems highly effective at providing intelligent, rule-compliant responses that traditional LLMs would struggle to produce alone.
By integrating neurosymbolic methods that utilize solvers, we can provide LLMs with a form of deductive reasoning that is missing from pure deep-learning approaches. This combination has the potential to significantly improve the quality of outputs for use-cases that require explicit, structured problem-solving, from legal queries to scientific research and beyond. Solvers give LLMs the backbone they need to not just generate answers but to do so in a way that respects logical rigor and complex constraints.
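As an illustration of the solver idea, here is a toy backtracking constraint solver in Python. It is a sketch of the concept, not a production system; the scheduling scenario and all names are made up.

```python
# Minimal constraint-satisfaction search an LLM could delegate to.
def solve(variables, domains, constraints, assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        candidate = {**assignment, var: value}
        if all(c(candidate) for c in constraints):   # prune violations early
            result = solve(variables, domains, constraints, candidate)
            if result:
                return result
    return None

# three talks, two rooms; talks A and B overlap in time, so different rooms
variables = ["A", "B", "C"]
domains = {v: ["room1", "room2"] for v in variables}
constraints = [lambda a: not ("A" in a and "B" in a) or a["A"] != a["B"]]
print(solve(variables, domains, constraints))
# {'A': 'room1', 'B': 'room2', 'C': 'room1'}
```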
Graph of Rules for Enhanced Reasoning
Another powerful concept that complements LLMs is using a graph of rules. A graph of rules is essentially a structured collection of logical rules that interconnect in a network-like structure, defining how various entities and their relationships interact. This structured network allows for complex reasoning and information retrieval, as well as the ability to model intricate relationships between different pieces of knowledge.
In a graph of rules, each node represents a rule, and the edges define relationships between those rules — such as dependencies or causal links. This structure can be used to enhance LLM capabilities by providing them with a formal set of rules and relationships to follow, which improves logical consistency and reasoning depth. When an LLM encounters a problem or a question that requires multiple logical steps, it can traverse this graph of rules to generate an answer that is not only linguistically fluent but also logically robust.
For example, in a healthcare application, a graph of rules might include nodes for medical symptoms, possible diagnoses, and recommended treatments. When an LLM receives a query regarding a patient’s symptoms, it can use the graph to traverse from symptoms to potential diagnoses and then to treatment options, ensuring that the response is coherent and medically sound. The graph of rules guides reasoning, enabling LLMs to handle complex, multi-step questions that involve chains of reasoning, rather than merely generating surface-level responses.
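A hedged sketch of that idea in Python, with an invented two-rule "graph of rules" for the healthcare example (the rule names and structure are mine, purely for illustration):

```python
# Nodes are rules; "next" edges say which rule may fire afterwards.
rules = {
    "fever+cough": {"then": "suspect_flu", "next": ["suspect_flu"]},
    "suspect_flu": {"then": "recommend_rest_fluids", "next": []},
}

def traverse(start, graph):
    path, frontier = [], [start]
    while frontier:
        rule = frontier.pop()
        path.append(graph[rule]["then"])     # record the conclusion
        frontier.extend(graph[rule]["next"])  # follow dependency edges
    return path

print(traverse("fever+cough", rules))
# ['suspect_flu', 'recommend_rest_fluids']
```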
Graphs of rules also enable modular reasoning, where different sets of rules can be activated based on the context or the type of question being asked. This modularity is crucial for creating adaptive AI systems that can apply specific sets of logical frameworks to distinct problem domains, thereby greatly enhancing their versatility. The combination of neural fluency with rule-based structure gives LLMs the ability to conduct more advanced reasoning, ultimately making them more reliable and effective in domains where accuracy and logical consistency are critical.
By implementing a graph of rules, LLMs are empowered to perform deductive reasoning alongside their generative capabilities, creating responses that are not only compelling but also logically aligned with the structured knowledge available in the system. This further enhances their potential applications in fields such as law, engineering, finance, and scientific research — domains where logical consistency is as important as linguistic coherence.
Enhancing LLMs with Symbolic Reasoning
Now, with LLMs like GPT-4 being mainstream, there is an emerging need to add real reasoning capabilities to them. This is where neurosymbolic approaches shine. Instead of pitting neural networks against symbolic reasoning, these methods combine the best of both worlds. The neural aspect provides language fluency and recognition of complex patterns, while the symbolic side offers real reasoning power through formal logic and rule-based frameworks.
Personal Knowledge Graphs (PKGs) come into play here as well. Knowledge graphs are data structures that encode entities and their relationships — they’re essentially semantic networks that allow for structured information retrieval. When integrated with neurosymbolic approaches, LLMs can use these graphs to answer questions in a far more contextual and precise way. By retrieving relevant information from a knowledge graph, they can ground their responses in well-defined relationships, thus improving both the relevance and the logical consistency of their answers.
Imagine combining an LLM with a graph of rules that allow it to reason through the relationships encoded in a personal knowledge graph. This could involve using deductive databases to form a sophisticated way to represent and reason with symbolic data — essentially constructing a powerful hybrid system that uses LLM capabilities for language fluency and rule-based logic for structured problem-solving.
My Research on Deductive Databases and Knowledge Graphs
I recently did some research on modeling knowledge graphs using deductive databases, such as DataLog — which can be thought of as a limited, data-oriented version of Prolog. What I’ve found is that it’s possible to use formal logic to model knowledge graphs, ontologies, and complex relationships elegantly as rules in a deductive system. Unlike classical RDF or traditional ontology-based models, which sometimes struggle with complex or evolving relationships, a deductive approach is more flexible and can easily support dynamic rules and reasoning.
Prolog and similar logic-driven frameworks can complement LLMs by handling the parts of reasoning where explicit rule-following is required. LLMs can benefit from these rule-based systems for tasks like entity recognition, logical inferences, and constructing or traversing knowledge graphs. We can even create a graph of rules that governs how relationships are formed or how logical deductions can be performed.
The future is really about creating an AI that is capable of both deep contextual understanding (using the powerful generative capacity of LLMs) and true reasoning (through symbolic systems and knowledge graphs). With the neurosymbolic approach, these AIs could be equipped not just to generate information but to explain their reasoning, form logical conclusions, and even improve their own understanding over time — getting us a step closer to true artificial general intelligence.
Why It Matters for LLM Employment
Using neurosymbolic RAG (retrieval-augmented generation) in conjunction with personal knowledge graphs could revolutionize how LLMs work in real-world applications. Imagine an LLM that understands not just language but also the relationships between different concepts — one that can navigate, reason, and explain complex knowledge domains by actively engaging with a personalized set of facts and rules.
This could lead to practical applications in areas like healthcare, finance, legal reasoning, or even personal productivity — where LLMs can help users solve complex problems logically, providing relevant information and well-justified reasoning paths. The combination of neural fluency with symbolic accuracy and deductive power is precisely the bridge we need to move beyond purely predictive AI to truly intelligent systems.
Let's explore these ideas further if you’re as fascinated by this as I am. Feel free to reach out, follow my YouTube channel, or check out some articles I’ll link below. And if you’re working on anything in this field, I’d love to collaborate!
Until next time, folks. Stay curious, and keep pushing the boundaries of AI!
-
@ b6dcdddf:dfee5ee7
2025-05-06 15:58:23You can now fund projects on Geyser using Credit Cards, Apple Pay, Bank Transfers, and more.
The best part: 🧾 You pay in fiat and ⚡️ the creator receives Bitcoin.
You heard it right! Let's dive in 👇
First, how does it work? For contributors, it's easy! Once the project creator has verified their identity, anyone can contribute with fiat methods. Simply go through the usual contribution flow and select 'Pay with Fiat'. The first contribution is KYC-free.
Why does this matter?
1. Many Bitcoiners don't want to spend their Bitcoin: 👉 Number go up (NgU) 👉 Capital gains taxes. With fiat contributions, there's no more excuse not to contribute to Bitcoin builders and creators!
2. Non-Bitcoin holders want to support projects too. If someone loves your mission but only has a debit card, they used to be stuck. Now they can back your Bitcoin project with familiar fiat tools, all through Geyser!
So, why swap fiat into Bitcoin? Because Bitcoin is borderless. Fiat payouts are limited by certain countries, banks, and red tape. By auto-swapping fiat to Bitcoin, we ensure:
🌍 Instant payouts to creators all around the world
⚡️ No delays or restrictions
💥 Every contribution is also a silent Bitcoin buy
How to enable fiat contributions
If you're a creator, it's easy:
- Go to your Dashboard → Wallet
- Click "Enable Fiat Contributions"
- Complete a quick ID verification (required by our payment provider)
✅ That's it — your project is now open to global fiat supporters.
Supporting Bitcoin adoption
At Geyser, our mission is to empower Bitcoin creators and builders. Adding fiat options amplifies that mission: it brings more people into the ecosystem while staying true to what we believe:
⚒️ Build on Bitcoin
🌱 Fund impactful initiatives
🌎 Enable global participation
**Support projects with fiat now!**
We've compiled a list of projects that currently have fiat contributions enabled. If you've been on the fence about supporting them because you didn't want to spend your Bitcoin, now's the time to make your first contribution!
Education - Citadel Dispatch: https://geyser.fund/project/citadel - @FREEMadeiraOrg: https://geyser.fund/project/freemadeira - @MyfirstBitcoin_: https://geyser.fund/project/miprimerbitcoin
Circular Economies - @BitcoinEkasi: https://geyser.fund/project/bitcoinekasi - Madagascar Bitcoin: https://geyser.fund/project/madagasbit - @BitcoinChatt : https://geyser.fund/project/bitcoinchatt - Uganda Gayaza BTC Market: https://geyser.fund/project/gayazabtcmarket
Activism - Education Bitcoin Channel: https://geyser.fund/project/streamingsats
Sports - The Sats Fighter Journey: https://geyser.fund/project/thesatsfighterjourney
Culture - Bitcoin Tarot Cards: https://geyser.fund/project/bitcointarotcard
originally posted at https://stacker.news/items/973003
-
@ 9bde4214:06ca052b
2025-04-22 18:13:37"It's gonna be permissionless or hell."
Gigi and gzuuus are vibing towards dystopia.
Books & articles mentioned:
- AI 2027
- DVMs were a mistake
- Careless People by Sarah Wynn-Williams
- Takedown by Laila Mickelwait
- The Ultimate Resource by Julian L. Simon
- Harry Potter by J.K. Rowling
- Momo by Michael Ende
In this dialogue:
- Pablo's Roo Setup
- Tech Hype Cycles
- AI 2027
- Prompt injection and other attacks
- Goose and DVMCP
- Cursor vs Roo Code
- Staying in control thanks to Amber and signing delegation
- Is YOLO mode here to stay?
- What agents to trust?
- What MCP tools to trust?
- What code snippets to trust?
- Everyone will run into the issues of trust and micropayments
- Nostr solves Web of Trust & micropayments natively
- Minimalistic & open usually wins
- DVMCP exists thanks to Totem
- Relays as Tamagochis
- Agents aren't nostr experts, at least not right now
- Fix a mistake once & it's fixed forever
- Giving long-term memory to LLMs
- RAG Databases signed by domain experts
- Human-agent hybrids & Chess
- Nostr beating heart
- Pluggable context & experts
- "You never need an API key for anything"
- Sats and social signaling
- Difficulty-adjusted PoW as a rate-limiting mechanism
- Certificate authorities and centralization
- No solutions to policing speech!
- OAuth and how it centralized
- Login with nostr
- Closed vs open-source models
- Tiny models vs large models
- The Minions protocol (Stanford paper)
- Generalist models vs specialized models
- Local compute & encrypted queries
- Blinded compute
- "In the eyes of the state, agents aren't people"
- Agents need identity and money; nostr provides both
- "It's gonna be permissionless or hell"
- We already have marketplaces for MCP stuff, code snippets, and other things
- Most great stuff came from marketplaces (browsers, games, etc)
- Zapstore shows that this is already working
- At scale, central control never works. There are plenty of scams and viruses in the app stores.
- Using nostr to archive your user-generated content
- HAVEN, blossom, novia
- The switcharoo from advertisements to training data
- What is Truth?
- What is Real?
- "We're vibing into dystopia"
- Who should be the arbiter of Truth?
- First Amendment & why the Logos is sacred
- Silicon Valley AI bros arrogantly dismiss wisdom and philosophy
- Suicide rates & the meaning crisis
- Are LLMs symbiotic or parasitic?
- The Amish got it right
- Are we gonna make it?
- Careless People by Sarah Wynn-Williams
- Takedown by Laila Mickelwait
- Harry Potter dementors & Momo's time thieves
- Facebook & Google as non-human (superhuman) agents
- Zapping as a conscious action
- Privacy and the internet
- Plausible deniability thanks to generative models
- Google Glass, glassholes, and Meta's Ray-Bans
- People crave realness
- Bitcoin is the realest money we ever had
- Nostr allows for real and honest expression
- How do we find out what's real?
- Constraints, policing, and chilling effects
- Jesus' plans for DVMCP
- Hzrd's article on how DVMs are broken (DVMs were a mistake)
- Don't believe the hype
- DVMs pre-date MCP tools
- Data Vending Machines were supposed to be stupid: put coin in, get stuff out.
- Self-healing vibe-coding
- IP addresses as scarce assets
- Atomic swaps and the ASS protocol
- More marketplaces, less silos
- The intensity of #SovEng and the last 6 weeks
- If you can vibe-code everything, why build anything?
- Time, the ultimate resource
- What are the LLMs allowed to think?
- Natural language interfaces are inherently dialogical
- Sovereign Engineering is dialogical too
-
@ e6817453:b0ac3c39
2024-12-07 14:54:46Introduction: Personal Knowledge Graphs and Linked Data
We will explore the world of personal knowledge graphs and discuss how they can be used to model complex information structures. Personal knowledge graphs aren’t just abstract collections of nodes and edges—they encode meaningful relationships, contextualizing data in ways that enrich our understanding of it. While the core structure might be a directed graph, we layer semantic meaning on top, enabling nuanced connections between data points.
The origin of knowledge graphs is deeply tied to concepts from linked data and the semantic web, ideas that emerged to better link scattered pieces of information across the web. This approach created an infrastructure where data islands could connect — facilitating everything from more insightful AI to improved personal data management.
In this article, we will explore how these ideas have evolved into tools for modeling AI’s semantic memory and look at how knowledge graphs can serve as a flexible foundation for encoding rich data contexts. We’ll specifically discuss three major paradigms: RDF (Resource Description Framework), property graphs, and a third way of modeling entities as graphs of graphs. Let’s get started.
Intro to RDF
The Resource Description Framework (RDF) has been one of the fundamental standards for linked data and knowledge graphs. RDF allows data to be modeled as triples: subject, predicate, and object. Essentially, you can think of it as a structured way to describe relationships: “X has a Y called Z.” For instance, “Berlin has a population of 3.5 million.” This modeling approach is quite flexible because RDF uses unique identifiers — usually URIs — to point to data entities, making linking straightforward and coherent.
RDFS, or RDF Schema, extends RDF to provide a basic vocabulary to structure the data even more. This lets us describe not only individual nodes but also relationships among types of data entities, like defining a class hierarchy or setting properties. For example, you could say that “Berlin” is an instance of a “City” and that cities are types of “Geographical Entities.” This kind of organization helps establish semantic meaning within the graph.
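As a concrete illustration, the rdflib Python library can express both layers. Assuming rdflib is installed, a sketch might look like this; the example.org vocabulary is made up for the example.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

# triple: "Berlin has a population of 3.5 million"
g.add((EX.Berlin, EX.population, Literal(3_500_000)))

# RDFS layer: Berlin is a City, and City is a subclass of GeographicalEntity
g.add((EX.Berlin, RDF.type, EX.City))
g.add((EX.City, RDFS.subClassOf, EX.GeographicalEntity))

print(g.serialize(format="turtle"))
```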
RDF and Advanced Topics
Lists and Sets in RDF
RDF also provides tools to model more complex data structures such as lists and sets, enabling the grouping of nodes. This extension makes it easier to model more natural, human-like knowledge, for example, describing attributes of an entity that may have multiple values. By adding RDF Schema and OWL (Web Ontology Language), you gain even more expressive power — being able to define logical rules or even derive new relationships from existing data.
Graph of Graphs
A significant feature of RDF is the ability to form complex nested structures, often referred to as graphs of graphs. This allows you to create “named graphs,” essentially subgraphs that can be independently referenced. For example, you could create a named graph for a particular dataset describing Berlin and another for a different geographical area. Then, you could connect them, allowing for more modular and reusable knowledge modeling.
Property Graphs
While RDF provides a robust framework, it’s not always the easiest to work with due to its heavy reliance on linking everything explicitly. This is where property graphs come into play. Property graphs are less focused on linking everything through triples and allow more expressive properties directly within nodes and edges.
For example, instead of using triples to represent each detail, a property graph might let you store all properties about an entity (e.g., “Berlin”) directly in a single node. This makes property graphs more intuitive for many developers and engineers because they more closely resemble object-oriented structures: you have entities (nodes) that possess attributes (properties) and are connected to other entities through relationships (edges).
The significant benefit here is a condensed representation, which speeds up traversal and queries in some scenarios. However, this also introduces a trade-off: while property graphs are more straightforward to query and maintain, they lack some complex relationship modeling features RDF offers, particularly when connecting properties to each other.
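A dependency-free sketch of the property-graph style, with invented data, might look like this: properties live directly on nodes and edges instead of being decomposed into extra triples.

```python
nodes = {
    "berlin": {"label": "City", "population": 3_500_000, "country": "Germany"},
    "alice":  {"label": "Person", "name": "Alice"},
}
edges = [
    # (source, relationship, target, edge properties)
    ("alice", "LIVES_IN", "berlin", {"since": 2019}),
]

# a query is then a plain traversal over the edge list
residents = [s for (s, rel, t, _) in edges if rel == "LIVES_IN" and t == "berlin"]
print(residents)  # ['alice']
```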
Graph of Graphs and Subgraphs for Entity Modeling
A third approach — which takes elements from RDF and property graphs — involves modeling entities using subgraphs or nested graphs. In this model, each entity can be represented as a graph. This allows for a detailed and flexible description of attributes without exploding every detail into individual triples or lumping them all together into properties.
For instance, consider a person entity with a complex employment history. Instead of representing every employment detail in one node (as in a property graph), or as several linked nodes (as in RDF), you can treat the employment history as a subgraph. This subgraph could then contain nodes for different jobs, each linked with specific properties and connections. This approach keeps the complexity where it belongs and provides better flexibility when new attributes or entities need to be added.
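A rough sketch of the entity-as-subgraph idea follows; the structure and naming here are mine, purely for illustration.

```python
# The employment history is its own small graph hanging off the person.
person = {
    "id": "person:ada",
    "name": "Ada",
    "employment": {             # a nested subgraph, not a flat property
        "nodes": {
            "job1": {"title": "Engineer", "employer": "Acme"},
            "job2": {"title": "CTO", "employer": "Initech"},
        },
        "edges": [("job1", "FOLLOWED_BY", "job2")],
    },
}
print([j["title"] for j in person["employment"]["nodes"].values()])
# ['Engineer', 'CTO']
```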
Hypergraphs and Metagraphs
When discussing more advanced forms of graphs, we encounter hypergraphs and metagraphs. These take the idea of relationships to a new level. A hypergraph allows an edge to connect more than two nodes, which is extremely useful when modeling scenarios where relationships aren’t just pairwise. For example, a “Project” could connect multiple “People,” “Resources,” and “Outcomes,” all in a single edge. This way, hypergraphs help in reducing the complexity of modeling high-order relationships.
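As a toy illustration of a hyperedge connecting more than two nodes (the identifiers are invented):

```python
# A hyperedge is an edge whose endpoint set may have any size, so a single
# relation can bind a project, several people, and an outcome at once.
hyperedges = [
    {"relation": "works_on",
     "members": {"project:apollo", "person:ada", "person:bob", "outcome:launch"}},
]

# querying: everything tied together by the works_on relation
involved = {m for e in hyperedges if e["relation"] == "works_on"
            for m in e["members"]}
print(sorted(involved))
```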
Metagraphs, on the other hand, enable nodes and edges to themselves be represented as graphs. This is an extremely powerful feature when we consider the needs of artificial intelligence, as it allows for the modeling of relationships between relationships, an essential aspect for any system that needs to capture not just facts, but their interdependencies and contexts.
Balancing Structure and Properties
One of the recurring challenges when modeling knowledge is finding the balance between structure and properties. With RDF, you get high flexibility and standardization, but complexity can quickly escalate as you decompose everything into triples. Property graphs simplify the representation by using attributes but lose out on the depth of connection modeling. Meanwhile, the graph-of-graphs approach and hypergraphs offer advanced modeling capabilities at the cost of increased computational complexity.
So, how do you decide which model to use? It comes down to your use case. RDF and nested graphs are strong contenders if you need deep linkage and are working with highly variable data. For more straightforward, engineer-friendly modeling, property graphs shine. And when dealing with very complex multi-way relationships or meta-level knowledge, hypergraphs and metagraphs provide the necessary tools.
The key takeaway is that no single approach is perfect. Instead, it’s all about the modeling goals: how do you want to query the graph, what relationships are meaningful, and how much complexity are you willing to manage?
Conclusion
Modeling AI semantic memory using knowledge graphs is a challenging but rewarding process. The different approaches — RDF, property graphs, and advanced graph modeling techniques like nested graphs and hypergraphs — each offer unique strengths and weaknesses. Whether you are building a personal knowledge graph or scaling up to AI that integrates multiple streams of linked data, it’s essential to understand the trade-offs each approach brings.
In the end, the choice of representation comes down to the nature of your data and your specific needs for querying and maintaining semantic relationships. The world of knowledge graphs is vast, with many tools and frameworks to explore. Stay connected and keep experimenting to find the balance that works for your projects.
-
@ 40bdcc08:ad00fd2c
2025-05-06 14:24:22Introduction
Bitcoin’s `OP_RETURN` opcode, a mechanism for embedding small data in transactions, has ignited a significant debate within the Bitcoin community. Originally designed to support limited metadata while preserving Bitcoin’s role as a peer-to-peer electronic cash system, `OP_RETURN` is now at the center of proposals that could redefine Bitcoin’s identity. The immutable nature of Bitcoin’s timechain makes it an attractive platform for data storage, creating tension with those who prioritize its monetary function. This discussion, particularly around Bitcoin Core pull request #32406 (GitHub PR #32406), highlights a critical juncture for Bitcoin’s future.
What is `OP_RETURN`?
Introduced in 2014, `OP_RETURN` allows users to attach up to 80 bytes of data to a Bitcoin transaction. Unlike other transaction outputs, `OP_RETURN` outputs are provably unspendable, meaning they don’t burden the Unspent Transaction Output (UTXO) set—a critical database for Bitcoin nodes. This feature was a compromise to provide a standardized, less harmful way to include metadata, addressing earlier practices that embedded data in ways that bloated the UTXO set. The 80-byte limit and restriction to one `OP_RETURN` output per transaction are part of Bitcoin Core’s standardness rules, which guide transaction relay and mining but are not enforced by the network’s consensus rules (Bitcoin Stack Exchange).
Standardness vs. Consensus Rules
Standardness rules are Bitcoin Core’s default policies for relaying and mining transactions. They differ from consensus rules, which define what transactions are valid across the entire network. For `OP_RETURN`:
- Consensus Rules: Allow `OP_RETURN` outputs with data up to the maximum script size (approximately 10,000 bytes) and multiple outputs per transaction (Bitcoin Stack Exchange).
- Standardness Rules: Limit `OP_RETURN` data to 80 bytes and one output per transaction to discourage excessive data storage and maintain network efficiency.
Node operators can adjust these policies using settings like `-datacarrier` (enables/disables `OP_RETURN` relay) and `-datacarriersize` (sets the maximum data size, defaulting to 83 bytes to account for the `OP_RETURN` opcode and pushdata byte). These settings allow flexibility but reflect Bitcoin Core’s default stance on limiting data usage.
The Proposal: Pull Request #32406
Bitcoin Core pull request #32406, proposed by developer instagibbs, seeks to relax these standardness restrictions (GitHub PR #32406). Key changes include:
- Removing Default Size Limits: The default `-datacarriersize` would be uncapped, allowing larger `OP_RETURN` data without a predefined limit.
- Allowing Multiple Outputs: The restriction to one `OP_RETURN` output per transaction would be lifted, with the total data size across all outputs subject to a configurable limit.
- Deprecating Configuration Options: The `-datacarrier` and `-datacarriersize` settings are marked as deprecated, signaling potential removal in future releases, which could limit node operators’ ability to enforce custom restrictions.
This proposal does not alter consensus rules, meaning miners and nodes can already accept transactions with larger or multiple `OP_RETURN` outputs. Instead, it changes Bitcoin Core’s default relay policy to align with existing practices, such as miners accepting non-standard transactions via services like Marathon Digital’s Slipstream (CoinDesk).
Node Operator Flexibility
Currently, node operators can customize `OP_RETURN` handling:
- Default Settings: Relay transactions with one `OP_RETURN` output up to 80 bytes.
- Custom Settings: Operators can disable `OP_RETURN` relay (`-datacarrier=0`) or adjust the size limit (e.g., `-datacarriersize=100`).
These options remain in #32406 but are deprecated, suggesting that future Bitcoin Core versions might not support such customization, potentially standardizing the uncapped policy.
Arguments in Favor of Relaxing Limits
Supporters of pull request #32406 and similar proposals argue that the current restrictions are outdated and ineffective. Their key points include:
- Ineffective Limits: Developers bypass the 80-byte limit using methods like Inscriptions, which store data in other transaction parts, often at higher cost and inefficiency (BitcoinDev Mailing List). Relaxing `OP_RETURN` could channel data into a more efficient format.
- Preventing UTXO Bloat: By encouraging `OP_RETURN` use, which doesn’t affect the UTXO set, the proposal could reduce reliance on harmful alternatives like unspendable Taproot outputs used by projects like Citrea’s Clementine bridge.
- Supporting Innovation: Projects like Citrea require more data (e.g., 144 bytes) for security proofs, and relaxed limits could enable new Layer 2 solutions (CryptoSlate).
- Code Simplification: Developers like Peter Todd argue that these limits complicate Bitcoin Core’s codebase unnecessarily (CoinGeek).
- Aligning with Practice: Miners already process non-standard transactions, and uncapping defaults could improve fee estimation and reduce reliance on out-of-band services, as noted by ismaelsadeeq in the pull request discussion.
use, which doesn’t affect the UTXO set, the proposal could reduce reliance on harmful alternatives like unspendable Taproot outputs used by projects like Citrea’s Clementine bridge. - Supporting Innovation: Projects like Citrea require more data (e.g., 144 bytes) for security proofs, and relaxed limits could enable new Layer 2 solutions (CryptoSlate). - Code Simplification: Developers like Peter Todd argue that these limits complicate Bitcoin Core’s codebase unnecessarily (CoinGeek). - Aligning with Practice: Miners already process non-standard transactions, and uncapping defaults could improve fee estimation and reduce reliance on out-of-band services, as noted by ismaelsadeeq in the pull request discussion.In the GitHub discussion, developers like Sjors and TheCharlatan expressed support (Concept ACK), citing these efficiency and innovation benefits.
Arguments Against Relaxing Limits
Opponents, including prominent developers and community members, raise significant concerns about the implications of these changes:
- Deviation from Bitcoin’s Purpose: Critics like Luke Dashjr, who called the proposal “utter insanity,” argue that Bitcoin’s base layer should prioritize peer-to-peer cash, not data storage (CoinDesk). Jason Hughes warned it could turn Bitcoin into a “worthless altcoin” (BeInCrypto).
- Blockchain Bloat: Additional data increases the storage and processing burden on full nodes, potentially making node operation cost-prohibitive and threatening decentralization (CryptoSlate).
- Network Congestion: Unrestricted data could lead to “spam” transactions, raising fees and hindering Bitcoin’s use for financial transactions.
- Risk of Illicit Content: The timechain’s immutability means data, including potentially illegal or objectionable content, is permanently stored on every node. The 80-byte limit acts as a practical barrier, and relaxing it could exacerbate this issue.
- Preserving Consensus: Developers like John Carvalho view the limits as a hard-won community agreement, not to be changed lightly.
In the pull request discussion, nsvrn and moth-oss expressed concerns about spam and centralization, advocating for gradual changes. Concept NACKs from developers like wizkid057 and Luke Dashjr reflect strong opposition.
Community Feedback
The GitHub discussion for pull request #32406 shows a divided community:
- Support (Concept ACK): Sjors, polespinasa, ismaelsadeeq, miketwenty1, TheCharlatan, Psifour.
- Opposition (Concept NACK): wizkid057, BitcoinMechanic, Retropex, nsvrn, moth-oss, Luke Dashjr.
- Other: Peter Todd provided a stale ACK, indicating partial or outdated support.
Additional discussions on the BitcoinDev mailing list and related pull requests (e.g., #32359 by Peter Todd) highlight similar arguments, with #32359 proposing a more aggressive removal of all `OP_RETURN` limits and configuration options (GitHub PR #32359).

| Feedback Type | Developers | Key Points |
|---------------|------------|------------|
| Concept ACK | Sjors, ismaelsadeeq, others | Improves efficiency, supports innovation, aligns with mining practices. |
| Concept NACK | Luke Dashjr, wizkid057, others | Risks bloat, spam, centralization, and deviation from Bitcoin’s purpose. |
| Stale ACK | Peter Todd | Acknowledges proposal but with reservations or outdated support. |
Workarounds and Their Implications
The existence of workarounds, such as Inscriptions, which exploit SegWit discounts to embed data, is a key argument for relaxing `OP_RETURN` limits. These methods are costlier and less efficient, often costing more than `OP_RETURN` for data under 143 bytes (BitcoinDev Mailing List). Supporters argue that formalizing larger `OP_RETURN` data could streamline these use cases. Critics, however, see workarounds as a reason to strengthen, not weaken, restrictions, emphasizing the need to address underlying incentives rather than accommodating bypasses.
Ecosystem Pressures
External factors influence the debate:
- Miners: Services like Marathon Digital’s Slipstream process non-standard transactions for a fee, showing that market incentives already bypass standardness rules.
- Layer 2 Projects: Citrea’s Clementine bridge, requiring more data for security proofs, exemplifies the demand for relaxed limits to support innovative applications.
- Community Dynamics: The debate echoes past controversies, like the Ordinals debate, where data storage via inscriptions raised similar concerns about Bitcoin’s purpose (CoinDesk).
Bitcoin’s Identity at Stake
The `OP_RETURN` debate is not merely technical but philosophical, questioning whether Bitcoin should remain a focused monetary system or evolve into a broader data platform. Supporters see relaxed limits as a pragmatic step toward efficiency and innovation, while opponents view them as a risk to Bitcoin’s decentralization, accessibility, and core mission. The community’s decision will have lasting implications, affecting node operators, miners, developers, and users.
Conclusion
As Bitcoin navigates this crossroads, the community must balance the potential benefits of relaxed `OP_RETURN` limits—such as improved efficiency and support for new applications—against the risks of blockchain bloat, network congestion, and deviation from its monetary roots. The ongoing discussion is accessible via pull request #32406 on GitHub (GitHub PR #32406). Readers are encouraged to explore the debate and contribute to ensuring that any changes align with Bitcoin’s long-term goals as a decentralized, secure, and reliable system. -
@ e6817453:b0ac3c39
2024-12-07 14:52:47Temporal semantics, and temporal and time-aware knowledge graphs. We have different memory models for artificial intelligence agents. We all try to mimic, somehow, how the brain works, or at least how the brain's declarative memory works. We have the split between episodic memory and semantic memory. And we also have a lot of theories, right?
Declarative Memory of the Human Brain
How is semantic memory formed? We all know that our brain stores semantic memory in a way quite close to the concept we have in personal knowledge graphs: connected entities that form relationships with each other. So far, so good. And then we have a lot of concepts for how episodic memory and our experiences get transferred into the semantic:
- hippocampus indexing and retrieval
- semanticization of episodic memories
- episodic-semantic shift theory
They all give a different perspective on how different parts of declarative memory cooperate.
We know that episodic memories get semanticized over time: you end up with semantic knowledge without the notion of time, while the episodic memory itself has probably just decayed.
But, you know, it’s still an open question:
do we want to mimic an AI agent’s memory as a human brain memory, or do we want to create something different?
It's an open question to which we have no good answer. And if you go to neuroscience theory and check how episodic and semantic memory interact, you will still find a lot of theories, yeah?
Some of them say the hippocampus keeps the indexes of the memory. Others say that you semanticize episodic memory. Others say there is a separate process that digests episodes and experience into semantics. But all of them agree that these are operationally two separate kinds of memory, even two separate regions of the brain, and that the semantic one is, let's say, more protected.
So it's harder to forget semantic facts than episodes. And what I've been thinking about for a long time is exactly this: semantic memory.
Temporal Semantics
It’s memory about the facts, but you somehow mix the time information with the semantics. I already described a lot of things, including how we could combine time with knowledge graphs and how people do it.
There are multiple ways we could persist such information, but we all hit the wall because the complexity of time and the semantics of time are highly complex concepts.
Time in a Semantic context is not a timestamp.
What I mean is that when you have a fact and you just mention that I was there at this particular moment, like, I don't know, 15:40 on Monday, it's already ambiguous, because we don't know which Monday. So you need to give the exact date, but usually you do not have experiences like that.
You do not record your memories like that, except you do the journaling and all of the things. So, usually, you have no direct time references. What I mean is that you could say that I was there and it was some event, blah, blah, blah.
Somehow, we form a chain of events that connect with each other and maybe will be connected to some period of time if we are lucky enough. This means that we could not easily represent temporal-aware information as just a timestamp or validity and all of the things.
For sure, the validity of knowledge-graph facts (a simple quintuple with start and end dates) is a big topic, and it could solve a lot of the time cases. It's super simple: you give the start and end dates and you are done. But it does not handle facts that have a relative time or time information embedded in the fact itself; it solves many use cases but struggles with facts in an indirect temporal context. I like the simplicity of this idea. But the problem with this approach is that in most cases we simply don't have these timestamps. We don't have the timestamps where this information starts and ends. And it does not model many events in our life, especially processes, ongoing activities, or recurrent events.
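For reference, the validity-interval idea is easy to sketch. Here is a minimal Python version; the fact shown is a real historical one, but the structure and function names are just illustrative.

```python
from datetime import date

quintuples = [
    # (subject, predicate, object, valid_from, valid_to)
    ("angela_merkel", "holds_office", "chancellor",
     date(2005, 11, 22), date(2021, 12, 8)),
]

def valid_at(facts, when):
    """Return the facts whose validity interval covers `when`."""
    return [f for f in facts
            if f[3] <= when and (f[4] is None or when <= f[4])]

print(valid_at(quintuples, date(2015, 1, 1)))
```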
I'm thinking more about the time of semantics, where you have a time model such as a hybrid clock or some global clock that provides a partial ordering of events. That means you have a chain of experiences and a chain of facts that carry different time contexts.
We could deduce the time from this chain of events. But it's a big, big topic for research. What I actually want to achieve is not a separation into episodic and semantic memory; it's having something in between.
Blockchain of connected events and facts
I call it time-aware semantics or time-aware knowledge graphs, where we encode the semantic fact together with its time component. I doubt that time should be a simple timestamp or the region between two timestamps. For me, it is more a chain of facts that have a partial order and form a blockchain-like database, or a partially ordered acyclic graph of facts that are temporally connected. We could have some notion of time that is understandable to the agent, and a model that allows us to order events, focus on what the agent knows, and create chains of events from that time knowledge.
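Here is a small sketch, with invented facts, of that partially ordered graph-of-facts idea: edges mean "happened after", and relative order can be derived even though no fact carries a timestamp.

```python
# Facts as a DAG; "after" lists the facts this one is known to follow.
facts = {
    "f1": {"text": "moved to Berlin", "after": []},
    "f2": {"text": "started new job", "after": ["f1"]},
    "f3": {"text": "met Sam at work", "after": ["f2"]},
    "f4": {"text": "adopted a dog", "after": ["f1"]},  # unordered vs f2, f3
}

def happened_before(a, b):
    """True if fact `a` is an ancestor of `b` in the partial order."""
    frontier = list(facts[b]["after"])
    while frontier:
        f = frontier.pop()
        if f == a:
            return True
        frontier.extend(facts[f]["after"])
    return False

print(happened_before("f1", "f3"), happened_before("f2", "f4"))  # True False
```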
Time anchors
We may have particular times in the chain that allow us to assign more concrete times to the rest of the events. But it's still an open research topic. Temporal semantics splits into a couple of domains. One domain is how to add time to knowledge graphs; we already have many different solutions, and I described them in my previous articles.
Another domain is the agent's memory and how an artificial intelligence's memory treats time. This one is much more complex, because here we cannot operate with simple timestamps. We need a representation of time that is understandable both by the model and by the agent that will work with this model. And that is a much bigger topic for research.
-
@ 4ba8e86d:89d32de4
2025-04-21 02:13:56Tutorial by nostr:nostr:npub1rc56x0ek0dd303eph523g3chm0wmrs5wdk6vs0ehd0m5fn8t7y4sqra3tk; original posts below:
Part 1: http://xh6liiypqffzwnu5734ucwps37tn2g6npthvugz3gdoqpikujju525yd.onion/263585/tutorial-debloat-de-celulares-android-via-adb-parte-1
Part 2: http://xh6liiypqffzwnu5734ucwps37tn2g6npthvugz3gdoqpikujju525yd.onion/index.php/263586/tutorial-debloat-de-celulares-android-via-adb-parte-2
When it comes to privacy on phones, one of the commonly mentioned measures is removing bloatware from the device, also called debloating. The most effective way to do this is undoubtedly switching the operating system. Custom ROMs such as LineageOS, GrapheneOS, Iodé, CalyxOS, etc., are already quite lean in this regard, especially when G-Apps are not installed with the system. However, this practice can end up causing unwanted problems, such as losing device features and even incompatibility with banking apps, which makes it more attractive for those who own more than one device and can set one aside just for privacy. With that in mind, people who own only a single mobile device, who need those apps or features but share this pro-privacy outlook, look for a middle ground between keeping the stock ROM and not having their data collected by all that bloatware. Fortunately, removing bloatware is possible, and it can be done via root or, as this article will cover, via adb.
What is bloatware?
Bloatware is the combination of the words bloat + software; in other words, bloatware is basically a useless or easily replaceable program (preinstalled on your device by the manufacturer or carrier) that sits on your device just taking up storage space, consuming RAM and, worse, collecting your data and sending it to external servers, besides adding more points of vulnerability.
What is adb?
The Android Debug Bridge, or just adb, is a tool that uses shell user permissions and allows commands to be sent from a computer to an Android device, requiring only that USB debugging is enabled. It can also be used directly on the phone from Android 11 onwards, using Termux and wireless debugging (or Wi-Fi debugging). The tool works normally on devices without root, and also works when the phone is in Recovery Mode.
Requirements:
For computers:
• USB debugging enabled on the phone; • A computer with adb; • A USB cable;
For phones:
• Wireless debugging (or Wi-Fi debugging) enabled on the phone; • Termux; • Android 11 or higher;
For both:
• The NetGuard firewall installed and configured on the phone; • A bloatware list for your device;
Enabling debugging:
To enable USB debugging on your device, look up how to enable the developer options for your model, and enable debugging there. For wireless debugging, it only needs to be enabled at the moment you connect the device to Termux.
Installing and configuring NetGuard
NetGuard can be installed from the Google Play Store itself, but preferably install it from F-Droid or GitHub to avoid telemetry.
F-Droid: https://f-droid.org/packages/eu.faircode.netguard/
Github: https://github.com/M66B/NetGuard/releases
Once installed, configure it as follows:
Settings → defaults (whitelist/blacklist) → enable the first three options (block Wi-Fi, block mobile data, and apply rules "when the screen is on");
Settings → advanced options → enable the first two (manage system apps and log internet access);
With this, all apps will be blocked from accessing the internet, whether over Wi-Fi or mobile data, and on the app's main page you simply allow network access for the apps you are going to use (if needed). Allow the app to run in the background without battery-optimization restrictions, so that it is already active when the phone boots.
Bloatware list
Not all bloatware is generic; the bloatware will differ by brand, model, Android version, and even region.
To get a bloatware list for your device, if your phone has been on the market for a while, you will easily find ready-made lists just by searching for them. Suppose we have a Samsung Galaxy Note 10 Plus in hand; just type into your search engine:
Samsung Galaxy Note 10 Plus bloatware list
These lists will probably already include the bloatware from the most diverse regions, saving you the trouble of hunting for a more specific list.
If your device is very recent and/or you cannot find a ready-made bloatware list, I have to say you have just stepped in shit, because it is a royal pain to research every single application to learn its function and whether it is essential to the system or easily replaceable.
Fair warning: further on, if your lovely self removes one of those apps that was essential to the system without knowing it, you will end up losing some important function or, worse, on reboot the system may be broken, forcing you to go through a factory reset and repeat the whole process all over again.
Downloading adb on computers
To use the adb tool on computers, just download the package called SDK platform-tools, available at this link: https://developer.android.com/tools/releases/platform-tools. There you can get the download for Windows, Mac, and Linux.
Once downloaded, just extract the zipped file; inside it is a folder called platform-tools, which you simply open in a terminal to use adb.
Downloading adb on phones with Termux
To use the adb tool directly on the phone, we first have to install the Termux app, which is a Linux terminal emulator that already ships adb in its repository. You can find the app on the Google Play Store, but once again I recommend downloading it from F-Droid or directly from the project's GitHub.
F-Droid: https://f-droid.org/en/packages/com.termux/
Github: https://github.com/termux/termux-app/releases
The debloat process
Before we start, it is important to make clear that you should not just go removing every piece of bloatware right away without a second thought; after all, some of it needs to be replaced first, some may be essential to you for some activity or function, and some may even be irreplaceable.
Some examples of bloatware that must be replaced before removal are the launcher (after all, it is the system's graphical interface) and the keyboard (without it you can only type with an external keyboard). The launcher and keyboard can be replaced by any others; my personal recommendation is the ones that respect your privacy, such as Pie Launcher and Simple Launcher for the former, and OpenBoard and FlorisBoard for the keyboard, all open source and available on F-Droid.
From the bloatware list, identify which apps you like, need, or prefer not to replace; you are in no way obligated to remove every possible piece of bloatware. Modify your system as you see fit. NetGuard lists all of the phone's apps with their package names, so you can filter well which ones not to remove.
A clear example of bloatware that is irreplaceable, and therefore cannot be removed, is com.android.mtp, a protocol whose function is to assist the device's communication with a computer via USB; yet for some reason it has network access and frequently communicates with external servers. For cases like this, the best solution is to block that bloatware's network access with NetGuard.
MTP attempting to communicate with external servers:
Running the adb shell
On the computer
Back up all of your important files to some external storage, and factory-reset your phone with a hard reset. After the reset, and with USB debugging enabled, connect your phone and the PC with a USB cable. Your device will most likely just start charging, so allow data transfer so that the computer can communicate normally with the phone.
On the PC, open the platform-tools folder in a terminal and run the following command:
./adb start-server
The output should be:
daemon not running; starting now at tcp:5037 daemon started successfully
And if nothing shows up, run:
./adb kill-server
And start it again.
With adb connected to the phone, run:
./adb shell
This lets you run commands directly on the device. In my case, my phone is a Redmi Note 8 Pro, codename Begonia.
So the prompt should be:
begonia:/ $
If an error like this occurs:
adb: device unauthorized. This adb server’s $ADB_VENDOR_KEYS is not set Try ‘adb kill-server’ if that seems wrong. Otherwise check for a confirmation dialog on your device.
Check the phone for a prompt asking you to authorize USB debugging; if it appears, authorize it and try again. If nothing appears, run kill-server and repeat the process.
On the phone
After doing the same backup and hard-reset process described above, install Termux and, with it open, run the command:
pkg install android-tools
When the message "Do you want to continue? [Y/n]" appears, just press enter again to accept and finish the installation.
Now go to the developer options and enable wireless debugging. Inside the wireless debugging options there is a device-pairing option using a code, which will show you a pairing code together with an IP address and port; these will be used for the connection with Termux.
To make the process easier, I recommend opening both the settings and Termux at the same time and splitting the screen between the two apps, like this:
To pair Termux with the device, you don't need to type the IP shown; just replace it with "localhost". The port and the pairing code, however, must be typed exactly as shown. Run:
adb pair localhost:port PairingCode
Following the image shown above, the command would be "adb pair localhost:41255 757495".
With the device paired with Termux, you now just need to connect in order to run commands. To do so, run:
adb connect localhost:port
Note: the port you must use in this command is not the one shown next to the pairing code, but the one shown on the main wireless-debugging screen.
Done! Termux and adb are successfully connected to the device; now just run the adb shell as usual:
adb shell
Removal in practice
With the adb shell running, you are ready to remove the bloatware. In my case, I will only demonstrate the removal of one app (Google Maps), since the command is the same for any other, changing only the package name.
Inside NetGuard, checking the Google Maps details:
We can see that even while not in use, and with the device's location turned off, the app is desperately trying to communicate with external servers and report who knows what. No surprises so far; the important part is that we can see the Google Maps package name is com.google.android.apps.maps, and to remove it from the phone, just run:
pm uninstall --user 0 com.google.android.apps.maps
And done, bloatware removed! Now just repeat the process for the rest of the bloatware, changing only the package name.
To speed up the process, you can prepare a list of the commands in a text editor; when you paste it into the terminal, they will run one after another.
Example list:
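(The original post displayed a screenshot of such a list here. Assuming a few common Google apps as purely illustrative targets, it might look like this:)
pm uninstall --user 0 com.google.android.apps.maps
pm uninstall --user 0 com.google.android.videos
pm uninstall --user 0 com.google.android.apps.youtube.music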
If the damsel removed something by accident, it is also possible to recover the package with the command:
cmd package install-existing package.name
Post-debloat
After cleaning your system as much as possible, reboot the device. If it boots into recovery mode and won't reboot normally, it means you removed some app that was "essential" to the system, and you will have to factory-reset the phone and repeat the whole removal again, this time removing only a few bloatware apps at a time and rebooting the device until you find out which one cannot be removed. Yes, it's a lot of work... but who told you to want privacy?
If the device reboots normally after the removal, congratulations, now just use your phone as you see fit! Keep NetGuard running at all times so the bloatware that could not be removed won't communicate with external servers, switch to open-source apps from F-Droid, and install other apps through the Aurora Store instead of the Google Play Store.
References: In case you're an Australopithecus and found this guide hard to follow, here is a video lesson (3:14:40) by Anderson from the Ciberdef channel, going through the whole process: http://odysee.com/@zai:5/Como-remover-at%C3%A9-200-APLICATIVOS-que-colocam-a-sua-PRIVACIDADE-E-SEGURAN%C3%87A-em-risco.:4?lid=6d50f40314eee7e2f218536d9e5d300290931d23
Anderson's PDFs mentioned in the video lesson: credits to anon6837264 http://eternalcbrzpicytj4zyguygpmkjlkddxob7tptlr25cdipe5svyqoqd.onion/file/3863a834d29285d397b73a4af6fb1bbe67c888d72d30/t-05e63192d02ffd.pdf
How to install Termux and adb on the phone: https://youtu.be/APolZrPHSms
-
@ a39d19ec:3d88f61e
2024-11-21 12:05:09A state-controlled money supply can influence the development of socialist policies and practices in various ways. Although the relationship is not deterministic, state control over the money supply can contribute to a larger role of the state in the economy and facilitate the implementation of socialist ideals.
Fiscal Policy Capabilities
When the state manages the money supply, it gains the ability to implement fiscal policies that can lead to an expansion of social programs and welfare initiatives. Funding these programs by creating money can enhance the state's influence over the economy and move it closer to a socialist model. The Soviet Union, for instance, had a centralized banking system that enabled the state to fund massive industrialization and social programs, significantly expanding the state's role in the economy.
Wealth Redistribution
Controlling the money supply can also allow the state to influence economic inequality through monetary policies, effectively redistributing wealth and reducing income disparities. By implementing low-interest loans or providing financial assistance to disadvantaged groups, the state can narrow the wealth gap and promote social equality, as seen in many European welfare states.
Central Planning
A state-controlled money supply can contribute to increased central planning, as the state gains more influence over the economy. Central banks, which are state-owned or heavily influenced by the state, play a crucial role in managing the money supply and facilitating central planning. This aligns with socialist principles that advocate for a planned economy where resources are allocated according to social needs rather than market forces.
Incentives for Staff
Staff members working in state institutions responsible for managing the money supply have various incentives to keep the system going. These incentives include job security, professional expertise and reputation, political alignment, regulatory capture, institutional inertia, and legal and administrative barriers. While these factors can differ among individuals, they can collectively contribute to the persistence of a state-controlled money supply system.
In conclusion, a state-controlled money supply can facilitate the development of socialist policies and practices by enabling fiscal policies, wealth redistribution, and central planning. The staff responsible for managing the money supply have diverse incentives to maintain the system, further ensuring its continuation. However, it is essential to note that many factors influence the trajectory of an economic system, and the relationship between state control over the money supply and socialism is not inevitable.
-
@ b99efe77:f3de3616
2025-05-09 19:47:09🚦Traffic Light Control System🚦
This Petri net represents a traffic control protocol ensuring that two traffic lights alternate safely and are never both green at the same time.
```petrinet
;start () -> greenLight1 redLight2
;toRed1 greenLight1 -> queue redLight1
;toGreen2 redLight2 queue -> greenLight2
;toGreen1 queue redLight1 -> greenLight1
;toRed2 greenLight2 -> redLight2 queue
;stop redLight1 queue redLight2 -> ()
```
-
@ a39d19ec:3d88f61e
2024-11-17 10:48:56This week's functional 3d print is the "Dino Clip".
Dino Clip
I printed it some years ago for my son, so he would have his own clip for cereal bags.
Now it is used to hold a bag of dog food closed.
The design by "Sneaks" is a so-called "print in place". This means that the whole clip, moving parts included, is printed as one piece, with no assembly needed after the print.
The clip is very strong, and I would print it again if I need a "heavy duty" clip for more rigid or big bags. Link to the file at Printables
-
@ f72e682e:c51af867
All across the Lightning Network we can detect quite a lot of nodes, especially new ones but also old ones, that show a concerning lack of good node operation, which impedes proper routing. I've seen nodes of varying capacity whose channels are stagnant and non-performant, which raises a question: what is the point of maintaining a public node if you are not able to route and dynamically assign resources as needed? It is certainly a useless node, and the channels those nodes share with better-maintained nodes are also useless, through no fault of the good ones but because of the bad ones, which makes the whole network less performant and great than it should be.
For the sake of improving the Lightning Network, I have created this guide so every node out there can become useful and also greatly improve its own routing gains. Do not expect to become rich or even live off routing fees; that is impossible unless you have a node with 100 or more BTC in 2025, but at the very least a node should be able to cover its own maintenance costs; that's the idea. The problem is that, currently, most nodes run at a loss, and that is closely related to their fee policy and the choice of nodes they connect to. Let's put an end to this. Here you will learn how to earn at least enough to cover your node's electricity, and with luck, a bit more.
Current earnings cover electricity and the payment of my node:
3K sats per day might not seem much for a 5 BTC capacity (2.5 BTC real outbound) node, but the screenshot was taken on a bad day, when the mempool was empty. I took the screenshot on a bad day on purpose, to prove my point. On other days, especially when Bitcoin is going bullish and being used a lot, I have seen 20K per day. A quick calculation yields around 1M sats per year at a minimum, more than enough to pay for electricity, the machine, and even a bit extra for beers and fun! Real gains across the year could be closer to 5M in my case, which is not bad. And what is remarkable, I keep fees generally low in most cases, except when a channel has no liquidity, in which case they must be high, as you will understand later in this article. So if you double or triple my recommended fees, I would expect quite a lot more in earnings. So expect gains of around 2% of total capacity (4% of the amount you put in) per year at a minimum, and any extra beyond that from fine-tuning my recommendations will surely be very welcome!
Step 1: put the node in a good machine
Please don't use an old computer or laptop unless you change the SSD for a new one. Bitcoin and Lightning use the SSD heavily, which means it will get fried sooner or later. That is why I recommend changing the SSD every 2 years even if it still feels fine. If your SSD dies during operation, expect big losses. I've seen this so many times, and it has also happened to me, so I am very serious about this recommendation. Also, please use only Linux with the ext4 file format; I've seen other formats, including ZFS, fail badly. If your filesystem fails, the sqlite db that LND or CLN uses will fail, and you will force close many if not most channels, with big fees for onchain closings, which will totally ruin your gains. You have been warned!
Also, please take your time to configure a clearnet (ipv4) address. Do not rely only on Tor, because Tor is slow and unreliable, especially when updating channel states on the gossip network, which you will be doing a lot. Of course, configure Tor too, but as a secondary option, because too many nodes are Tor-only, which is unfortunate.
Step 2: connect to good nodes
As a public node operator, your duty is to connect to as many nodes as possible, but first of all to good, reputable nodes. Your first 10 channels should be with big nodes and service providers, like exchanges and wallets, but also with very well positioned big nodes. Take your time to select these first 10 nodes and connect to the ones you think will improve your position in the network. Don't simply choose the 10 biggest; take your time to study the fees. Select nodes that use a wide range of fees, from 0 to 1000ppm. Don't discard a node just because you see some channels with high fees; it could simply mean they have no liquidity in that channel right now. But if all its channels have high fees, or at least all its small channels do, then discard it.
Then, when you have your first 10 big nodes connected, head over to https://lightningnetwork.plus/ to choose less popular ones. You need them, because you seek to fill the voids between smaller nodes, and that is where most of your revenue will come from. Always try to do swaps; use the liquidity pool later, when you have enough total inbound liquidity. Remember that total capacity is not total outbound: total capacity is total outbound + total inbound. So you can start with 0.25 BTC of your own, but total capacity could be much higher if other peers have open channels to you.
A proper public node should have a minimum of 50 channels at its peak. What matters is not so much the size of the channels as their quantity and quality. A node with 50x500k channels will usually perform 10x better than a node with 5x5M channels, even if they have the same total capacity. This is because more routing opportunities will be found when you have more channels, which means you are much better positioned.
Anyway, the recommended minimum is 1M per channel, because most HTLCs are 100k to 500k, and anything under 1M will wipe out all the liquidity in a channel in one or two routings. This could change in the future because of the Bitcoin price, but in 2025 this is the state of things. If you don't have 0.25 BTC to open 50 channels (25 opened by you, 25 by others via swaps), just use smaller channels; don't let your available liquidity crush your excitement, who knows what the future holds! Remember that we are just at the beginning of this technology, and nothing prevents your channels from staying open for the next 20 years, when 1 BTC = $1M! I would put the absolute minimum at 250k per channel, which means a 12.5M node (6.25M sats of your own to start with), but even that is too precarious in 2025. Hopefully not in the future, though! If you have less than that, my honest recommendation is to run a private node and open private channels only, and only if you absolutely need a node because you provide a service to multiple people and simpler wallets won't do. Right now, I can think of only one example where an ultra-small node is needed instead of a wallet: using LNBits to serve your small business or family. Be aware, anyway, that a 12.5M node will definitely not cover its running costs in 2025; it is just an investment and positioning for the future!
In any case never, ever put all your BTC in a LN node: at most one third of your bitcoins, and only when you are confident.
Also remember you have to be online 24/7. Please don't set up a node if you can't. Remember you are providing a constant service, not an intermittent one. This guide won't work if you are not committed to this rule.
Step 3: understand the flow
I've seen too many node operators who do not understand how payments are routed, and this is a big problem, because routing is the basis of everything we do with a LN node.
Payments go from one node to another, to another, to another, until they reach their destination. Each node has what is called an outbound fee. This fee controls how much it costs to route a payment through that node. If the fee is low, the route is considered attractive and other nodes will prefer it. If the fee is high, it is obvious that nodes will not choose that route unless there is no other way.
But there is a problem here: all channels have a liquidity limit. If a channel has 1M liquidity and a payment of 500k comes through it, the channel now has 500k liquidity, that is, a ratio of 0.5. If another 400k comes through, it now has 100k liquidity and a ratio of 0.1. If somebody then tries to route a 200k payment through that channel, an error will happen, because it doesn't have enough liquidity. This is called an HTLC failure, and these are quite normal. Liquidity can also flow backwards, which means the channel becomes the inbound side instead of the outbound one; so if 300k comes in, in the example above, the liquidity ratio is now 0.4 (the 100k already there plus the 300k that just came in). So it is easy to understand that liquidity is very volatile: it will come in and out with any successful in or out HTLC.
The problem is: how do you know if a channel has liquidity? For privacy reasons, the liquidity of a channel is never announced; only the two connected nodes know it. This is logical, to prevent bad actors from figuring out which payments have been made by other people. So the only possible solution is to try all the connected channels you have until one lets the payment through because it has enough liquidity. And this is always done in order of outbound fees, from low to high. So the channel with the lowest fee and enough liquidity will catch the prize.
There is a way to signal whether you have liquidity or not, and it is based on scarcity: if you don't have much liquidity, you increase the outbound fee, so other nodes will not find it attractive to route through you in that direction. You don't have much liquidity, so why bother allowing routing? But when you have outbound liquidity again, because other nodes have routed in the opposite (inbound) direction through another channel of yours that has outbound liquidity, you intelligently lower the fees to signal the channel's newly increased liquidity. So the idea is simple: if you have liquidity in the channel, you set low fees; if you don't have liquidity, you set high fees. Please read that again until you fully understand it, it is extremely important.
There is another concept introduced by LND, which is negative inbound fees: if you set a negative inbound fee, for example -100ppm, it means that any payment going from that inbound channel to another of your outbound channels will get a discount of up to 100ppm. (Don't worry, you will never lose money; LND forbids routing at a loss, so 100ppm is the maximum discount, and it will be less if the outgoing channel charges less than 100ppm.) What this does is encourage the filling of empty channels at the cost of earning less on channels with plenty of liquidity. This is very good, because it will automatically rebalance your extremes: channels with no liquidity will be filled up, channels with plenty of liquidity will be emptied down, creating a balance.
It is obvious that the total ratio, including all your channels, should be around 100%. That means that the total amount summing all channels of inbound and outbound should be approximately the same. Don’t get obsessed with this, 80% or 120% is ok too, but if it is lower or higher than that you should take measures to open or close channels, or even swap out or in using boltz.exchange or LOOP.
Step 4: managing fees
So, in order to route properly, you will have to monitor all your channels on a regular basis. The minimum recommended frequency is once a day. You can do this automatically or manually. Some people prefer to do it manually, because each channel has its own characteristics and some fees work better than others, which is something you learn over time by observing the flow. But other people, like me, don't want to spend so much time on it and do automatic fee management using the charge-lnd or lndg automators. A mixture of both styles is possible by disabling automatic fee management for selected channels.
Every node operator has his/her own preferences, but here are some basic recommendations that you can tweak over time as you acquire experience:
ratio > 0.98: fees 0 (or less than 10)
0.2 < ratio < 0.98: fees proportional, max 128, min 16
0.2 > ratio > 0.05: fees 500, inbound -16
ratio < 0.05: fees 1000, inbound -64
ratio = 0: fees more than 1000, inbound -128
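If it helps to see those recommendations as code, here is a minimal shell sketch; the linear interpolation between 16 and 128ppm is one reasonable reading of "fees proportional", not an exact formula from charge-lnd or lndg:

```
#!/bin/sh
# Suggest an outbound fee (ppm) from a channel's outbound-liquidity ratio,
# following the table above. Illustrative only.
suggested_fee_ppm() {
  awk -v r="$1" 'BEGIN {
    if      (r > 0.98)  print 0      # full: encourage routing
    else if (r >= 0.2)  print int(16 + (128 - 16) * (0.98 - r) / (0.98 - 0.2))
    else if (r >= 0.05) print 500    # low liquidity: pair with inbound -16
    else if (r > 0)     print 1000   # almost empty: pair with inbound -64
    else                print 1500   # empty: "more than 1000", inbound -128
  }'
}

suggested_fee_ppm 0.5   # prints 84 with these numbers
```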
So, as you can see, when the channel is full we encourage routing; when the channel is more balanced is when the earnings occur (from 16 to 128ppm); when the channel is mostly empty we discourage forward routing (500ppm) but encourage backwards routing (inbound -16); and when it is almost empty we totally discourage forward routing (1000ppm) while encouraging backwards routing (inbound -64). And when someone has just opened a channel with us, all the liquidity is theirs, so we aggressively encourage inbound routing with ultra-high outbound fees and ultra-high inbound discounts. Simple, eh?

Step 5: automatic fee management
As stated before, if you use LND you can automate this with charge-lnd, lndg, or Lightning Terminal. If you use CLN, you are probably limited to writing a custom script, because I don't know of any similar tool for it, apart from CL-BOSS, which is unmaintained and non-customizable.
You will run this configuration at most once per hour and at least once per day. You should not run it more often than once per hour for two reasons: 1. Channel states stored in the gossip take from some minutes to some hours to propagate properly. 2. Some nodes will ban you if you try to update more than once per hour. What I recommend is once every 2 hours for big nodes with more than 50 channels. If you have fewer than 50 channels, your gossip will be slow to propagate, so run it once a day. If you get many "Insufficient Fee" errors, it is because you are trying to update channel states too frequently. Also, some people report that increasing the numgraphsyncpeers variable in the LND configuration file helps with propagation, but be aware that this will increase bandwidth usage.
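If you want to try that tweak, the setting lives in lnd.conf; the value below is purely illustrative, not a recommendation from this guide:

```
[Application Options]
; more gossip sync peers propagate your updates faster, at the cost of bandwidth
numgraphsyncpeers=10
```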
I had been using lndg for some time, but I switched to charge-lnd because it is clearly superior: faster and more customizable. Lndg is still great for rebalancing (which I use a lot) and as a general interface, but I have disabled its fee management, which I now do with charge-lnd. If you can't access charge-lnd, then just use lndg at the frequency chosen above, but be aware that its configuration parameters are very limited, as you will soon realize (you are limited to just one strategy, proportional, and it is very slow, as it changes the fee in incremental steps). Yet using lndg is better than nothing.
I have not tested Lightning Terminal from Lightning Labs, so I can't say anything about it.
But here is a good starting configuration for charge-lnd that you can customize to your preferences:
```
[default]
# 'default' is special, it is used if no other policy matches a channel
strategy = static
base_fee_msat = 128
fee_ppm = 96
inbound_base_fee_msat = 0
inbound_fee_ppm = 0
min_fee_ppm_delta = 20

[mydefaults]
# no strategy, so this only sets some defaults
base_fee_msat = 128
min_fee_ppm_delta = 0

[lost-onchain-sync]
# The fact that lnd was not synchronized with the chain for more than 5 minutes
# was an indicator of a severe problem in the past.
onchain.synced_to_chain = false
base_fee_msat = 210_000
fee_ppm = 210_000

[expensive]
# match channels where the peer node has set a high (>=8_000 ppm) fee rate
# and set the same fee rate on our side (strategy=match_peer)
chan.min_fee_ppm = 8000
strategy = match_peer

[leafnode]
# charge non-routing (private=true) peers a bit more for our service
chan.private = true
strategy = static
fee_ppm = 1000

[encourage-routing]
# 'autobalance' (lower fees so using outbound is more attractive)
chan.min_ratio = 0.98
inbound_base_fee_msat = 0
inbound_fee_ppm = 0
strategy = static
base_fee_msat = 64
fee_ppm = 16

[discourage-routing]
# 'autobalance' (higher fees so using outbound is less attractive)
chan.max_ratio = 0.2
chan.min_ratio = 0.05
strategy = proportional
inbound_base_fee_msat = -64
inbound_fee_ppm = -16
min_fee_ppm = 32
max_fee_ppm = 700
base_fee_msat = 1_000

[all-liquidity-is-theirs]
chan.max_ratio = 0.00
inbound_base_fee_msat = -128
inbound_fee_ppm = -128
strategy = static
base_fee_msat = 1_000
fee_ppm = 1000

[discourage-routing-extreme]
chan.max_ratio = 0.05
inbound_base_fee_msat = -128
inbound_fee_ppm = -32
strategy = proportional
min_fee_ppm = 32
max_fee_ppm = 1000
base_fee_msat = 1_000

[proportional]
# 'proportional' can also be used to auto balance (lower fee rate when low
# remote balance & higher rate when higher remote balance)
# fee_ppm decreases linearly with the channel balance ratio (min_fee_ppm when
# ratio is 1, max_fee_ppm when ratio is 0)
# 20% excess:
chan.min_ratio = 0.2
chan.max_ratio = 0.98
strategy = proportional
min_fee_ppm = 32
# 20% excess, so for a max of 128, it's calculated 128/(1-0.20)=160
max_fee_ppm = 160
inbound_base_fee_msat = 0
inbound_fee_ppm = 0
base_fee_msat = 128
min_fee_ppm_delta = 16
```
So you might run this config in a crontab, or with your node distribution's script if one is provided. I think Umbrel has this app in its portfolio, so just use it if you have Umbrel and ignore the following. If you run it manually or with a distro that doesn't ship charge-lnd, you can configure a crontab. This is just an example; please ask support for the proper configuration on your distro. And if your distro does not include charge-lnd, ask support to include it; at this point it's quite a necessity. Anyway, here is the manual configuration:

```
$ crontab -e
0 */2 * * * echo "=======> `date`" >> /home/nodo/charge-lnd/log && /home/nodo/charge-lnd/env/bin/charge-lnd -c /home/nodo/charge-lnd/my.config >> /home/nodo/charge-lnd/log
```

That assumes the charge-lnd executable is installed under /home/nodo/charge-lnd/env/bin/charge-lnd, the config is in /home/nodo/charge-lnd/my.config, and LND is running without docker. If it is running under docker, you will have to ask your distro's support.
Step 6: help your peers
Remember that your peers are not only your competition, they are also your customers. So it is a strange symbiosis: you compete with them, but they also help you (and you help them).
If your peers are not well informed and run badly maintained nodes, you lose out, because your channels with them will become stagnant and will not route. If they are well informed and know how to manage a node, the channels will not be stagnant and they will route through you.
So it is stupid to keep this information secret. Every node operator should know it. And the more people who know it, the better for everybody.
So, please, if you detect stagnant channels and badly maintained peers connected to you, just point them to this guide, or guide them yourself. It's a good idea to bookmark this guide so you have it ready for the future.
And that’s it!! Happy routing!!
originally posted at https://stacker.news/items/972730
-
@ a367f9eb:0633efea
2024-11-05 08:48:41Last week, an investigation by Reuters revealed that Chinese researchers have been using open-source AI tools to build nefarious-sounding models that may have some military application.
The reporting purports that adversaries in the Chinese Communist Party and its military wing are taking advantage of the liberal software licensing of American innovations in the AI space, which could someday have capabilities to presumably harm the United States.
In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the People’s Liberation Army’s (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta’s Llama as a base for what it calls “ChatBIT”.
The researchers used an earlier Llama 13B large language model (LLM) from Meta, incorporating their own parameters to construct a military-focused AI tool to gather and process intelligence, and offer accurate and reliable information for operational decision-making.
While I'm doubtful that today's existing chatbot-like tools will be the ultimate battlefield for a new geopolitical war (cue up the computer-simulated war from the Star Trek episode "A Taste of Armageddon"), this recent exposé requires us to revisit why large language models are released as open-source code in the first place.
Added to that, should it matter that an adversary is having a poke around and may ultimately use them for some purpose we may not like, whether that be China, Russia, North Korea, or Iran?
The number of open-source AI LLMs continues to grow each day, with projects like Vicuna, LLaMA, BLOOMB, Falcon, and Mistral available for download. In fact, there are over one million open-source LLMs available as of writing this post. With some decent hardware, every global citizen can download these codebases and run them on their computer.
With regard to this specific story, we could assume it to be a selective leak by a competitor of Meta which created the LLaMA model, intended to harm its reputation among those with cybersecurity and national security credentials. There are potentially trillions of dollars on the line.
Or it could be the revelation of something more sinister happening in the military-sponsored labs of Chinese hackers who have already been caught attacking American infrastructure, data, and yes, your credit history?
As consumer advocates who believe in the necessity of liberal democracies to safeguard our liberties against authoritarianism, we should absolutely remain skeptical when it comes to the communist regime in Beijing. We’ve written as much many times.
At the same time, however, we should not subrogate our own critical thinking and principles because it suits a convenient narrative.
Consumers of all stripes deserve technological freedom, and innovators should be free to provide that to us. And open-source software has provided the very foundations for all of this.
Open-source matters
When we discuss open-source software and code, what we're really talking about is the ability for people other than the creators to use it.
The various licensing schemes – ranging from GNU General Public License (GPL) to the MIT License and various public domain classifications – determine whether other people can use the code, edit it to their liking, and run it on their machine. Some licenses even allow you to monetize the modifications you’ve made.
While many different types of software will be fully licensed and made proprietary, restricting or even penalizing those who attempt to use it on their own, many developers have created software intended to be released to the public. This allows multiple contributors to add to the codebase and to make changes to improve it for public benefit.
Open-source software matters because anyone, anywhere can download and run the code on their own. They can also modify it, edit it, and tailor it to their specific need. The code is intended to be shared and built upon not because of some altruistic belief, but rather to make it accessible for everyone and create a broad base. This is how we create standards for technologies that provide the ground floor for further tinkering to deliver value to consumers.
Open-source libraries create the building blocks that decrease the hassle and cost of building a new web platform, smartphone, or even a computer language. They distribute common code that can be built upon, assuring interoperability and setting standards for all of our devices and technologies to talk to each other.
I am myself a proponent of open-source software. The server I run in my home has dozens of dockerized applications sourced directly from open-source contributors on GitHub and DockerHub. When there are versions or adaptations that I don’t like, I can pick and choose which I prefer. I can even make comments or add edits if I’ve found a better way for them to run.
Whether you know it or not, many of you run the Linux operating system as the base for your Macbook or any other computer and use all kinds of web tools that have active repositories forked or modified by open-source contributors online. This code is auditable by everyone and can be scrutinized or reviewed by whoever wants to (even AI bots).
This is the same software that runs your airlines, powers the farms that deliver your food, and supports the entire global monetary system. The code of the first decentralized cryptocurrency Bitcoin is also open-source, which has allowed thousands of copycat protocols that have revolutionized how we view money.
You know what else is open-source and available for everyone to use, modify, and build upon?
PHP, Mozilla Firefox, LibreOffice, MySQL, Python, Git, Docker, and WordPress. All protocols and languages that power the web. Friend or foe alike, anyone can download these pieces of software and run them how they see fit.
Open-source code is speech, and it is knowledge.
We build upon it to make information and technology accessible. Attempts to curb open-source, therefore, amount to restricting speech and knowledge.
Open-source is for your friends, and enemies
In the context of Artificial Intelligence, many different developers and companies have chosen to take their large language models and make them available via an open-source license.
At this very moment, you can click on over to Hugging Face, download an AI model, and build a chatbot or scripting machine suited to your needs. All for free (as long as you have the power and bandwidth).
Thousands of companies in the AI sector are doing this at this very moment, discovering ways of building on top of open-source models to develop new apps, tools, and services to offer to companies and individuals. It’s how many different applications are coming to life and thousands more jobs are being created.
We know this can be useful to friends, but what about enemies?
As the AI wars heat up between liberal democracies like the US, the UK, and (sluggishly) the European Union, we know that authoritarian adversaries like the CCP and Russia are building their own applications.
The fear that China will use open-source US models to create some kind of military application is a clear and present danger for many political and national security researchers, as well as politicians.
A bipartisan group of US House lawmakers want to put export controls on AI models, as well as block foreign access to US cloud servers that may be hosting AI software.
If this seems familiar, we should also remember that the US government once classified cryptography and encryption as “munitions” that could not be exported to other countries (see The Crypto Wars). Many of the arguments we hear today were invoked by some of the same people as back then.
Now, encryption protocols are the gold standard for many different banking and web services, messaging, and all kinds of electronic communication. We expect our friends to use it, and our foes as well. Because code is knowledge and speech, we know how to evaluate it and respond if we need to.
Regardless of who uses open-source AI, this is how we should view it today. These are merely tools that people will use for good or ill. It’s up to governments to determine how best to stop illiberal or nefarious uses that harm us, rather than try to outlaw or restrict building of free and open software in the first place.
Limiting open-source threatens our own advancement
If we set out to restrict and limit our ability to create and share open-source code, no matter who uses it, that would be tantamount to imposing censorship. There must be another way.
If there is a “Hundred Year Marathon” between the United States and liberal democracies on one side and autocracies like the Chinese Communist Party on the other, this is not something that will be won or lost based on software licenses. We need as much competition as possible.
The Chinese military has been building up its capabilities with trillions of dollars’ worth of investments that span far beyond AI chatbots and skip logic protocols.
The theft of intellectual property at factories in Shenzhen, or in US courts by third-party litigation funding coming from China, is very real and will have serious economic consequences. It may even change the balance of power if our economies and countries turn to war footing.
But these are separate issues from the ability of free people to create and share open-source code which we can all benefit from. In fact, if we want to continue our way of life and keep adding to global productivity and growth, defending open-source is a necessity.
If liberal democracies want to compete with our global adversaries, it will not be done by reducing the freedoms of citizens in our own countries.
Originally published on the website of the Consumer Choice Center.
-
@ b99efe77:f3de3616
2025-05-09 19:44:37🚦Traffic Light Control System🚦
This Petri net represents a traffic control protocol ensuring that two traffic lights alternate safely and are never both green at the same time.
```petrinet
;start () -> greenLight1 redLight2
;toRed1 greenLight1 -> queue redLight1
;toGreen2 redLight2 queue -> greenLight2
;toGreen1 queue redLight1 -> greenLight1
;toRed2 greenLight2 -> redLight2 queue
;stop redLight1 queue redLight2 -> ()
```
-
@ 2b24a1fa:17750f64
2025-05-09 19:43:28
-
@ 4ba8e86d:89d32de4
MOBILE OPERATING SYSTEMS
GrapheneOS : https://njump.me/nevent1qqs8t76evdgrg4qegdtyrq2rved63pr29wlqyj627n9tj4vlu66tqpqpzdmhxue69uhk7enxvd5xz6tw9ec82c30qgsyh28gd5ke0ztdeyehc0jsq6gcj0tnzatjlkql3dqamkja38fjmeqrqsqqqqqppcqec9
CalyxOS : https://njump.me/nevent1qqsrm0lws2atln2kt3cqjacathnw0uj0jsxwklt37p7t380hl8mmstcpydmhxue69uhkummnw3ez6an9wf5kv6t9vsh8wetvd3hhyer9wghxuet59uq3vamnwvaz7tmwdaehgu3wvf3kstnwd9hx5cf0qy2hwumn8ghj7un9d3shjtnyv9kh2uewd9hj7qgcwaehxw309aex2mrp0yhxxatjwfjkuapwveukjtcpzpmhxue69uhkummnw3ezumt0d5hszrnhwden5te0dehhxtnvdakz7qfywaehxw309ahx7um5wgh8ymm4dej8ymmrdd3xjarrda5kuetjwvhxxmmd9uq3uamnwvaz7tmwdaehgu3dv3jhvtnhv4kxcmmjv3jhytnwv46z7qghwaehxw309aex2mrp0yhxummnw3ezucnpdejz7qgewaehxw309ahx7um5wghxymmwva3x7mn89e3k7mf0qythwumn8ghj7cn5vvhxkmr9dejxz7n49e3k7mf0qyg8wumn8ghj7mn09eehgu3wvdez7smttdu
LineageOS : https://njump.me/nevent1qqsgw7sr36gaty48cf4snw0ezg5mg4atzhqayuge752esd469p26qfgpzdmhxue69uhhwmm59e6hg7r09ehkuef0qgsyh28gd5ke0ztdeyehc0jsq6gcj0tnzatjlkql3dqamkja38fjmeqrqsqqqqqpnvm779
DESKTOP OPERATING SYSTEMS
Tails : https://njump.me/nevent1qqsf09ztvuu60g6xprazv2vxqqy5qlxjs4dkc9d36ta48q75cs9le4qpzemhxue69uhkummnw3ex2mrfw3jhxtn0wfnj7q3qfw5wsmfdj7ykmjfn0sl9qp533y7hx96h9lvplz6pmhd9mzwn9hjqxpqqqqqqz34ag5t
Qubes OS : https://njump.me/nevent1qqsp6jujgwl68uvurw0cw3hfhr40xq20sj7rl3z4yzwnhp9sdpa7augpzpmhxue69uhkummnw3ezumt0d5hsz9mhwden5te0wfjkccte9ehx7um5wghxyctwvshsz9thwden5te0dehhxarj9ehhsarj9ejx2a30qyg8wumn8ghj7mn09eehgu3wvdez7qg4waehxw309aex2mrp0yhxgctdw4eju6t09uqjxamnwvaz7tmwdaehgu3dwejhy6txd9jkgtnhv4kxcmmjv3jhytnwv46z7qgwwaehxw309ahx7uewd3hkctcpremhxue69uhkummnw3ez6er9wch8wetvd3hhyer9wghxuet59uj3ljr8
Kali linux : https://njump.me/nevent1qqswlav72xdvamuyp9xc38c6t7070l3n2uxu67ssmal2g7gv35nmvhspzpmhxue69uhkumewwd68ytnrwghsygzt4r5x6tvh39kujvmu8egqdyvf84e3w4e0mq0ckswamfwcn5eduspsgqqqqqqswt9rxe
Whonix : https://njump.me/nevent1qqs85gvejvzhk086lwh6edma7fv07p5c3wnwnxnzthwwntg2x6773egpydmhxue69uhkummnw3ez6an9wf5kv6t9vsh8wetvd3hhyer9wghxuet59uq3qamnwvaz7tmwdaehgu3wd4hk6tcpzemhxue69uhkummnw3ezucnrdqhxu6twdfsj7qfywaehxw309ahx7um5wgh8ymm4dej8ymmrdd3xjarrda5kuetjwvhxxmmd9uq3wamnwvaz7tmzw33ju6mvv4hxgct6w5hxxmmd9uq3qamnwvaz7tmwduh8xarj9e3hytcpzamhxue69uhhyetvv9ujumn0wd68ytnzv9hxgtcpz4mhxue69uhhyetvv9ujuerpd46hxtnfduhszrnhwden5te0dehhxtnvdakz7qg7waehxw309ahx7um5wgkkgetk9emk2mrvdaexgetj9ehx2ap0sen9p6
Kodachi : https://njump.me/nevent1qqsf5zszgurpd0vwdznzk98hck294zygw0s8dah6fpd309ecpreqtrgpz4mhxue69uhhyetvv9ujuerpd46hxtnfduhszgmhwden5te0dehhxarj94mx2unfve5k2epwwajkcmr0wfjx2u3wdejhgtcpremhxue69uhkummnw3ez6er9wch8wetvd3hhyer9wghxuet59uq3qamnwvaz7tmwdaehgu3wd4hk6tcpzamhxue69uhkyarr9e4kcetwv3sh5afwvdhk6tcpzpmhxue69uhkumewwd68ytnrwghszfrhwden5te0dehhxarj9eex7atwv3ex7cmtvf5hgcm0d9hx2unn9e3k7mf0qyvhwumn8ghj7mn0wd68ytnzdahxwcn0denjucm0d5hszrnhwden5te0dehhxtnvdakz7qgkwaehxw309ahx7um5wghxycmg9ehxjmn2vyhsz9mhwden5te0wfjkccte9ehx7um5wghxyctwvshs94a4d5
PGP
Openkeychain : https://njump.me/nevent1qqs9qtjgsulp76t7jkquf8nk8txs2ftsr0qke6mjmsc2svtwfvswzyqpz4mhxue69uhhyetvv9ujuerpd46hxtnfduhsygzt4r5x6tvh39kujvmu8egqdyvf84e3w4e0mq0ckswamfwcn5eduspsgqqqqqqs36mp0w
Kleopatra : https://njump.me/nevent1qqspnevn932hdggvp4zam6mfyce0hmnxsp9wp8htpumq9vm3anq6etsppemhxue69uhkummn9ekx7mp0qgsyh28gd5ke0ztdeyehc0jsq6gcj0tnzatjlkql3dqamkja38fjmeqrqsqqqqqpuaeghp
Pgp : https://njump.me/nevent1qqsggek707qf3rzttextmgqhym6d4g479jdnlnj78j96y0ut0x9nemcpzamhxue69uhhyetvv9ujuurjd9kkzmpwdejhgtczyp9636rd9ktcjmwfxd7ru5qxjxyn6uch2uhas8utg8wa5hvf6vk7gqcyqqqqqqgptemhe
How does PGP work? : https://njump.me/nevent1qqsz9r7azc8pkvfmkg2hv0nufaexjtnvga0yl85x9hu7ptpg20gxxpspremhxue69uhkummnw3ez6ur4vgh8wetvd3hhyer9wghxuet59upzqjagapkjm9ufdhynxlp72qrfrzfawvt4wt7cr795rhw6tkyaxt0yqvzqqqqqqy259fhs
Why I Wrote PGP. - Philip Zimmermann.
https://njump.me/nevent1qqsvysn94gm8prxn3jw04r0xwc6sngkskg756z48jsyrmqssvxtm7ncpzamhxue69uhhyetvv9ujumn0wd68ytnzv9hxgtchzxnad
VPN
Vpn : https://njump.me/nevent1qqs27ltgsr6mh4ffpseexz6s37355df3zsur709d0s89u2nugpcygsspzpmhxue69uhkummnw3ezumt0d5hsygzt4r5x6tvh39kujvmu8egqdyvf84e3w4e0mq0ckswamfwcn5eduspsgqqqqqqshzu2fk
InviZible Pro : https://njump.me/nevent1qqsvyevf2vld23a3xrpvarc72ndpcmfvc3lc45jej0j5kcsg36jq53cpz3mhxue69uhhyetvv9ujuerpd46hxtnfdupzqjagapkjm9ufdhynxlp72qrfrzfawvt4wt7cr795rhw6tkyaxt0yqvzqqqqqqy33y5l4
Orbot: https://njump.me/nevent1qqsxswkyt6pe34egxp9w70cy83h40ururj6m9sxjdmfass4cjm4495stft593
I2P
i2p : https://njump.me/nevent1qqsvnj8n983r4knwjmnkfyum242q4c0cnd338l4z8p0m6xsmx89mxkslx0pgg
Understanding and using the I2P network : https://njump.me/nevent1qqsxchp5ycpatjf5s4ag25jkawmw6kkf64vl43vnprxdcwrpnms9qkcppemhxue69uhkummn9ekx7mp0qgsyh28gd5ke0ztdeyehc0jsq6gcj0tnzatjlkql3dqamkja38fjmeqrqsqqqqqpvht4mn
Creating and accessing your Email account on I2P : https://njump.me/nevent1qqs9v9dz897kh8e5lfar0dl7ljltf2fpdathsn3dkdsq7wg4ksr8xfgpr4mhxue69uhkummnw3ezucnfw33k76twv4ezuum0vd5kzmp0qgsyh28gd5ke0ztdeyehc0jsq6gcj0tnzatjlkql3dqamkja38fjmeqrqsqqqqqpw8mzum
2FA APPS
Aegis Authenticator : https://njump.me/nevent1qqsfttdwcn9equlrmtf9n6wee7lqntppzm03pzdcj4cdnxel3pz44zspz4mhxue69uhhyetvv9ujumn0wd68ytnzvuhsygzt4r5x6tvh39kujvmu8egqdyvf84e3w4e0mq0ckswamfwcn5eduspsgqqqqqqscvtydq
YubiKey : https://njump.me/nevent1qqstsnn69y4sf4330n7039zxm7wza3ch7sn6plhzmd57w6j9jssavtspvemhxue69uhkv6tvw3jhytnwdaehgu3wwa5kuef0dec82c330g6x6dm8ddmxzdne0pnhverevdkxxdm6wqc8v735w3snquejvsuk56pcvuurxaesxd68qdtkv3nrx6m6v3ehsctwvym8q0mzwfhkzerrv9ehg0t5wf6k2q3qfw5wsmfdj7ykmjfn0sl9qp533y7hx96h9lvplz6pmhd9mzwn9hjqxpqqqqqqzueyvgt
PASSWORD MANAGERS
KeepassDX: https://njump.me/nevent1qqswc850dr4ujvxnmpx75jauflf4arc93pqsty5pv8hxdm7lcw8ee8qpr4mhxue69uhkummnw3ezucnfw33k76twv4ezuum0vd5kzmp0qgsyh28gd5ke0ztdeyehc0jsq6gcj0tnzatjlkql3dqamkja38fjmeqrqsqqqqqpe0492n
Birwaden: https://njump.me/nevent1qqs0j5x9guk2v6xumhwqmftmcz736m9nm9wzacqwjarxmh8k4xdyzwgpr4mhxue69uhkummnw3ezucnfw33k76twv4ezuum0vd5kzmp0qgsyh28gd5ke0ztdeyehc0jsq6gcj0tnzatjlkql3dqamkja38fjmeqrqsqqqqqpwfe2kc
KeePassXC: https://njump.me/nevent1qqsgftcrd8eau7tzr2p9lecuaf7z8mx5jl9w2k66ae3lzkw5wqcy5pcl2achp
CHAT / MESSAGING
SimpleXchat : https://njump.me/nevent1qqsds5xselnnu0dyy0j49peuun72snxcgn3u55d2320n37rja9gk8lgzyp9636rd9ktcjmwfxd7ru5qxjxyn6uch2uhas8utg8wa5hvf6vk7gqcyqqqqqqgmcmj7c
Briar : https://njump.me/nevent1qqs8rrtgvjr499hreugetrl7adkhsj2zextyfsukq5aa7wxthrgcqcg05n434
Element Messenger : https://njump.me/nevent1qqsq05snlqtxm5cpzkshlf8n5d5rj9383vjytkvqp5gta37hpuwt4mqyccee6
Pidgin : https://njump.me/nevent1qqsz7kngycyx7meckx53xk8ahk98jkh400usrvykh480xa4ct9zlx2c2ywvx3
E-MAIL
Thunderbird: https://njump.me/nevent1qqspq64gg0nw7t60zsvea5eykgrm43paz845e4jn74muw5qzdvve7uqrkwtjh
ProtonMail : https://njump.me/nevent1qqs908glhk68e7ms8zqtlsqd00wu3prnpt08dwre26hd6e5fhqdw99cppemhxue69uhkummn9ekx7mp0qgsyh28gd5ke0ztdeyehc0jsq6gcj0tnzatjlkql3dqamkja38fjmeqrqsqqqqqpeyhg4z
Tutanota : https://njump.me/nevent1qqswtzh9zjxfey644qy4jsdh9465qcqd2wefx0jxa54gdckxjvkrrmqpz4mhxue69uhhyetvv9ujumt0wd68ytnsw43qygzt4r5x6tvh39kujvmu8egqdyvf84e3w4e0mq0ckswamfwcn5eduspsgqqqqqqs5hzhkv
k-9 mail : https://njump.me/nevent1qqs200g5a603y7utjgjk320r3srurrc4r66nv93mcg0x9umrw52ku5gpr3mhxue69uhkummnw3ezuumhd9ehxtt9de5kwmtp9e3kstczyp9636rd9ktcjmwfxd7ru5qxjxyn6uch2uhas8utg8wa5hvf6vk7gqcyqqqqqqgacflak
E-MAIL ALIASES
Simplelogin : https://njump.me/nevent1qqsvhz5pxqpqzr2ptanqyqgsjr50v7u9lc083fvdnglhrv36rnceppcppemhxue69uhkummn9ekx7mp0qgsyh28gd5ke0ztdeyehc0jsq6gcj0tnzatjlkql3dqamkja38fjmeqrqsqqqqqp9gsr7m
AnonAddy : https://njump.me/nevent1qqs9mcth70mkq2z25ws634qfn7vx2mlva3tkllayxergw0s7p8d3ggcpzpmhxue69uhkummnw3ezumt0d5hsygzt4r5x6tvh39kujvmu8egqdyvf84e3w4e0mq0ckswamfwcn5eduspsgqqqqqqs6mawe3
BROWSERS
Tor Browser : https://njump.me/nevent1qqs06qfxy7wzqmk76l5d8vwyg6mvcye864xla5up52fy5sptcdy39lspzemhxue69uhkummnw3ezuerpw3sju6rpw4ej7q3qfw5wsmfdj7ykmjfn0sl9qp533y7hx96h9lvplz6pmhd9mzwn9hjqxpqqqqqqzdp0urw
Mullvad Browser : https://njump.me/nevent1qqs2vsgc3wk09wdspv2mezltgg7nfdg97g0a0m5cmvkvr4nrfxluzfcpzdmhxue69uhhwmm59e6hg7r09ehkuef0qgsyh28gd5ke0ztdeyehc0jsq6gcj0tnzatjlkql3dqamkja38fjmeqrqsqqqqqpj8h6fe
LibreWolf : https://njump.me/nevent1qqswv05mlmkcuvwhe8x3u5f0kgwzug7n2ltm68fr3j06xy9qalxwq2cpzemhxue69uhkummnw3ex2mrfw3jhxtn0wfnj7q3qfw5wsmfdj7ykmjfn0sl9qp533y7hx96h9lvplz6pmhd9mzwn9hjqxpqqqqqqzuv2hxr
Cromite : https://njump.me/nevent1qqs2ut83arlu735xp8jf87w5m3vykl4lv5nwkhldkqwu3l86khzzy4cpz4mhxue69uhhyetvv9ujuerpd46hxtnfduhsygzt4r5x6tvh39kujvmu8egqdyvf84e3w4e0mq0ckswamfwcn5eduspsgqqqqqqs3dplt7
SEARCH ENGINES
Searx : https://njump.me/nevent1qqsxyzpvgzx00n50nrlgctmy497vkm2cm8dd5pdp7fmw6uh8xnxdmaspr4mhxue69uhkummnw3ezucnfw33k76twv4ezuum0vd5kzmp0qgsyh28gd5ke0ztdeyehc0jsq6gcj0tnzatjlkql3dqamkja38fjmeqrqsqqqqqp23z7ax
APP-STORE
Obtainium : https://njump.me/nevent1qqstd8kzc5w3t2v6dgf36z0qrruufzfgnc53rj88zcjgsagj5c5k4rgpz3mhxue69uhhyetvv9ujuerpd46hxtnfdupzqjagapkjm9ufdhynxlp72qrfrzfawvt4wt7cr795rhw6tkyaxt0yqvzqqqqqqyarmca3
F-Droid : https://njump.me/nevent1qqst4kry49cc9g3g8s5gdnpgyk3gjte079jdnv43f0x4e85cjkxzjesymzuu4
Droid-ify : https://njump.me/nevent1qqsrr8yu9luq0gud902erdh8gw2lfunpe93uc2u6g8rh9ep7wt3v4sgpzpmhxue69uhkummnw3ezumt0d5hsygzt4r5x6tvh39kujvmu8egqdyvf84e3w4e0mq0ckswamfwcn5eduspsgqqqqqqsfzu9vk
Aurora Store : https://njump.me/nevent1qqsy69kcaf0zkcg0qnu90mtk46ly3p2jplgpzgk62wzspjqjft4fpjgpvemhxue69uhkv6tvw3jhytnwdaehgu3wwa5kuef0dec82c330g6x6dm8ddmxzdne0pnhverevdkxxdm6wqc8v735w3snquejvsuk56pcvuurxaesxd68qdtkv3nrx6m6v3ehsctwvym8q0mzwfhkzerrv9ehg0t5wf6k2q3qfw5wsmfdj7ykmjfn0sl9qp533y7hx96h9lvplz6pmhd9mzwn9hjqxpqqqqqqzrpmsjy
RSS
Feeder : https://njump.me/nevent1qqsy29aeggpkmrc7t3c7y7ldgda7pszl7c8hh9zux80gjzrfvlhfhwqpp4mhxue69uhkummn9ekx7mqzyp9636rd9ktcjmwfxd7ru5qxjxyn6uch2uhas8utg8wa5hvf6vk7gqcyqqqqqqgsvzzjy
VIDEO CONFERENCING
Jitsi meet : https://njump.me/nevent1qqswphw67hr6qmt2fpugcj77jrk7qkfdrszum7vw7n2cu6cx4r6sh4cgkderr
KEYBOARDS
HeliBoard : https://njump.me/nevent1qqsyqpc4d28rje03dcvshv4xserftahhpeylu2ez2jutdxwds4e8syspz4mhxue69uhhyetvv9ujuerpd46hxtnfduhsygzt4r5x6tvh39kujvmu8egqdyvf84e3w4e0mq0ckswamfwcn5eduspsgqqqqqqsr8mel5
OpenBoard : https://njump.me/nevent1qqsf7zqkup03yysy67y43nj48q53sr6yym38es655fh9fp6nxpl7rqspzpmhxue69uhkumewwd68ytnrwghsygzt4r5x6tvh39kujvmu8egqdyvf84e3w4e0mq0ckswamfwcn5eduspsgqqqqqqswcvh3r
FlorisBoard : https://njump.me/nevent1qqsf7zqkup03yysy67y43nj48q53sr6yym38es655fh9fp6nxpl7rqspzpmhxue69uhkumewwd68ytnrwghsygzt4r5x6tvh39kujvmu8egqdyvf84e3w4e0mq0ckswamfwcn5eduspsgqqqqqqswcvh3r
MAPS
Osmand : https://njump.me/nevent1qqsxryp2ywj64az7n5p6jq5tn3tx5jv05te48dtmmt3lf94ydtgy4fgpzpmhxue69uhkumewwd68ytnrwghsygzt4r5x6tvh39kujvmu8egqdyvf84e3w4e0mq0ckswamfwcn5eduspsgqqqqqqs54nwpj
Organic maps : https://njump.me/nevent1qqstrecuuzkw0dyusxdq7cuwju0ftskl7anx978s5dyn4pnldrkckzqpr4mhxue69uhkummnw3ezumtp0p5k6ctrd96xzer9dshx7un8qgsyh28gd5ke0ztdeyehc0jsq6gcj0tnzatjlkql3dqamkja38fjmeqrqsqqqqqpl8z3kk
TRANSLATION
LibreTranslate : https://njump.me/nevent1qqs953g3rhf0m8jh59204uskzz56em9xdrjkelv4wnkr07huk20442cpvemhxue69uhkv6tvw3jhytnwdaehgu3wwa5kuef0dec82c330g6x6dm8ddmxzdne0pnhverevdkxxdm6wqc8v735w3snquejvsuk56pcvuurxaesxd68qdtkv3nrx6m6v3ehsctwvym8q0mzwfhkzerrv9ehg0t5wf6k2q3qfw5wsmfdj7ykmjfn0sl9qp533y7hx96h9lvplz6pmhd9mzwn9hjqxpqqqqqqzeqsx40
METADATA REMOVAL
Scrambled Exif : https://njump.me/nevent1qqs2658t702xv66p000y4mlhnvadmdxwzzfzcjkjf7kedrclr3ej7aspyfmhxue69uhk6atvw35hqmr90pjhytngw4eh5mmwv4nhjtnhdaexcep0qgsyh28gd5ke0ztdeyehc0jsq6gcj0tnzatjlkql3dqamkja38fjmeqrqsqqqqqpguu0wh
STEGANOGRAPHY
PixelKnot: https://njump.me/nevent1qqsrh0yh9mg0lx86t5wcmhh97wm6n4v0radh6sd0554ugn354wqdj8gpz3mhxue69uhhyetvv9ujuerpd46hxtnfdupzqjagapkjm9ufdhynxlp72qrfrzfawvt4wt7cr795rhw6tkyaxt0yqvzqqqqqqyuvfqdp
WORK PROFILE
Shelter : https://njump.me/nevent1qqspv9xxkmfp40cxgjuyfsyczndzmpnl83e7gugm7480mp9zhv50wkqpvemhxue69uhkv6tvw3jhytnwdaehgu3wwa5kuef0dec82c330g6x6dm8ddmxzdne0pnhverevdkxxdm6wqc8v735w3snquejvsuk56pcvuurxaesxd68qdtkv3nrx6m6v3ehsctwvym8q0mzwfhkzerrv9ehg0t5wf6k2q3qfw5wsmfdj7ykmjfn0sl9qp533y7hx96h9lvplz6pmhd9mzwn9hjqxpqqqqqqzdnu59c
PDF
MuPDF : https://njump.me/nevent1qqspn5lhe0dteys6npsrntmv2g470st8kh8p7hxxgmymqa95ejvxvfcpzpmhxue69uhkumewwd68ytnrwghsygzt4r5x6tvh39kujvmu8egqdyvf84e3w4e0mq0ckswamfwcn5eduspsgqqqqqqs4hvhvj
Librera Reader : https://njump.me/nevent1qqsg60flpuf00sash48fexvwxkly2j5z9wjvjrzt883t3eqng293f3cpvemhxue69uhkv6tvw3jhytnwdaehgu3wwa5kuef0dec82c330g6x6dm8ddmxzdne0pnhverevdkxxdm6wqc8v735w3snquejvsuk56pcvuurxaesxd68qdtkv3nrx6m6v3ehsctwvym8q0mzwfhkzerrv9ehg0t5wf6k2q3qfw5wsmfdj7ykmjfn0sl9qp533y7hx96h9lvplz6pmhd9mzwn9hjqxpqqqqqqz39tt3n
QR-Code
Binary Eye : https://njump.me/nevent1qqsz4n0uxxx3q5m0r42n9key3hchtwyp73hgh8l958rtmae5u2khgpgpvemhxue69uhkv6tvw3jhytnwdaehgu3wwa5kuef0dec82c330g6x6dm8ddmxzdne0pnhverevdkxxdm6wqc8v735w3snquejvsuk56pcvuurxaesxd68qdtkv3nrx6m6v3ehsctwvym8q0mzwfhkzerrv9ehg0t5wf6k2q3qfw5wsmfdj7ykmjfn0sl9qp533y7hx96h9lvplz6pmhd9mzwn9hjqxpqqqqqqzdmn4wp
WEATHER
Breezy Weather : https://njump.me/nevent1qqs9hjz5cz0y4am3kj33xn536uq85ydva775eqrml52mtnnpe898rzspzamhxue69uhhyetvv9ujuurjd9kkzmpwdejhgtczyp9636rd9ktcjmwfxd7ru5qxjxyn6uch2uhas8utg8wa5hvf6vk7gqcyqqqqqqgpd3tu8
ENCRYPTION
Cryptomator : https://njump.me/nevent1qqsvchvnw779m20583llgg5nlu6ph5psewetlczfac5vgw83ydmfndspzpmhxue69uhkumewwd68ytnrwghsygzt4r5x6tvh39kujvmu8egqdyvf84e3w4e0mq0ckswamfwcn5eduspsgqqqqqqsx7ppw9
VeraCrypt : https://njump.me/nevent1qqsf6wzedsnrgq6hjk5c4jj66dxnplqwc4ygr46l8z3gfh38q2fdlwgm65ej3
EXTENSIONS
uBlock Origin : https://njump.me/nevent1qqswaa666lcj2c4nhnea8u4agjtu4l8q89xjln0yrngj7ssh72ntwzql8ssdj
Snowflake : https://njump.me/nevent1qqs0ws74zlt8uced3p2vee9td8x7vln2mkacp8szdufvs2ed94ctnwchce008
CLOUD
Nextcloud : https://njump.me/nevent1qqs2utg5z9htegdtrnllreuhypkk2026x8a0xdsmfczg9wdl8rgrcgg9nhgnm
NOTEPAD
Joplin : https://njump.me/nevent1qqsz2a0laecpelsznser3xd0jfa6ch2vpxtkx6vm6qg24e78xttpk0cpr4mhxue69uhkummnw3ezucnfw33k76twv4ezuum0vd5kzmp0qgsyh28gd5ke0ztdeyehc0jsq6gcj0tnzatjlkql3dqamkja38fjmeqrqsqqqqqpdu0hft
Standard Notes : https://njump.me/nevent1qqsv3596kz3qung5v23cjc4cpq7rqxg08y36rmzgcrvw5whtme83y3s7tng6r
MUSIC
RiMusic : https://njump.me/nevent1qqsv3genqav2tfjllp86ust4umxm8tr2wd9kq8x7vrjq6ssp363mn0gpzamhxue69uhhyetvv9ujuurjd9kkzmpwdejhgtczyp9636rd9ktcjmwfxd7ru5qxjxyn6uch2uhas8utg8wa5hvf6vk7gqcyqqqqqqg42353n
ViMusic : https://njump.me/nevent1qqswx78559l4jsxsrygd8kj32sch4qu57stxq0z6twwl450vp39pdqqpvemhxue69uhkv6tvw3jhytnwdaehgu3wwa5kuef0dec82c330g6x6dm8ddmxzdne0pnhverevdkxxdm6wqc8v735w3snquejvsuk56pcvuurxaesxd68qdtkv3nrx6m6v3ehsctwvym8q0mzwfhkzerrv9ehg0t5wf6k2q3qfw5wsmfdj7ykmjfn0sl9qp533y7hx96h9lvplz6pmhd9mzwn9hjqxpqqqqqqzjg863j
PODCAST
AntennaPod : https://njump.me/nevent1qqsp4nh7k4a6zymfwqqdlxuz8ua6kdhvgeeh3uxf2c9rtp9u3e9ku8qnr8lmy
VIDEO PLAYER
VLC : https://njump.me/nevent1qqs0lz56wtlr2eye4ajs2gzn2r0dscw4y66wezhx0mue6dffth8zugcl9laky
YOUTUBE
NewPipe : https://njump.me/nevent1qqsdg06qpcjdnlvgm4xzqdap0dgjrkjewhmh4j3v4mxdl4rjh8768mgdw9uln
FreeTube : https://njump.me/nevent1qqsz6y6z7ze5gs56s8seaws8v6m6j2zu0pxa955dhq3ythmexak38mcpz4mhxue69uhhyetvv9ujuerpd46hxtnfduhsygzt4r5x6tvh39kujvmu8egqdyvf84e3w4e0mq0ckswamfwcn5eduspsgqqqqqqs5lkjvv
LibreTube : https://snort.social/e/nevent1qqstmd5m6wrdvn4gxf8xyhrwnlyaxmr89c9kjddvnvux6603f84t3fqpz4mhxue69uhhyetvv9ujumt0wd68ytnsw43qygzt4r5x6tvh39kujvmu8egqdyvf84e3w4e0mq0ckswamfwcn5eduspsgqqqqqqsswwznc
FILE SHARING
OnionShare : https://njump.me/nevent1qqsr0a4ml5nu6ud5k9yzyawcd9arznnwkrc27dzzc95q6r50xmdff6qpydmhxue69uhkummnw3ez6an9wf5kv6t9vsh8wetvd3hhyer9wghxuet59uq3uamnwvaz7tmwdaehgu3dv3jhvtnhv4kxcmmjv3jhytnwv46z7qgswaehxw309ahx7tnnw3ezucmj9uq32amnwvaz7tmjv4kxz7fwv3sk6atn9e5k7tcpzamhxue69uhkyarr9e4kcetwv3sh5afwvdhk6tcpzemhxue69uhkummnw3ezucnrdqhxu6twdfsj7qgswaehxw309ahx7um5wghx6mmd9uqjgamnwvaz7tmwdaehgu3wwfhh2mnywfhkx6mzd96xxmmfdejhyuewvdhk6tcppemhxue69uhkummn9ekx7mp0qythwumn8ghj7un9d3shjtnwdaehgu3wvfskuep0qyv8wumn8ghj7un9d3shjtnrw4e8yetwwshxv7tf9ut7qurt
Localsend : https://njump.me/nevent1qqsp8ldjhrxm09cvvcak20hrc0g8qju9f67pw7rxr2y3euyggw9284gpvemhxue69uhkv6tvw3jhytnwdaehgu3wwa5kuef0dec82c330g6x6dm8ddmxzdne0pnhverevdkxxdm6wqc8v735w3snquejvsuk56pcvuurxaesxd68qdtkv3nrx6m6v3ehsctwvym8q0mzwfhkzerrv9ehg0t5wf6k2q3qfw5wsmfdj7ykmjfn0sl9qp533y7hx96h9lvplz6pmhd9mzwn9hjqxpqqqqqqzuyghqr
BITCOIN WALLET
Ashigaru Wallet : https://njump.me/nevent1qqstx9fz8kf24wgl26un8usxwsqjvuec9f8q392llmga75tw0kfarfcpzamhxue69uhhyetvv9ujuurjd9kkzmpwdejhgtczyp9636rd9ktcjmwfxd7ru5qxjxyn6uch2uhas8utg8wa5hvf6vk7gqcyqqqqqqgvfsrqp
Samourai Wallet : https://njump.me/nevent1qqstcvjmz39rmrnrv7t5cl6p3x7pzj6jsspyh4s4vcwd2lugmre04ecpr9mhxue69uhkummnw3ezucn0denkymmwvuhxxmmd9upzqjagapkjm9ufdhynxlp72qrfrzfawvt4wt7cr795rhw6tkyaxt0yqvzqqqqqqy3rg4qs
CAMERA
opencamera : https://njump.me/nevent1qqs25glp6dh0crrjutxrgdjlnx9gtqpjtrkg29hlf7382aeyjd77jlqpzpmhxue69uhkumewwd68ytnrwghsygzt4r5x6tvh39kujvmu8egqdyvf84e3w4e0mq0ckswamfwcn5eduspsgqqqqqqssxcvgc
OFFICE
Collabora Office : https://njump.me/nevent1qqs8yn4ys6adpmeu3edmf580jhc3wluvlf823cc4ft4h0uqmfzdf99qpz4mhxue69uhhyetvv9ujuerpd46hxtnfduhsygzt4r5x6tvh39kujvmu8egqdyvf84e3w4e0mq0ckswamfwcn5eduspsgqqqqqqsj40uss
TEXTS
A Cypherpunk's Manifesto : https://njump.me/nevent1qqsd7hdlg6galn5mcuv3pm3ryfjxc4tkyph0cfqqe4du4dr4z8amqyspvemhxue69uhkv6tvw3jhytnwdaehgu3wwa5kuef0dec82c330g6x6dm8ddmxzdne0pnhverevdkxxdm6wqc8v735w3snquejvsuk56pcvuurxaesxd68qdtkv3nrx6m6v3ehsctwvym8q0mzwfhkzerrv9ehg0t5wf6k2q3qfw5wsmfdj7ykmjfn0sl9qp533y7hx96h9lvplz6pmhd9mzwn9hjqxpqqqqqqzal0efa
Operations security (OPSEC) : https://snort.social/e/nevent1qqsp323havh3y9nxzd4qmm60hw87tm9gjns0mtzg8y309uf9mv85cqcpvemhxue69uhkv6tvw3jhytnwdaehgu3wwa5kuef0dec82c330g6x6dm8ddmxzdne0pnhverevdkxxdm6wqc8v735w3snquejvsuk56pcvuurxaesxd68qdtkv3nrx6m6v3ehsctwvym8q0mzwfhkzerrv9ehg0t5wf6k2q3qfw5wsmfdj7ykmjfn0sl9qp533y7hx96h9lvplz6pmhd9mzwn9hjqxpqqqqqqz8ej9l7
THE CRYPTO ANARCHIST MANIFESTO, Timothy C. May – 1992 : https://njump.me/nevent1qqspp480wtyx2zhtwpu5gptrl8duv9rvq3mug85mp4d54qzywk3zq9gpvemhxue69uhkv6tvw3jhytnwdaehgu3wwa5kuef0dec82c330g6x6dm8ddmxzdne0pnhverevdkxxdm6wqc8v735w3snquejvsuk56pcvuurxaesxd68qdtkv3nrx6m6v3ehsctwvym8q0mzwfhkzerrv9ehg0t5wf6k2q3qfw5wsmfdj7ykmjfn0sl9qp533y7hx96h9lvplz6pmhd9mzwn9hjqxpqqqqqqz5wq496
A Declaration of the Independence of Cyberspace
- John Perry Barlow - 1996 : https://njump.me/nevent1qqs2njsy44n6p07mhgt2tnragvchasv386nf20ua5wklxqpttf6mzuqpzpmhxue69uhkummnw3ezumt0d5hsygzt4r5x6tvh39kujvmu8egqdyvf84e3w4e0mq0ckswamfwcn5eduspsgqqqqqqsukg4hr
The Cyphernomicon: Cryptography, Digital Money, and the Future of Privacy, written by Timothy C. May, published in 1994:
Full book in PDF on the PrivacyOpenSource GitHub.
https://github.com/Alexemidio/PrivacyOpenSource/raw/main/Livros/THE%20CYPHERNOMICON%20.pdf
-
@ 09fbf8f3:fa3d60f0
2024-11-02 08:00:29Third-party API collection:
Disclaimer:
The OpenAI API Keys recommended here are provided by third-party resellers, so we are not responsible for the validity or security of the API Keys; you purchase and use them at your own risk.
| Provider | Notes | Proxy address | Link |
| --- | --- | --- | --- |
| AiHubMix | Uses the OpenAI enterprise endpoint; all models on the site are priced at 86% of the official rate (including GPT-4) | https://aihubmix.com/v1 | Official site |
| OpenAI-HK | OpenAI's official API billing is based on the token length of each request and response. Each model is priced differently, per 1,000 tokens consumed; 1,000 tokens is roughly 750 English words (about 400 Chinese characters) | https://api.openai-hk.com/ | Official site |
| CloseAI | The largest commercial-grade OpenAI proxy platform in China and the first professional OpenAI relay service, positioned for enterprise commercial needs; it provides business customers with a high-quality, stable relay proxy for the official OpenAI API and is the dedicated partner platform of over a hundred companies and several research institutions | https://api.openai-proxy.org | Official site |
| OpenAI-SB | Requires Telegram to obtain an API key | https://api.openai-sb.com | Official site |
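As a hedged sketch of how such a reseller key is typically used: this assumes the proxy mirrors the official OpenAI API paths (which is how these services advertise themselves); the base URL is taken from the table above, while the model name and the $OPENAI_KEY variable are placeholders you would replace with your own values.

```bash
# Sketch only: assumes the proxy exposes the standard OpenAI-compatible
# /v1/chat/completions endpoint. Replace $OPENAI_KEY with your reseller key.
curl https://api.openai-hk.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_KEY" \
  -d '{
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```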
Continuously updated...
Promotion:
If you can't access OpenAI, go to
低调云 (Didiao Cloud)
to buy a VPN. Official site: https://didiaocloud.xyz
Invite code:
w9AjVJit
Prices start as low as ¥1.
-
@ ba3b4b1d:eadff0d3
2025-05-09 23:53:41There are "troll"-like beings in Portuguese folklore known as Trasgos, similar to gnomes and goblins. In Vanessa Fidalgo's interpretation, duende (goblin) and trasgo are synonyms, and she describes them as "grumpy, rebellious, small in stature, with an upturned nose and wrinkled skin, and the people call them trasgos or maruxinhos, which is to say, goblins!"
They hide in rural forests and are famous for playing pranks on travellers, especially shepherds. These dwarfish creatures can appear by the hearth and become quite invasive. Even if someone moves house, the Trasgos follow them to the new home. There are rituals that allegedly can drive the Trasgos out of a house.
The playful, mischievous nature of these beings is linked to a Galician Fadário, in which they were believed to be the result of children who died young, whose souls remained bound to a house or to a piece of farmland connected to the family.
In Leonese folklore they are called Trasgos. In Castilian and Aragonese folklore they are known as Martinitos.
In Asturias, the trasgo goes by different names depending on the location. It is known as Trasno, Cornín, or Xuan Dos Camíos in western Asturias, and as Gorretín Colorau or the one with the "gorro encarnado" (both meaning "little red cap") in eastern Asturias.
In Celtic mythology they are known as the "Far Darrig", meaning "red man".
Some interpretations suggest the trasgo embodies the capricious nature of fate, reminding humans of the unpredictability of life's journey. Others see it as a representation of the wild, untamed aspects of the natural world, existing at the edges of human understanding.
-
@ ed5774ac:45611c5c
2025-04-19 20:29:31April 20, 2020: The day I saw my so-called friends expose themselves as gutless, brain-dead sheep.
On that day, I shared a video exposing the damning history of the Bill & Melinda Gates Foundation's vaccine campaigns in Africa and the developing world. As Gates was on every TV screen, shilling COVID jabs that didn’t even exist, I called out his blatant financial conflict of interest and pointed out the obvious in my facebook post: "Finally someone is able to explain why Bill Gates runs from TV to TV to promote vaccination. Not surprisingly, it's all about money again…" - referencing his substantial investments in vaccine technology, including BioNTech's mRNA platform that would later produce the COVID vaccines and generate massive profits for his so-called philanthropic foundation.
The conflict of interest was undeniable. I genuinely believed anyone capable of basic critical thinking would at least pause to consider these glaring financial motives. But what followed was a masterclass in human stupidity.
My facebook post from 20 April 2020:
Not only was I branded a 'conspiracy theorist' for daring to question the billionaire who stood to make a fortune off the very vaccines he was shilling, but the brain-dead, logic-free bullshit vomited by the people around me was beyond pathetic. These barely literate morons couldn’t spell "Pfizer" without auto-correct, yet they mindlessly swallowed and repeated every lie the media and government force-fed them, branding anything that cracked their fragile reality as "conspiracy theory." Big Pharma’s rap sheet—fraud, deadly cover-ups, billions in fines—could fill libraries, yet these obedient sheep didn’t bother to open a single book or read a single study before screaming their ignorance, desperate to virtue-signal their obedience. Then, like spineless lab rats, they lined up for an experimental jab rushed to the market in months, too dumb to care that proper vaccine development takes a decade.
The pathetic part is that these idiots spend hours obsessing over reviews for their useless purchases like shoes or socks, but won’t spare 60 seconds to research the experimental cocktail being injected into their veins—or even glance at the FDA’s own damning safety reports. Those same obedient sheep would read every Yelp review for a fucking coffee shop but won't spend five minutes looking up Pfizer's criminal fraud settlements. They would demand absolute obedience to ‘The Science™’—while being unable to define mRNA, explain lipid nanoparticles, or justify why trials were still running as they queued up like cattle for their jab. If they had two brain cells to rub together or spent 30 minutes actually researching, they'd know, but no—they'd rather suck down the narrative like good little slaves, too dumb to question, too weak to think.
Worst of all, they became the system’s attack dogs—not just swallowing the poison, but forcing it down others’ throats. This wasn’t ignorance. It was betrayal. They mutated into medical brownshirts, destroying lives to virtue-signal their obedience—even as their own children’s hearts swelled with inflammation.
One conversation still haunts me to this day—a masterclass in wealth-worship delusion. A close friend, in response to my facebook post, insisted that Gates's assumed reading list magically grants him vaccine expertise, while dismissing his billion-dollar investments in the same products as 'no conflict of interest.' Worse, he argued that Gates's $5–10 billion pandemic windfall was 'deserved.'
This exchange crystallizes civilization’s intellectual surrender: reason discarded with religious fervor, replaced by blind faith in corporate propaganda.
The comment of a friend on my facebook post that still haunts me to this day:
Walking Away from the Herd
After a period of anger and disillusionment, I made a decision: I would no longer waste energy arguing with people who refused to think for themselves. If my circle couldn’t even ask basic questions—like why an untested medical intervention was being pushed with unprecedented urgency—then I needed a new community.
Fortunately, I already knew where to look. For three years, I had been involved in Bitcoin, a space where skepticism wasn’t just tolerated—it was demanded. Here, I’d met some of the most principled and independent thinkers I’d ever encountered. These were people who understood the corrupting influence of centralized power—whether in money, media, or politics—and who valued sovereignty, skepticism, and integrity. Instead of blind trust, bitcoiners practiced relentless verification. And instead of empty rhetoric, they lived by a simple creed: Don’t trust. Verify.
It wasn’t just a philosophy. It was a lifeline. So I chose my side and I walked away from the herd.
Finding My Tribe
Over the next four years, I immersed myself in Bitcoin conferences, meetups, and spaces where ideas were tested, not parroted. Here, I encountered extraordinary people: not only did they share my skepticism toward broken systems, but they challenged me to sharpen it.
No longer adrift in a sea of mindless conformity, I’d found a crew of thinkers who cut through the noise. They saw clearly what most ignored—that at the core of society’s collapse lay broken money, the silent tax on time, freedom, and truth itself. But unlike the complainers I’d left behind, these people built. They coded. They wrote. They risked careers and reputations to expose the rot. Some faced censorship; others, mockery. All understood the stakes.
These weren’t keyboard philosophers. They were modern-day Cassandras, warning of inflation’s theft, the Fed’s lies, and the coming dollar collapse—not for clout, but because they refused to kneel to a dying regime. And in their defiance, I found something rare: a tribe that didn’t just believe in a freer future. They were engineering it.
April 20, 2024: No more herd. No more lies. Only proof-of-work.
On April 20, 2024, exactly four years after my last Facebook post, the one that severed my ties to the herd for good, I stood in front of Warsaw’s iconic Palace of Culture and Science, surrounded by 400 bitcoiners who felt like family. We were there to celebrate Bitcoin’s fourth halving, but it was more than a protocol milestone. It was a reunion of sovereign individuals. Some faces I’d known since the early days; others, I’d met only hours before. We bonded instantly—heated debates, roaring laughter, zero filters on truths or on so-called conspiracy theories.
As the countdown to the halving began, it hit me: This was the antithesis of the hollow world I’d left behind. No performative outrage, no coerced consensus—just a room of unyielding minds who’d traded the illusion of safety for the grit of truth. Four years prior, I’d been alone in my resistance. Now, I raised my glass among my people - those who had seen the system's lies and chosen freedom instead. Each had their own story of awakening, their own battles fought, but here we shared the same hard-won truth.
The energy wasn’t just electric. It was alive—the kind that emerges when free people build rather than beg. For the first time, I didn’t just belong. I was home. And in that moment, the halving’s ticking clock mirrored my own journey: cyclical, predictable in its scarcity, revolutionary in its consequences. Four years had burned away the old world. What remained was stronger.
No Regrets
Leaving the herd wasn’t a choice—it was evolution. My soul shouted: "I’d rather stand alone than kneel with the masses!". The Bitcoin community became more than family; they’re living proof that the world still produces warriors, not sheep. Here, among those who forge truth, I found something extinct elsewhere: hope that burns brighter with every halving, every block, every defiant mind that joins the fight.
Change doesn’t come from the crowd. It starts when one person stops applauding.
Today, I stand exactly where I always wanted to be—shoulder-to-shoulder with my true family: the rebels, the builders, the ungovernable. Together, we’re building the decentralized future.
-
@ 4c48cf05:07f52b80
2024-10-30 01:03:42I believe that five years from now, access to artificial intelligence will be akin to what access to the Internet represents today. It will be the greatest differentiator between the haves and have nots. Unequal access to artificial intelligence will exacerbate societal inequalities and limit opportunities for those without access to it.
Back in April, the AI Index Steering Committee at the Institute for Human-Centered AI from Stanford University released The AI Index 2024 Annual Report.
Out of the extensive report (502 pages), I chose to focus on the chapter dedicated to Public Opinion. People involved with AI live in a bubble. We all know and understand AI and therefore assume that everyone else does. But is that really the case once you step out of your regular circles in Seattle or Silicon Valley and hit Main Street?
Two thirds of global respondents have a good understanding of what AI is
The exact number is 67%. My gut feeling is that this number is way too high to be realistic. At the same time, 63% of respondents are aware of ChatGPT, so maybe people are conflating AI with ChatGPT?
If so, there is so much more that they won't see coming.
This number is important because you need to read every other question and response in the survey through the lens of a respondent who believes they have a good understanding of what AI is.
A majority are nervous about AI products and services
52% of global respondents are nervous about products and services that use AI. Leading the pack are Australians at 69%, and the least worried are the Japanese at 23%. The U.S.A. is near the top at 63%.
Japan is truly an outlier, with most countries moving between 40% and 60%.
Personal data is the clear victim
Exactly half of the respondents believe that AI companies will protect their personal data. The other half believes they won't.
Expected benefits
Again, a majority of people (57%) think that it will change how they do their jobs. As for the impact on their lives, the top hits are getting things done faster (54%) and more entertainment options (51%).
The last one is a head-scratcher for me. Are people looking forward to AI-generated movies?
Concerns
Remember the 57% who thought that AI will change how they do their jobs? Well, it looks like 37% of them expect to lose theirs. Whether or not this is what will happen, that is a very high number of people who have a direct incentive to oppose AI.
Other key concerns include:
- Misuse for nefarious purposes: 49%
- Violation of citizens' privacy: 45%
Conclusion
This is the first time I have come across this report, and I will make sure to follow future annual reports to see how these trends evolve.
Overall, people are worried about AI. There are many things that could go wrong and people perceive that both jobs and privacy are on the line.
Full citation: Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark, “The AI Index 2024 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024.
The AI Index 2024 Annual Report by Stanford University is licensed under Attribution-NoDerivatives 4.0 International.
-
@ e3ba5e1a:5e433365
2025-04-15 11:03:15Prelude
I wrote this post differently than any of my others. It started with a discussion with AI on an OPSec-inspired review of separation of powers, and evolved into quite an exciting debate! I asked Grok to write up a summary in my overall writing style, which it got pretty well. I've decided to post it exactly as-is. Ultimately, I think there are two solid ideas driving my stance here:
- Perfect is the enemy of the good
- Failure is the crucible of success
Beyond that, just some hard-core belief in freedom, separation of powers, and operating from self-interest.
Intro
Alright, buckle up. I’ve been chewing on this idea for a while, and it’s time to spit it out. Let’s look at the U.S. government like I’d look at a codebase under a cybersecurity audit—OPSEC style, no fluff. Forget the endless debates about what politicians should do. That’s noise. I want to talk about what they can do, the raw powers baked into the system, and why we should stop pretending those powers are sacred. If there’s a hole, either patch it or exploit it. No half-measures. And yeah, I’m okay if the whole thing crashes a bit—failure’s a feature, not a bug.
The Filibuster: A Security Rule with No Teeth
You ever see a firewall rule that’s more theater than protection? That’s the Senate filibuster. Everyone acts like it’s this untouchable guardian of democracy, but here’s the deal: a simple majority can torch it any day. It’s not a law; it’s a Senate preference, like choosing tabs over spaces. When people call killing it the “nuclear option,” I roll my eyes. Nuclear? It’s a button labeled “press me.” If a party wants it gone, they’ll do it. So why the dance?
I say stop playing games. Get rid of the filibuster. If you’re one of those folks who thinks it’s the only thing saving us from tyranny, fine—push for a constitutional amendment to lock it in. That’s a real patch, not a Post-it note. Until then, it’s just a vulnerability begging to be exploited. Every time a party threatens to nuke it, they’re admitting it’s not essential. So let’s stop pretending and move on.
Supreme Court Packing: Because Nine’s Just a Number
Here’s another fun one: the Supreme Court. Nine justices, right? Sounds official. Except it’s not. The Constitution doesn’t say nine—it’s silent on the number. Congress could pass a law tomorrow to make it 15, 20, or 42 (hitchhiker’s reference, anyone?). Packing the court is always on the table, and both sides know it. It’s like a root exploit just sitting there, waiting for someone to log in.
So why not call the bluff? If you’re in power—say, Trump’s back in the game—say, “I’m packing the court unless we amend the Constitution to fix it at nine.” Force the issue. No more shadowboxing. And honestly? The court’s got way too much power anyway. It’s not supposed to be a super-legislature, but here we are, with justices’ ideologies driving the bus. That’s a bug, not a feature. If the court weren’t such a kingmaker, packing it wouldn’t even matter. Maybe we should be talking about clipping its wings instead of just its size.
The Executive Should Go Full Klingon
Let’s talk presidents. I’m not saying they should wear Klingon armor and start shouting “Qapla’!”—though, let’s be real, that’d be awesome. I’m saying the executive should use every scrap of power the Constitution hands them. Enforce the laws you agree with, sideline the ones you don’t. If Congress doesn’t like it, they’ve got tools: pass new laws, override vetoes, or—here’s the big one—cut the budget. That’s not chaos; that’s the system working as designed.
Right now, the real problem isn’t the president overreaching; it’s the bureaucracy. It’s like a daemon running in the background, eating CPU and ignoring the user. The president’s supposed to be the one steering, but the administrative state’s got its own agenda. Let the executive flex, push the limits, and force Congress to check it. Norms? Pfft. The Constitution’s the spec sheet—stick to it.
Let the System Crash
Here’s where I get a little spicy: I’m totally fine if the government grinds to a halt. Deadlock isn’t a disaster; it’s a feature. If the branches can’t agree, let the president veto, let Congress starve the budget, let enforcement stall. Don’t tell me about “essential services.” Nothing’s so critical it can’t take a breather. Shutdowns force everyone to the table—debate, compromise, or expose who’s dropping the ball. If the public loses trust? Good. They’ll vote out the clowns or live with the circus they elected.
Think of it like a server crash. Sometimes you need a hard reboot to clear the cruft. If voters keep picking the same bad admins, well, the country gets what it deserves. Failure’s the best teacher—way better than limping along on autopilot.
States Are the Real MVPs
If the feds fumble, states step up. Right now, states act like junior devs waiting for the lead engineer to sign off. Why? Federal money. It’s a leash, and it’s tight. Cut that cash, and states will remember they’re autonomous. Some will shine, others will tank—looking at you, California. And I’m okay with that. Let people flee to better-run states. No bailouts, no excuses. States are like competing startups: the good ones thrive, the bad ones pivot or die.
Could it get uneven? Sure. Some states might turn into sci-fi utopias while others look like a post-apocalyptic vidya game. That’s the point—competition sorts it out. Citizens can move, markets adjust, and failure’s a signal to fix your act.
Chaos Isn’t the Enemy
Yeah, this sounds messy. States ignoring federal law, external threats poking at our seams, maybe even a constitutional crisis. I’m not scared. The Supreme Court’s there to referee interstate fights, and Congress sets the rules for state-to-state play. But if it all falls apart? Still cool. States can sort it without a babysitter—it’ll be ugly, but freedom’s worth it. External enemies? They’ll either unify us or break us. If we can’t rally, we don’t deserve the win.
Centralizing power to avoid this is like rewriting your app in a single thread to prevent race conditions—sure, it’s simpler, but you’re begging for a deadlock. Decentralized chaos lets states experiment, lets people escape, lets markets breathe. States competing to cut regulations to attract businesses? That’s a race to the bottom for red tape, but a race to the top for innovation—workers might gripe, but they’ll push back, and the tension’s healthy. Bring it—let the cage match play out. The Constitution’s checks are enough if we stop coddling the system.
Why This Matters
I’m not pitching a utopia. I’m pitching a stress test. The U.S. isn’t a fragile porcelain doll; it’s a rugged piece of hardware built to take some hits. Let it fail a little—filibuster, court, feds, whatever. Patch the holes with amendments if you want, or lean into the grind. Either way, stop fearing the crash. It’s how we debug the republic.
So, what’s your take? Ready to let the system rumble, or got a better way to secure the code? Hit me up—I’m all ears.
-
@ ba3b4b1d:eadff0d3
2025-05-09 23:51:07Another sign that relations between the Celts and the Iberians were not one-directional is the presence on Irish soil of the plant Erica mackaiana, known as Galician or Asturian heath.
This is a species of heather found exclusively on the coasts of Galicia, Ireland, Asturias, and Cantabria. These plants always grow within 10 km of the shoreline, close to Iron Age coastal settlements.
They are not native to Ireland. There are indications that many species of the Hiberno-Lusitanian flora and fauna reached Ireland through intense, exclusively maritime trade. For example, Erica erigena, originally native to Gaul and Hispania, is theorized to have been transplanted to Irish soil through the Galician wine trade.
The evidence that E. mackaiana was smuggled includes the fact that it is found only at ports leading to inland routes and in isolated bays. The place with the highest concentration of the species is Roundstone Bog, set between two isolated inlets and crossed by an old Connemara road, an area with a long history of smuggling. In Kerry there is a road known as the Smugglers' Trail, which was the species' main distribution area.
-
@ 91bea5cd:1df4451c
2025-04-15 06:27:28Basics

```bash
lsblk  # List block devices and where they are mounted.
```

To create the filesystem:

```bash
mkfs.btrfs -L "ThePool" -f /dev/sdx
```

Creating a subvolume:

```bash
btrfs subvolume create SubVol
```

Mounting the filesystem:

```bash
mount -o compress=zlib,subvol=SubVol,autodefrag /dev/sdx /mnt
```

List the disks that make up the mounted filesystem:

```bash
btrfs filesystem show /mnt
```

Add a new disk to the volume:

```bash
btrfs device add -f /dev/sdy /mnt
```

List the volume's disks again:

```bash
btrfs filesystem show /mnt
```

Show the volume's disk usage:

```bash
btrfs filesystem df /mnt
```

Balance the data across the disks as raid1:

```bash
btrfs filesystem balance start -dconvert=raid1 -mconvert=raid1 /mnt
```
A scrub walks through all of the filesystem's data and metadata and verifies the checksums. If a valid copy is available (replicated block-group profiles), the damaged one is repaired. All copies of replicated profiles are validated.

To start a scrub:

```bash
btrfs scrub start /mnt
```

To see the status of a running Btrfs scrub:

```bash
btrfs scrub status /mnt
```

To see the Btrfs scrub status for each device, or to cancel a running scrub:

```bash
btrfs scrub status -d /data
btrfs scrub cancel /data
```

To resume a Btrfs scrub that you cancelled or paused:

```bash
btrfs scrub resume /data
```
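Since scrubs are easy to forget, a common pattern is to schedule one periodically. A minimal sketch, assuming cron is available and /data is your Btrfs mount point (adjust both to your setup):

```bash
# crontab entry: run a scrub every Sunday at 03:00.
# -B keeps the scrub in the foreground so cron can capture its exit status.
0 3 * * 0 /usr/bin/btrfs scrub start -B /data
```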
Listing the subvolumes:

```bash
btrfs subvolume list /Reports
```

Creating a snapshot of a subvolume:

Here we create a read-write snapshot named marketing-snap of the marketing subvolume.

```bash
btrfs subvolume snapshot /Reports/marketing /Reports/marketing-snap
```

You can also create a read-only snapshot using the -r flag, as shown. marketing-rosnap is a read-only snapshot of the marketing subvolume.

```bash
btrfs subvolume snapshot -r /Reports/marketing /Reports/marketing-rosnap
```

Forcing filesystem synchronization with the 'sync' utility

To force the filesystem to synchronize, invoke the sync option as shown. Note that the filesystem must already be mounted for the sync to succeed.

```bash
btrfs filesystem sync /Reports
```

To remove a device from the filesystem, use the device delete command as shown.

```bash
btrfs device delete /dev/sdc /Reports
```

To poll the status of a scrub, use the scrub status command with the -dR option.

```bash
btrfs scrub status -dR /Reports
```

To cancel a running scrub, use the scrub cancel command.

```bash
sudo btrfs scrub cancel /Reports
```

To resume or continue a previously interrupted scrub, run the scrub resume command:

```bash
sudo btrfs scrub resume /Reports
```

Show the storage device usage:

```bash
btrfs filesystem usage /data
```

To spread the data, metadata, and system data across all of the RAID's storage devices (including a newly added device) mounted at the /data directory, run the following command:

```bash
sudo btrfs balance start --full-balance /data
```

It can take a while to spread the data, metadata, and system data across all of the RAID's storage devices if the filesystem holds a lot of data.
Important Btrfs mount options

In this section I will explain some of the important Btrfs mount options. Let's get started.

The most important Btrfs mount options are:

**1. acl and noacl**

ACLs manage user and group permissions for the files/directories of the Btrfs filesystem.

The acl mount option enables ACLs. To disable ACLs, you can use the noacl mount option.

ACLs are enabled by default, so the Btrfs filesystem uses the acl mount option by default.

**2. autodefrag and noautodefrag**

Defragmenting a Btrfs filesystem improves its performance by reducing data fragmentation.

The autodefrag mount option enables automatic defragmentation of the Btrfs filesystem.

The noautodefrag mount option disables automatic defragmentation.

Automatic defragmentation is disabled by default, so the Btrfs filesystem uses the noautodefrag mount option by default.

**3. compress and compress-force**

These options control data compression at the filesystem level.

The compress option compresses only the files that are worth compressing (where compressing the file saves disk space).

The compress-force option compresses every file on the filesystem, even when compression increases its size.

The Btrfs filesystem supports many compression algorithms, and each algorithm has several compression levels.

The compression algorithms supported by Btrfs are: lzo, zlib (levels 1 to 9), and zstd (levels 1 to 15).

You can specify which compression algorithm to use for the Btrfs filesystem with one of the following mount options:

- compress=algorithm:level
- compress-force=algorithm:level

For more information, see my article on how to enable Btrfs filesystem compression.
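For example, a mount using zstd compression at level 3, a common balance between speed and compression ratio, might look like this (the device and mount point are placeholders):

```bash
mount -o compress=zstd:3 /dev/sdx /mnt
```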
**4. subvol and subvolid**

These mount options are used to mount a specific subvolume of a Btrfs filesystem on its own.

The subvol mount option mounts a subvolume of a Btrfs filesystem by its relative path.

The subvolid mount option mounts a subvolume of a Btrfs filesystem by its subvolume ID.

For more information, see my article on how to create and mount Btrfs subvolumes.
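As a quick sketch of the difference between the two options (the subvolume ID 256 shown here is illustrative; check the real ID with btrfs subvolume list, and the mount points are placeholders):

```bash
btrfs subvolume list /mnt                 # e.g. "ID 256 gen 10 top level 5 path SubVol"
mount -o subvol=SubVol /dev/sdx /mnt/a    # mount by relative path
mount -o subvolid=256 /dev/sdx /mnt/b     # mount the same subvolume by ID
```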
**5. device**

The device mount option is used with multi-device Btrfs filesystems or Btrfs RAID.

In some cases the operating system may fail to detect the storage devices used in a multi-device Btrfs filesystem or Btrfs RAID. In those cases you can use the device mount option to specify the devices you want to use for the Btrfs multi-device filesystem or RAID.

You can use the device mount option multiple times to load different storage devices for the Btrfs multi-device filesystem or RAID.

You can use the device name (i.e. sdb, sdc) or the device's UUID, UUID_SUB, or PARTUUID with the device mount option to identify the storage device.

For example:

- device=/dev/sdb
- device=/dev/sdb,device=/dev/sdc
- device=UUID_SUB=490a263d-eb9a-4558-931e-998d4d080c5d
- device=UUID_SUB=490a263d-eb9a-4558-931e-998d4d080c5d,device=UUID_SUB=f7ce4875-0874-436a-b47d-3edef66d3424
**6. degraded**

The degraded mount option allows a Btrfs RAID to be mounted with fewer storage devices than the RAID profile requires.

For example, the raid1 profile requires 2 storage devices to be present. If one of the storage devices is unavailable for any reason, you can use the degraded mount option to mount the RAID even though only 1 of the 2 devices is available.

**7. commit**

The commit mount option sets the interval (in seconds) at which data is written to the storage device.

The default is 30 seconds.

To set the commit interval to 15 seconds, you can use the commit=15 mount option.
**8. ssd and nossd**

The ssd mount option tells the Btrfs filesystem that it is on an SSD, and Btrfs performs the necessary SSD optimizations.

The nossd mount option disables SSD optimization.

Btrfs automatically detects whether the filesystem is backed by an SSD. If an SSD is used, the ssd mount option is enabled; otherwise, the nossd mount option is enabled.

**9. ssd_spread and nossd_spread**

The ssd_spread mount option tries to allocate large contiguous chunks of unused space from the SSD. This feature improves the performance of low-end (cheap) SSDs.

The nossd_spread mount option disables the ssd_spread feature.

Btrfs automatically detects whether the filesystem is backed by an SSD. If an SSD is used, the ssd_spread mount option is enabled; otherwise, the nossd_spread mount option is enabled.

**10. discard and nodiscard**

If you are using an SSD that supports asynchronous queued TRIM (SATA rev 3.1), the discard mount option enables discarding of freed file blocks. This improves SSD performance.

If the SSD does not support asynchronous queued TRIM, the discard mount option hurts SSD performance. In that case, the nodiscard mount option should be used.

By default, the nodiscard mount option is used.
**11. norecovery**

If the norecovery mount option is used, the Btrfs filesystem will not attempt to run the data recovery operation at mount time.

**12. usebackuproot and nousebackuproot**

If the usebackuproot mount option is used, the Btrfs filesystem will try to recover any bad/corrupted tree root at mount time. Btrfs can store several tree roots in the filesystem; the usebackuproot option looks for a good tree root and uses the first good one it finds.

The nousebackuproot mount option will not check for or recover bad/corrupted tree roots at mount time. This is the default behavior of the Btrfs filesystem.
**13. space_cache, space_cache=version, nospace_cache, and clear_cache**

The space_cache mount option controls the free-space cache. The free-space cache improves performance by keeping the free space of the filesystem's block groups in memory (RAM).

The Btrfs filesystem supports 2 versions of the free-space cache: v1 (the default) and v2.

The v2 free-space cache mechanism improves the performance of large (multi-terabyte) filesystems.

You can use the space_cache=v1 mount option to select v1 of the free-space cache, and the space_cache=v2 mount option to select v2.

The clear_cache mount option clears the free-space cache.

Once the v2 free-space cache has been created, the cache must be cleared in order to create a v1 free-space cache.

So, to use the v1 free-space cache after the v2 cache has been created, the clear_cache and space_cache=v1 mount options must be combined: clear_cache,space_cache=v1

The nospace_cache mount option disables the free-space cache.

To disable the free-space cache after a v1 or v2 cache has been created, the nospace_cache and clear_cache mount options must be combined: clear_cache,nospace_cache
**14. skip_balance**

By default, an interrupted/paused balance operation on a multi-device Btrfs filesystem or Btrfs RAID resumes automatically as soon as the filesystem is mounted. To disable automatic resumption of an interrupted/paused balance operation, you can use the skip_balance mount option.

**15. datacow and nodatacow**

The datacow mount option enables the Copy-on-Write (CoW) feature of the Btrfs filesystem. It is the default behavior.

If you want to disable Copy-on-Write (CoW) for newly created files, mount the Btrfs filesystem with the nodatacow option.

**16. datasum and nodatasum**

The datasum mount option enables data checksumming for newly created files on the Btrfs filesystem. This is the default behavior.

If you do not want Btrfs to checksum the data of newly created files, mount the filesystem with the nodatasum option.
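To tie several of these options together, here is an illustrative /etc/fstab line; the UUID is a placeholder (look up the real one with blkid), and the option set shown is just one reasonable combination:

```bash
# /etc/fstab: mount a subvolume with compression, autodefrag and a 30 s commit interval.
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt  btrfs  subvol=SubVol,compress=zstd:3,autodefrag,commit=30  0  0
```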
Btrfs profiles

A Btrfs profile tells the Btrfs filesystem how many copies of the data/metadata to keep and which RAID levels to use for the data/metadata. The Btrfs filesystem has many profiles, and understanding them will help you set up a Btrfs RAID exactly the way you want.

The available Btrfs profiles are as follows:

single: If the single profile is used for the data/metadata, only one copy of the data/metadata is stored in the filesystem, even if you add multiple storage devices to it. So, 100% of each storage device's disk space can be used.

dup: If the dup profile is used for the data/metadata, each storage device added to the filesystem keeps two copies of the data/metadata. So, 50% of each storage device's disk space can be used.

raid0: In the raid0 profile, the data/metadata is striped evenly across all the storage devices added to the filesystem. In this setup there is no redundant (duplicated) data/metadata, so 100% of each storage device's disk space can be used. If any one of the storage devices fails, the whole filesystem is corrupted. You need at least two storage devices to set up a Btrfs filesystem with the raid0 profile.

raid1: In the raid1 profile, two copies of the data/metadata are stored across the storage devices added to the filesystem. In this setup, the RAID array can survive one drive failure, but you can use only 50% of the total disk space. You need at least two storage devices to set up a Btrfs filesystem with the raid1 profile.

raid1c3: In the raid1c3 profile, three copies of the data/metadata are stored across the storage devices added to the filesystem. In this setup, the RAID array can survive two drive failures, but you can use only 33% of the total disk space. You need at least three storage devices to set up a Btrfs filesystem with the raid1c3 profile.

raid1c4: In the raid1c4 profile, four copies of the data/metadata are stored across the storage devices added to the filesystem. In this setup, the RAID array can survive three drive failures, but you can use only 25% of the total disk space. You need at least four storage devices to set up a Btrfs filesystem with the raid1c4 profile.

raid10: In the raid10 profile, two copies of the data/metadata are stored across the storage devices, as in the raid1 profile, and the data/metadata is striped across the devices, as in the raid0 profile.

The raid10 profile is a hybrid of the raid1 and raid0 profiles. Some of the storage devices form raid1 arrays, and some of those raid1 arrays are used to form a raid0 array. In a raid10 setup, the filesystem can survive a single drive failure in each of the raid1 arrays.

You can use 50% of the total disk space in a raid10 configuration. You need at least four storage devices to set up a Btrfs filesystem with the raid10 profile.

raid5: In the raid5 profile, one copy of the data/metadata is striped across the storage devices. A single parity is computed and distributed across the array's storage devices.

In a raid5 setup, the filesystem can survive a single drive failure. If a drive fails, you can add a new drive to the filesystem, and the lost data will be computed from the distributed parity on the running drives.

You can use 100×(N−1)/N% of the total disk space in a raid5 configuration, where N is the number of storage devices added to the filesystem. You need at least three storage devices to set up a Btrfs filesystem with the raid5 profile.

raid6: In the raid6 profile, one copy of the data/metadata is striped across the storage devices. Two parities are computed and distributed across the array's storage devices.

In a raid6 setup, the filesystem can survive two drive failures at the same time. If a drive fails, you can add a new drive to the filesystem, and the lost data will be computed from the two distributed parities on the running drives.

You can use 100×(N−2)/N% of the total disk space in a raid6 configuration, where N is the number of storage devices added to the filesystem. You need at least four storage devices to set up a Btrfs filesystem with the raid6 profile.
-
@ 6e0ea5d6:0327f353
2025-04-19 15:09:18🩸
The world won’t stop and wait for you to recover. Do your duty regardless of how you feel. That’s the only guarantee you’ll end the day alright.
You’ve heard it before: “The worst workout is the one you didn’t do.” Sometimes you don’t feel like going to the gym. You start bargaining with laziness: “I didn’t sleep well… maybe I should skip today.” But then you go anyway, committing only to the bare minimum your energy allows. And once you start, your body outperforms your mind’s assumptions—it turns out to be one of the best workouts you’ve had in a long time. The feeling of following through, of winning a battle you were losing, gives you the confidence to own the rest of your day. You finally feel good.
And that wouldn’t have happened if you stayed home waiting to feel better. Guilt would’ve joined forces with discouragement, and you’d be crushed by melancholy in a victim mindset. That loss would bleed into the rest of your week, conditioning your mind: because you didn’t spend your energy on the workout, you’d stay up late, wake up worse, and while waiting to feel “ready,” you’d lose a habit that took months of effort to build.
When in doubt, just do your duty. Stick to the plan. Don’t negotiate with your feelings—outsmart them. “Just one page today,” and you’ll end up reading ten. “Only the easy tasks,” and you’ll gain momentum to conquer the hard ones. Laziness is a serpent—you win when you make no deals with it.
A close friend once told me that when he was at his limit during a second job shift, he’d open a picture on his phone—of a fridge or a stove he needed to buy for his home—and that image gave him strength to stay awake. That moment stuck with me forever.
Do you really think the world will have the same mercy on you that you have on yourself? Don’t be surprised when it doesn’t spare you. Move forward even while stitching your wounds: “If you wait for perfect conditions, you’ll never do anything.” (Ecclesiastes 11:4)
Thank you for reading, my friend!
If this message resonated with you, consider leaving your "🥃" as a token of appreciation.
A toast to our family!
-
@ 91bea5cd:1df4451c
2025-04-15 06:23:35Good password management should be simple and follow the Unix philosophy: organized hierarchically and easy to move from one computer to another.
That is why it is not advisable to use third-party applications that have access to your keys (passwords) on their servers, nor the browsers' native options, which also belong to big companies that put a great deal of effort into gaining access to our information.
Recommendation
- pass
- QtPass (graphical manager)
With pass, your data is encrypted using your GPG key and saved in files organized hierarchically in folders; it can be integrated with a git service of your choice or easily copied from one place to another.
Usage
Using it is quite simple.
Setup (first initialize the store with the ID of your GPG key, then enable git):

pass init <your-gpg-id>

pass git init
To view an entry:

pass Email/example.com

To copy to the clipboard (requires xclip):

pass -c Email/example.com

To insert:

pass insert Email/example0.com

To insert and generate a password:

pass generate Email/example1.com

To insert and generate a password without symbols:

pass generate --no-symbols Email/example1.com

To insert, generate a password, and copy it to the clipboard:

pass generate -c Email/example1.com

To remove:

pass rm Email/example.com
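To sync the store between machines through git, here is a minimal sketch; the remote URL is a placeholder for your own private repository, and the branch name may differ on your system:

```bash
pass git remote add origin git@example.com:me/password-store.git
pass git push -u origin master
# On another machine, clone the store back into place:
git clone git@example.com:me/password-store.git ~/.password-store
```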
-
@ 6538925e:571e55c3
2025-05-05 20:00:48It’s been a little while since we released a major design update, so we’re really excited to get this new version of the app into your hands. Here’s a breakdown of all the main updates included in Fountain 1.2:
#### Library Design Update
- New content-type filters at the top of the page make it easier to navigate between podcasts and music in your library.
- Recently Played is now the default view in your library, so it’s easier to jump back into podcasts you’ve already started.
- The Music filter now makes it easier to find saved tracks and albums, and it also gives you a list of all the artists whose music you’ve saved.
- We’ve refreshed the design of the content cards to make it easier to see how much time is remaining on episodes you’ve already started.
#### Content Pages Design Update
- All of the different content pages have undergone an extensive redesign, including shows, episodes, artists, albums, tracks, clips and playlists
- We’ve replaced the tab layout we were using on the content pages with one scrollable page, making it easier to access features like chapters and tracklists
- We’ve sanitised the formatting of show notes too, and if there is no activity for a given episode, we now display the expanded show notes
#### Episode Summaries
Ever looked at a 4-hour Lex Fridman episode and wished you could just read a high-level summary? We certainly have, so we did something about it.
- Every episode page now has a Summary button above the show notes.
- Simply pay 500 sats to unlock a summary, or upgrade to Fountain Premium for $2.99/month to enjoy unlimited summaries.
- Summaries and transcripts now come as a bundle — two for the price of one!
- Thanks to major improvements, they’re now faster, cheaper, and more accurate than ever before.
#### Playback Improvements
We’ve completely rebuilt our audio engine from the ground up. Playback is now more robust and reliable — especially for music. Here are some of the key enhancements in Fountain 1.2:
- Tracks now load and play instantly when tapped.
- When playing a collection of tracks (e.g. from an artist, album, or playlist), you can now skip seamlessly between them.
- We’ve replaced the scrollable player page with full-screen modals to make it easier to access show notes, comments, transcripts, chapters, tracklists, and your queue.
- The new Smart Resume feature rewinds the episode by 5 seconds when you hit pause, so you don’t miss a beat.
- You can now skip forward or backward by 60 seconds for faster navigation through episodes.
#### Other Bug Fixes & Improvements
- Rebuilt payment stats for more complete and reliable transaction records.
- Refreshed the design of the Settings pages for better usability.
- Added new episode notification preferences in Settings.
- Fixed several playback issues that were causing crashes or freezes.
- Updated lock screen display and controls for livestreams.
- Fixed issue where the next item in the queue paused unexpectedly.
- Resolved playback stuttering on Android during livestreams.
- Fixed disappearing playback controls on the lock screen.
- Fixed playback speed not updating correctly.
- Resolved issue where played episodes couldn’t be replayed.
- Fixed playback not resuming correctly when listening in the car.
- Synced car playback position with the device.
- Fixed persistent car display refresh issue.
- Fixed volume control via car controls.
- Resolved issue with headphone controls after playing a transcript.
- Fixed disappearing metadata on the lock screen.
- Fixed bug where downloaded episodes stopped in airplane mode but showed as playing.
We would love to hear how you’re finding Fountain 1.2. Please submit your thoughts and feedback via the main menu in the app and we will take it on board as we continue to improve the app.
If you want to help test new features out before they get released, you can join Fountain Beta on Telegram. All iOS and Android users welcome.
-
@ 6e0ea5d6:0327f353
2025-04-19 15:02:55My friend, let yourself be deluded for a moment, and reality will see to it that your fantasy is shattered—like a hammer crushing marble. The real world grants no mercy; it will relentlessly tear down your aspirations, casting them into the abyss of disillusionment and burying your dreams under the unbearable weight of your own expectations. It’s an inescapable fate—but the outcome is still in your hands: perish at the bottom like a wretch or turn the pit into a trench.
Davvero, everyone must eventually face something that breaks them. It is in devastation that man discovers what he is made of, and in the silence of defeat that he hears the finest advice. Yet the weak would rather embrace the convenient lie of self-pity, blaming life for failures that are, in truth, the result of their own negligence and cowardly choices. If you hide behind excuses because you fear the painful truth, know this: the responsibility has always been yours.
Ascolta bene! Just remain steadfast, even when everything feels like an endless maze. The difficulties you face today—those you believe you’ll never overcome—will one day seem insignificant under the light of time and experience. Tomorrow, you’ll look back and laugh at yourself for ever letting these storms seem so overwhelming.
Now, it’s up to you to fight your own battle—for the evil day spares no one. Don’t let yourself be paralyzed by shock or bow before adversity. Be strong and of good courage—not as one who waits for relief, but as one prepared to face the inevitable and turn pain into glory.
Thank you for reading, my friend!
If this message resonated with you, consider leaving your "🥃" as a token of appreciation.
A toast to our family!
-
@ ba3b4b1d:eadff0d3
2025-05-09 23:49:24The Aisling is a literary genre that became popular in Ireland in the 17th and 18th centuries, after the turmoil of the Williamite War, blending elements of folklore, nationalism, and romanticism. It encapsulated a longing for identity and freedom, using the figure of a beautiful woman as a powerful symbol of hope and aspiration.
But the history of the Aisling does not begin there, and one of its origins is the very tangled tale of Caer Ibormeith, also known as Coerabar Boeth.
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/ba3b4b1d1c5cc2707f71924368a7adcded1829921c86e022f1cb8117eadff0d3/files/1746834499215-YAKIHONNES3.jpg)
She was the daughter of Etal Anbuail, a Faerie king of the Tuatha Dé Danann, the "peoples of the goddess Danu", and a filidh, a name later given to the poets of Ireland with prophetic gifts, since Ibormeith was the goddess of prophecies, somewhat like what we would call a "bard". Every Samhain, October 31st, when the World of the Living drew close to the World of the Dead, she turned into a swan, remaining so until the following Samhain, when she returned to her human form. In one version of the tale, Ibormeith, whether after consultation or on her own initiative, falls in love with Óengus. One night she appears in his dreams playing the most beautiful melody, and when he wakes he knows he is in love, but he does not know who that beautiful woman is or where to find her. He becomes utterly lovesick, and with each passing day he thinks only of dreaming of her again; every night she returns to him in the World of Dreams. One day Óengus loses his appetite and falls ill. A physician was summoned. He guessed the cause of Óengus's illness and told Bóann, Óengus's mother, to find the young woman from his dreams. Bóann searched for a year, without success. They then asked his father, the Dagdha, for help; he too searched for a year without success. His brother Bodb was then asked to find a place matching the description from the dreams. But her father, Etal, would not grant his daughter's hand to just anyone, and revealed that to win her, Óengus would have to wait for the Loch Bél Dragain, the "Lake of the Dragon's Mouth", find her there on the morning of Samhain, and call her by name. On that day there were 150 swans* on the lake, bearing silver chains, and Óengus began to panic at not finding his Ibormeith, but then he noticed one swan more beautiful than the others, with chains of gold and silver, and when he said her name she revealed her human form and asked him why he had taken so long.
Afterwards, they turned into swans and sang a song so beautiful that humans slept for three days, and Óengus became known as the god of love and youth.
*In Irish mythology, the number 150 often refers to the Fifty Warriors of the Tuatha Dé Danann, and the figure of the swan is a recurring one, also used in the works of Hans Christian Andersen and Tchaikovsky. Other versions of the tale speak of birds rather than swans, or the story unfolds slightly differently; every telling of a tale adds something to it.
-
@ 2b24a1fa:17750f64
2025-05-09 19:39:46From the "Kleines Erste-Hilfe-Büchlein gegen Propaganda" (Little First-Aid Booklet against Propaganda), 2023: "Normally you only hear the word 'propaganda' in mainstream discourse when it is about things other countries do to their own citizens or that are part of foreign influence operations, even though in the overwhelming majority of cases in which we have experienced propaganda in our daily lives, the call came from inside our own house."
https://soundcloud.com/radiomuenchen/helge-buttkereit-medienkritik-als-trotziger-idealismus?
In the episode, she documents several glaring examples from the US media. In Germany, too, the mass media lack the necessary distance from the government and its values narrative, which is all the more dangerous since they hold the "power of definition". Even if we do not consume these media ourselves, we can be sure that the great majority of citizens consult them, and that their view of the world is thereby massively shaped. "When we switch on the Tagesschau, we learn nothing about 'reality'. Rather, we learn who has managed to inscribe their view of reality into the propaganda matrix," reveals media scholar Michael Meyen in his book "Die Propaganda-Matrix". Yet in public perception the press passes for the "fourth estate" of democracy. Its task would accordingly be to keep the increasingly intertwined branches of state power (legislative, executive, and judiciary) in check. The reality looks different. And even the media criticism voiced more and more often these days overlooks the fact that the "fourth estate" was never intended by those in power to serve that function, since the media have always been, and remain, in the hands of governments and corporations, writes our author Helge Buttkereit. Listen to his two-part piece "Medienkritik als trotziger Idealismus" ("Media Criticism as Defiant Idealism"), first published by the magazine Multipolar: multipolar-magazin.de/artikel/medien…tik-idealismus
Speaker: Karsten Troyke
Image collage: Radio München
-
@ 3ffac3a6:2d656657
2025-04-15 14:49:31🏅 How to Create an Epic Badge on Nostr with `nak` + badges.page

Requirements:
- Have `nak` installed (https://github.com/fiatjaf/nak)
- Have a Nostr private key (`nsec...`)
- Access to the site https://badges.page
- An active relay (e.g. `wss://relay.primal.net`)

🔧 Step 1 — Create the badge on badges.page

1. Go to https://badges.page
2. Click "New Badge" in the top-right corner
3. Fill in the fields:
   - Name (e.g. `Teste Épico`)
   - Description
   - Image and thumbnail
4. After creating it, you will be redirected to the badge's page.

🔍 Step 2 — Copy the badge's `naddr`

In the address bar, copy the identifier that appears after `/a/`; this is the naddr of your badge.

Example:

nostr:naddr1qq94getnw3jj63tsd93k7q3q8lav8fkgt8424rxamvk8qq4xuy9n8mltjtgztv2w44hc5tt9vetsxpqqqp6njkq3sd0

Copy:

naddr1qq94getnw3jj63tsd93k7q3q8lav8fkgt8424rxamvk8qq4xuy9n8mltjtgztv2w44hc5tt9vetsxpqqqp6njkq3sd0

🧠 Step 3 — Decode the naddr with `nak`

Open your terminal (or Cygwin on Windows) and run:

```bash
nak decode naddr1qq94getnw3jj63tsd93k7q3q8lav8fkgt8424rxamvk8qq4xuy9n8mltjtgztv2w44hc5tt9vetsxpqqqp6njkq3sd0
```

You will see something like this:

```json
{
  "pubkey": "3ffac3a6c859eaaa8cdddb2c7002a6e10b33efeb92d025b14ead6f8a2d656657",
  "kind": 30009,
  "identifier": "Teste-Epico"
}
```

Take note of the `"identifier"` field, in this case: `Teste-Epico`

🛰️ Step 4 — Query the event on the relay

Now let's fetch the badge event from the relay:

```bash
nak req -d "Teste-Epico" wss://relay.primal.net
```

You will see the full content of the badge event, something like:

```json
{ "kind": 30009, "tags": [["d", "Teste-Epico"], ["name", "Teste Épico"], ...] }
```

💥 Step 5 — Mine the event as "epic" (PoW 31)

Now comes the magic: mining with proof-of-work (PoW 31) so that the badge is classified as epic!

```bash
nak req -d "Teste-Epico" wss://relay.primal.net | nak event --pow 31 --sec nsec1SEU_NSEC_AQUI wss://relay.primal.net wss://nos.lol wss://relay.damus.io
```

This command:
- Retrieves the original event
- Generates a new one with PoW difficulty 31
- Signs it with your private key (`nsec`)
- And publishes it to the relays wss://relay.primal.net, wss://nos.lol, and wss://relay.damus.io

⚠️ Replace `nsec1SEU_NSEC_AQUI` with your own Nostr private key.

✅ Result

If everything goes well, the badge will be updated with a higher-PoW event and will show up as "Epic" on the site!
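To sanity-check the result yourself, note that NIP-13 defines difficulty as the number of leading zero bits in the event id. A minimal sketch of that check (plain Python; the example id is hypothetical, and in practice you would paste the id printed by `nak req`):

```python
def pow_difficulty(event_id_hex: str) -> int:
    """Count leading zero bits in a 64-char hex Nostr event id (NIP-13)."""
    count = 0
    for ch in event_id_hex:
        nibble = int(ch, 16)
        if nibble == 0:
            count += 4                        # a zero hex digit is four zero bits
        else:
            count += 4 - nibble.bit_length()  # partial zero bits, then stop
            break
    return count

# Hypothetical event id mined at --pow 31; expect a difficulty >= 31.
example_id = "00000001" + "ab" * 28
print(pow_difficulty(example_id))  # 31
```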
-
@ 2b24a1fa:17750f64
2025-05-09 19:29:22Congratulations on yet another memorable Europe Day, which we are marking with a manifesto from the European Peace Project. All across Europe today, institutions, associations, and citizens are taking part: at 5 p.m. they will open their windows and read this manifesto out loud. If you can't do that, or would rather not, perhaps tune in to Radio München:
https://soundcloud.com/radiomuenchen/european-peace-project-das-manifest-heute-17-uhr?
-
@ 51faaa77:2c26615b
2025-05-04 17:52:33There has been a lot of debate about a recent discussion on the mailing list and a pull request on the Bitcoin Core repository. The two main points are whether a mempool policy regarding OP_RETURN outputs should be changed, and whether there should be a configuration option for node operators to set their own limit. There has been some controversy about the background and context of these topics, and people are looking for more information. Please ask short (preferably one-sentence) questions as top comments in this topic. @Murch, and maybe others, will try to answer them in a couple of sentences. @Murch and I have collected a few questions that we have seen being asked, to start us off, but please add more as you see fit.
originally posted at https://stacker.news/items/971277
-
@ dc4cd086:cee77c06
2024-10-18 04:08:33Have you ever wanted to learn from lengthy educational videos but found it challenging to navigate through hours of content? Our new tool addresses this problem by transforming long-form video lectures into easily digestible, searchable content.
Key Features:
Video Processing:
- Automatically downloads YouTube videos, transcripts, and chapter information
- Splits transcripts into sections based on video chapters
Content Summarization:
- Utilizes language models to transform spoken content into clear, readable text
- Formats output in AsciiDoc for improved readability and navigation
- Highlights key terms and concepts with [[term]] notation for potential cross-referencing
Diagram Extraction:
- Analyzes video entropy to identify static diagram/slide sections (see the sketch after this list)
- Provides a user-friendly GUI for manual selection of relevant time ranges
- Allows users to pick representative frames from selected ranges
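To make the entropy idea concrete, here is a rough sketch of the kind of per-frame measure involved. This is my own illustrative reconstruction, not the tool's actual code; it assumes OpenCV and NumPy, and `lecture.mp4` is a placeholder input:

```python
import cv2
import numpy as np

def frame_entropy(gray_frame: np.ndarray) -> float:
    """Shannon entropy of an 8-bit grayscale frame's intensity histogram."""
    hist = cv2.calcHist([gray_frame], [0], None, [256], [0, 256]).ravel()
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

cap = cv2.VideoCapture("lecture.mp4")  # hypothetical input file
entropies = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    entropies.append(frame_entropy(gray))
cap.release()

# Long runs where this signal stays nearly constant are good candidates
# for static diagram/slide sections, which the GUI then lets you refine.
```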
Going Forward:
The tool is currently undergoing a rewrite to improve organization and functionality, but you are welcome to try the current version, though it might not work on every machine. It will support multiple open and closed language models, giving users a choice. It is free and open-source, allowing for personal customization and integration with various knowledge bases. Even if something is not on our official Alexandria knowledge base, you are still welcome to use the tool with your own personal or community knowledge bases! We want to help find connections between ideas that exist across relays, allowing individuals and groups to mix and match knowledge bases with each other, with any degree of openness you care to have.
While designed with #Alexandria users in mind, it's available for anyone to use and adapt to their own learning needs.
Screenshots
Frame Selection
This is a screenshot of the frame selection interface. You'll see a signal that represents frame entropy over time. The vertical lines indicate the start and end of a chapter. Within a chapter, you select frames by clicking and dragging the mouse over the range where you think a diagram appears in that chapter. At the bottom is an option that tells the program to select a specific number of frames from that selection.
Diagram Extraction
This is a screenshot of the diagram extraction interface. For every selection you've made, there will be a set of frames that you can choose from. You can select and deselect as many frames as you'd like to save.
Links
- repo: https://github.com/limina1/video_article_converter
- Nostr Apps 101: https://www.youtube.com/watch?v=Flxa_jkErqE
Output
And now, we have a demonstration of the final result of this tool, with some quick cleaning up. The video we will be using this tool on is titled Nostr Apps 101 by nostr:npub1nxy4qpqnld6kmpphjykvx2lqwvxmuxluddwjamm4nc29ds3elyzsm5avr7 during Nostrasia. The following thread is an analog to the modular articles we are constructing for Alexandria, and I hope it conveys the functionality we want to create in the knowledge space. Note, this tool is the first step! You could use a different prompt that is most appropriate for the specific context of the transcript you are working with, but you can also manually clean up any discrepancies that don't portray the video accurately.
nostr:nevent1qvzqqqqqqypzp5r5hd579v2sszvvzfel677c8dxgxm3skl773sujlsuft64c44ncqy2hwumn8ghj7un9d3shjtnyv9kh2uewd9hj7qgwwaehxw309ahx7uewd3hkctcpzemhxue69uhhyetvv9ujumt0wd68ytnsw43z7qghwaehxw309aex2mrp0yhxummnw3ezucnpdejz7qgewaehxw309aex2mrp0yh8xmn0wf6zuum0vd5kzmp0qqsxunmjy20mvlq37vnrcshkf6sdrtkfjtjz3anuetmcuv8jswhezgc7hglpn
Or view on Coracle nostr:nevent1qqsxunmjy20mvlq37vnrcshkf6sdrtkfjtjz3anuetmcuv8jswhezgcppemhxue69uhkummn9ekx7mp0qgsdqa9md83tz5yqnrqjw07hhkpmfjpkuv9hlh5v8yhu8z274w9dv7qnnq0s3
-
@ 8947a945:9bfcf626
2024-10-17 08:06:55Hello everyone on Nostr, as well as my watchers and followers from DeviantArt and other art platforms.

Since the beginning of 2024 I have been using AI to generate artwork of anime girl characters, and I opened exclusive content for those who especially enjoy my work.

I post all of my work on DeviantArt and have been steadily, gradually building a follower base there. Everything has been growing at its own pace; personally, I regard it as one of my online business portfolios.

On September 16, 2024, a follower sent me a private message saying they loved my work and wanted to buy a piece, but only as an NFT, offering a very high price per piece. After that, the buyer and I continued the conversation over email.

Here is a short summary of how the negotiation went.

(From here on I will call the buyer a "scammer", because the cards are face up now: they were a fraudster.)

- The first scammer chose the works they wanted to buy and offered a very high price, but only on an NFT marketplace website of their choosing, running on ERC20. I had a look at the site and something felt off. Anyone wanting to list work had to register an account with an email address before they could link a wallet such as MetaMask, and once linked, the wallet could not be changed. At the time I was using a wallet that was not linked to my hardware wallet; I tried switching wallets back and forth and it simply didn't work, and even after logging out, the same wallet address remained. That was the first oddity. On top of that, the site's minting fee was 0.15–0.2 ETH... in baht, that is outrageously expensive.

- The first scammer kept trying to coax and reassure me: don't worry, they'd come buy my work right away; just mint it and tell them, and they'd purchase immediately. Once it sold at a profit I would get the gas fee back and make a profit on top, so there was nothing to lose, right? Luckily, I had no spare funds to buy ETH at that point, so I countered with the following:

- I proposed sending my artwork in low resolution first, in exchange for them sending me the ETH for the minting fee; once I had the ETH, I would upscale the work and email it to them, good faith for good faith... They refused.
- I offered to sell through my online shop on buymeacoffee, paid in USD... They refused.
- I offered to sell via a PPV Lightning invoice, which I can issue as a creator on Creatr... They refused.
- I told them: then wait until my salary comes in. They said OK.

The following week a second scammer contacted me, using much the same approach but a different website, and offering an even higher price than the first. The second site was worse than the first: you had to register with an email address and could not link MetaMask at all. After signing up you received an empty wallet, and you had to transfer ETH into that wallet first to cover the 0.2 ETH minting fee.

I told the second scammer they would have to wait, because I was in the middle of a deal with the first buyer and was waiting for money to buy ETH as working capital. This person asked me to send them the first website so they could have a look. Not long after, they warned me that the first site was a scam and that money could not be withdrawn from it. They even sent screenshots of chats with victims of the first site who could not withdraw. Not content with that, they also bad-mouthed OpenSea, claiming that customers there could sell their work but could not withdraw the money.

"You can't withdraw from OpenSea": that was what really set off my alarm bells, because on OpenSea users connect their wallet directly to the marketplace. Trades settle wallet-to-wallet; money flows straight in and out of each person's own wallet. OpenSea only charges a platform fee and never holds customers' money. Besides, gas fees this year are far lower than in the 2020 bull-run cycle, currently around 0.0001 ETH (still pricier than BTC, mind you).

So I took the matter to Phi Bit for advice, but an admin spoke with me instead. The admin said no one in the community had ever asked about this before; my question was the first. But their view matched my hypothesis: it was most likely a scam. At the same time I asked in a Thai NFT community page and got clear confirmation: it is a scam, and quite a few people have been taken in by it. Once I knew what I was dealing with, I decided to play mind games with both scammers, to see for certain whether they were fraudsters.

So, on September 30, I wound both of them up by minting the very pieces they had offered to buy, on OpenSea, and messaging them:

"Minted it for you, but I really didn't have enough money, sorry, so I minted it on OpenSea instead. I'm broke; OpenSea is as far as I can go. Better hurry and buy it, plenty of people are eyeing my work. I'm not even charging a royalty fee, so resell it without sharing any profit with me."

And with that, the psychological warfare began, but they were cornered and had to eat their own words. The best exchange:

Them: Hey, I waited for you. I told my teammates we'd get the piece on Monday, September 30. They saw your work, it really is beautiful, so they put in the full 9.3 ETH (+ a screenshot of the balance) just for this.
Me: Oh really... then show me the wallet address with that transaction.
Them: 2 ETH is 5,000 USD, you know.
Me: So what? Show me the wallet address holding the 9.3 ETH. You said the money was ready and waiting; show me when it was deposited. ...Just the address, mind; don't get cheeky and send me a seed.
Them: (sends the same 9.3 ETH screenshot again)
Me: A screenshot means nothing; it's trivially easy to fake. Show me the transaction hash. Didn't you say you had 9.3 ETH set aside and were dying to buy my work? Either show me the wallet address, or lend me 0.15 ETH for minting; then you buy at 2 ETH and I'll pay the 0.15 ETH back. Are you buying or not?
Them: What do you want my address for?
Me: We're done here. This is tiresome. I'm not selling to you.
Them: 2 ETH = 5,000 USD!
Me: So what?

I am writing this piece to warn all of you, in case anyone is building a portfolio selling digital art online and gets as "lucky" as I did.

Why am I confident this is fraud, and what do the scammers gain?

First, consider OpenSea, the NFT marketplace with the highest trading volume. It does not hold would-be buyers' and sellers' funds; money moves directly between the buyer's and seller's wallets, and the platform only takes a fee, a fee that is much lower now than in 2020. So why would anyone list their work on some other NFT site whose fees are a hundred times higher?

I believe the scammers part artists from their money by playing on greed and inexperience. The moment an artist transfers ETH into the site's wallet, or pays the minting fee, that money lands straight in the scammer's pocket, and the tricks are sure to follow: withdrawals fail, purchases fail, you must deposit more to "unlock the smart contract", and so on. These lowlifes play on people's greed by dangling absurdly high offers... and to be fair, in the NFT world some images with no artistic merit whatsoever have sold for 100–150 ETH, so an artist trying to make a name might well think that buyers offering 2–4 ETH per piece is plausible (frankly, it is already alarmingly high).

In the world of BTC, you don't have to trust anyone: you can send money to one another and settle the books without trust.

In the world of ETH, "code is law": the smart contract is right there, so go read it; it is not that hard to understand. Relying on another person's promises instead is simply irrational.

I told this story to some art communities and got a mix of good and bad reactions. Some people insisted this sort of thing could never touch them, because they are determined to keep their art strictly out of the digital-currency world, and I respect that view. But wouldn't it be better to keep our eyes and ears open to the technology, especially digital currencies and blockchain? Getting scammed there once can wipe you out even more easily than with fiat money.

I wanted to share this, and I would appreciate you passing it on to people you know, so everyone can stay on guard.

Note

- Both of the cybersecurity illustrations here are my own work, made by me and for sale on Adobe Stock.
- My other account, "HikariHarmony" (npub1exdtszhpw3ep643p9z8pahkw8zw00xa9pesf0u4txyyfqvthwapqwh48sw), is gradually bringing my work from the outside world onto Nostr. I intend to create art there, so friends who enjoy my work won't have to go looking anywhere else.

My works: - Anime girl fanarts: HikariHarmony - HikariHarmony on Nostr - General art: KeshikiRakuen - KeshikiRakuen may become my third Nostr account, if I can manage it.
-
@ b2caa9b3:9eab0fb5
2025-05-04 08:20:46Hey friends,
Exciting news – I’m currently setting up my very first Discord server!
This space will be all about my travels, behind-the-scenes stories, photo sharing, and practical tips and insights from the road. My goal is to make it the central hub connecting all my decentralized social platforms where I can interact with you more directly, and share exclusive content.
Since I’m just starting out, I’d love to hear from you:
Do you know any useful RSS-feed integrations for updates?
Can you recommend any cool Discord bots for community engagement or automation?
Are there any tips or features you think I must include?
The idea is to keep everything free and accessible, and to grow a warm, helpful community around the joy of exploring the world.
It’s my first time managing a Discord server, so your experience and suggestions would mean a lot. Leave a comment – I’m all ears!
Thanks for your support, Ruben Storm
-
@ bbef5093:71228592
2025-05-09 17:01:36France-based Newcleo, the Italian design house Pininfarina, and the Italian shipbuilder Fincantieri have presented a full-scale model of the TL-40, a next-generation nuclear reactor intended for ships, at the 19th Venice Architecture Biennale[1][2][5][6][7].

Key features of the TL-40 reactor

- A Generation IV, lead-cooled small modular reactor (SMR), developed to propel large ships[1][2][5][6].
- Compact, safe, and sustainable: the reactor uses passive safety systems that rule out nuclear accidents through the laws of physics[2][5][6].
- Recycling of nuclear waste: the TL-40 can run on reprocessed nuclear waste as fuel, contributing to energy production and waste reduction at the same time[1][2][5][6].
- Design and industrial background: Pininfarina handled the design and the reactor's aesthetics, while Fincantieri contributed its shipbuilding expertise[1][2][5][6][7].
- Open configuration: the reactor's shell lets visitors explore the technology from the inside, interactively[1][2][5].

Technological and environmental significance

- Decarbonization: the aim of the development is to apply nuclear energy in the shipping sector as a means of cutting carbon emissions[1][2][5][6].
- Making use of Europe's waste: the nuclear waste currently stored in Europe would be enough to cover the continent's electricity needs for centuries if it were used in reactors of this type[2][5][6].
- A creative approach: the project seeks to improve the public perception of nuclear energy by showing that it can be safe, aesthetic, and sustainable all at once[1][5][6][7].

Collaboration and future plans

- Ongoing cooperation: Newcleo and Fincantieri have been working together on marine applications since 2023[2][5][6][7].
- Research and development: Pininfarina is also involved in designing Newcleo's nuclear fuel R&D center in Chusclan, France[2][5][6][7].
- First demonstration reactor: Newcleo's goal is to present its first demonstration reactor by 2031 and bring it to market by 2033[8].
- Land-based application: the company has already begun preparing the site for its land-based, lead-cooled LFR-AS-30 reactor in France (Chinon region)[8].
- European support: in October 2024 the LFR design was added to the European Industrial Alliance's list of SMRs recommended for support[8].

Quotes from the project's leaders

"This is not our grandfathers' reactor. It is sustainable, forward-looking technology built for the real world and the challenges of climate change."
(Stefano Buono, CEO of Newcleo)[1][5][6][7]

"Design plays a strategic role in the development of sustainable energy solutions, ensuring that social and environmental considerations are fully integrated into the innovation process."
(Silvio Angori, CEO of Pininfarina)[6][7]

References:
[1] newcleo, Pininfarina and Fincantieri unveil the future of clean ... https://www.maddyness.com/uk/2025/05/08/newcleo-pininfarina-and-fincantieri-unveil-the-future-of-clean-energy-at-the-venice-biennale/
[2] Newcleo, Pininfarina, and Fincantieri showcase next-generation ... https://www.shippax.com/en/news/newcleo-pininfarina-and-fincantieri-showcase-next-generation-nuclear-power-solution-for-potential-marine-applications.aspx
[3] Newcleo and Fincantieri unveil lead-cooled fast nuclear reactor for ... https://www.bairdmaritime.com/shipping/newcleo-and-fincantieri-unveil-lead-cooled-fast-nuclear-reactor-for-maritime-applications
[4] Fincantieri Launches Nuclear-Propulsion Study With Newcleo https://maritime-executive.com/article/fincantieri-joints-nuclear-propulsion-study-with-newcleo
[5] Industry Partners Showcase Next-Gen Nuclear Reactor For https://www.marinelink.com/news/industry-partners-showcase-nextgen-525512
[6] Fincantieri, newcleo and Pininfarina reveal the look of new nuclear ... https://www.fincantieri.com/en/media/press-releases/2025/fincantieri-newcleo-pininfarina-at-the-venice-biennale/
[7] Fincantieri S p A : , newcleo and Pininfarina reveal the look of new ... https://www.marketscreener.com/quote/stock/FINCANTIERI-S-P-A-16796084/news/Fincantieri-S-p-A-newcleo-and-Pininfarina-reveal-the-look-of-new-nuclear-power-at-the-19th-Inter-49877851/
[8] Pininfarina - NucNet | Independent Nuclear News https://www.nucnet.org/news/tagged/Pininfarina
[9] newcleo, Fincantieri and RINA working together on feasibility study ... https://www.rina.org/en/media/press/2023/07/25/newcleo-fincantieri-rina
[10] Fincantieri, RINA team up with newcleo to test feasibility of nuclear ... https://www.offshore-energy.biz/fincantieri-rina-team-up-with-newcleo-to-test-feasibility-of-nuclear-naval-propulsion/
[11] newcleo #pininfarina #fincantieri #biennalearchitettura2025 ... https://www.linkedin.com/posts/newcleo_newcleo-pininfarina-fincantieri-activity-7295080775878647810-6i6X
[12] A Fourth-Generation Reactor at the Venice Architecture Biennale https://www.theplan.it/eng/whats_on/a-reactor-fourth-generation-pininfarina-newcleo-fincantieri
-
@ 32092ec1:8e9fd13a
2025-05-09 16:13:34Bitcoin controversies are not new; in fact, bitcoin's past has been riddled with one controversy after another. From the viewpoint of a financial investor, controversies are not good. They drive investor fear by amplifying uncertainty and introducing doubt about the future viability of the project. Although bitcoin is an investable asset, and there are certainly individuals, companies and countries who are investing in bitcoin strictly as a new, diversified financial asset that shows growth characteristics the likes of which humanity has rarely seen, bitcoin is not exclusively a financial instrument. Bitcoin is a network that is only as strong as the people running the miners who create the blocks, the nodes that propagate the blocks, and the users who hold the keys.
In this post I do not care to share my view on any of the latest bitcoin controversies, whether that be about OP_RETURNs, MSTR, Blackrock, MSTY, inscriptions, soft fork proposals, mempools, bitcoin strategic reserves, 21, chain surveillance, on-chain privacy, shitcoin projects, or anything else that anyone cares to opine about on Nostr or X to signal their support or opposition for what they think is critical to the success or failure of bitcoin. What I do care to share here is the ONLY winning strategy to defend bitcoin, no matter what your opinion of its future: USE THE TOOLS, RUN THE CODE, HOLD YOUR KEYS, TAKE RESPONSIBILITY.
In my humble opinion, the biggest problem with the current batch of bitcoin controversies is that so many people are sharing their opinions and expressing their fears about the future of bitcoin who are not even using the tools; they aren’t running the code, supporting the network or even holding their own keys. Everyone is free to share their opinions about the future of bitcoin and allowing everyone a voice is critical if we are serious about adopting the best ideas. Many visionaries have participated in the growth and development of bitcoin up to this point; we are truly standing on the shoulders of giants.
But before jumping to conclusions and spreading opinions about whether people are good-faith or bad-faith actors in the bitcoin space, I believe we should all hold ourselves to a certain standard. Set a minimum threshold of education for yourself before you jump on one side or the other of the latest controversy. We have tools that offer a level of sovereignty that is unparalleled in the legacy financial world; are you using them?
Don't like what MSTR is doing? Hold your keys in cold storage, then share your opinion.
Think Blackrock wants to use Bitcoin to launch a dystopian new world order agenda? Tell everyone to run a node, implement on-chain privacy strategies and hold their keys in cold storage.
Are Bitcoin developers making changes you don’t like? Use an alternative implementation, get a Bitaxe and construct your own block templates; for almost everyone this will be a stronger signal than shitposting or spreading FUD on the socials.
If you really want to defend bitcoin, think about how you are using it today. Who are you trusting to put your transactions into a block? Who are you trusting to propagate your transactions across the network? Who are you trusting to hold your keys? Who estimates your transaction fees? If you don’t trust the people who you are currently depending on, move your trust elsewhere, or increase your sovereignty and rely on yourself for some, or all, of these functions.
We make bitcoin what it is today, and we will define what bitcoin will be in the future. If we want to defend bitcoin it is our responsibility to keep it as decentralized as possible and that starts with every individual using the tools, running the code, doing the research and taking responsibility for their money.
895982
-
@ 91bea5cd:1df4451c
2025-04-15 06:19:19O que é Tahoe-LAFS?
Bem-vindo ao Tahoe-LAFS_, o primeiro sistema de armazenamento descentralizado com
- Segurança independente do provedor * .
Tahoe-LAFS é um sistema que ajuda você a armazenar arquivos. Você executa um cliente Programa no seu computador, que fala com um ou mais servidores de armazenamento em outros computadores. Quando você diz ao seu cliente para armazenar um arquivo, ele irá criptografar isso Arquivo, codifique-o em múltiplas peças, depois espalhe essas peças entre Vários servidores. As peças são todas criptografadas e protegidas contra Modificações. Mais tarde, quando você pede ao seu cliente para recuperar o arquivo, ele irá Encontre as peças necessárias, verifique se elas não foram corrompidas e remontadas Eles, e descriptografar o resultado.
O cliente cria mais peças (ou "compartilhamentos") do que acabará por precisar, então Mesmo que alguns servidores falhem, você ainda pode recuperar seus dados. Corrompido Os compartilhamentos são detectados e ignorados, de modo que o sistema pode tolerar o lado do servidor Erros no disco rígido. Todos os arquivos são criptografados (com uma chave exclusiva) antes Uploading, então mesmo um operador de servidor mal-intencionado não pode ler seus dados. o A única coisa que você pede aos servidores é que eles podem (geralmente) fornecer o Compartilha quando você os solicita: você não está confiando sobre eles para Confidencialidade, integridade ou disponibilidade absoluta.
O que é "segurança independente do provedor"?
Todo vendedor de serviços de armazenamento na nuvem irá dizer-lhe que o seu serviço é "seguro". Mas o que eles significam com isso é algo fundamentalmente diferente Do que queremos dizer. O que eles significam por "seguro" é que depois de ter dado Eles o poder de ler e modificar seus dados, eles tentam muito difícil de não deixar Esse poder seja abusado. Isso acaba por ser difícil! Insetos, Configurações incorretas ou erro do operador podem acidentalmente expor seus dados para Outro cliente ou para o público, ou pode corromper seus dados. Criminosos Ganho rotineiramente de acesso ilícito a servidores corporativos. Ainda mais insidioso é O fato de que os próprios funcionários às vezes violam a privacidade do cliente De negligência, avareza ou mera curiosidade. O mais consciencioso de Esses prestadores de serviços gastam consideráveis esforços e despesas tentando Mitigar esses riscos.
O que queremos dizer com "segurança" é algo diferente. * O provedor de serviços Nunca tem a capacidade de ler ou modificar seus dados em primeiro lugar: nunca. * Se você usa Tahoe-LAFS, então todas as ameaças descritas acima não são questões para você. Não só é fácil e barato para o provedor de serviços Manter a segurança de seus dados, mas na verdade eles não podem violar sua Segurança se eles tentaram. Isto é o que chamamos de * independente do fornecedor segurança*.
Esta garantia está integrada naturalmente no sistema de armazenamento Tahoe-LAFS e Não exige que você execute um passo de pré-criptografia manual ou uma chave complicada gestão. (Afinal, ter que fazer operações manuais pesadas quando Armazenar ou acessar seus dados anularia um dos principais benefícios de Usando armazenamento em nuvem em primeiro lugar: conveniência.)
Veja como funciona:
Uma "grade de armazenamento" é constituída por uma série de servidores de armazenamento. Um servidor de armazenamento Tem armazenamento direto em anexo (tipicamente um ou mais discos rígidos). Um "gateway" Se comunica com os nós de armazenamento e os usa para fornecer acesso ao Rede sobre protocolos como HTTP (S), SFTP ou FTP.
Observe que você pode encontrar "cliente" usado para se referir aos nós do gateway (que atuam como Um cliente para servidores de armazenamento) e também para processos ou programas que se conectam a Um nó de gateway e operações de execução na grade - por exemplo, uma CLI Comando, navegador da Web, cliente SFTP ou cliente FTP.
Os usuários não contam com servidores de armazenamento para fornecer * confidencialidade * nem
- Integridade * para seus dados - em vez disso, todos os dados são criptografados e Integridade verificada pelo gateway, para que os servidores não possam ler nem Modifique o conteúdo dos arquivos.
Os usuários dependem de servidores de armazenamento para * disponibilidade *. O texto cifrado é Codificado por apagamento em partes
N
distribuídas em pelo menosH
distintas Servidores de armazenamento (o valor padrão paraN
é 10 e paraH
é 7) então Que pode ser recuperado de qualquerK
desses servidores (o padrão O valor deK
é 3). Portanto, apenas a falha doH-K + 1
(com o Padrões, 5) servidores podem tornar os dados indisponíveis.No modo de implantação típico, cada usuário executa seu próprio gateway sozinho máquina. Desta forma, ela confia em sua própria máquina para a confidencialidade e Integridade dos dados.
Um modo de implantação alternativo é que o gateway é executado em uma máquina remota e O usuário se conecta ao HTTPS ou SFTP. Isso significa que o operador de O gateway pode visualizar e modificar os dados do usuário (o usuário * depende de * o Gateway para confidencialidade e integridade), mas a vantagem é que a O usuário pode acessar a grade Tahoe-LAFS com um cliente que não possui o Software de gateway instalado, como um quiosque de internet ou celular.
Controle de acesso
Existem dois tipos de arquivos: imutáveis e mutáveis. Quando você carrega um arquivo Para a grade de armazenamento, você pode escolher o tipo de arquivo que será no grade. Os arquivos imutáveis não podem ser modificados quando foram carregados. UMA O arquivo mutable pode ser modificado por alguém com acesso de leitura e gravação. Um usuário Pode ter acesso de leitura e gravação a um arquivo mutable ou acesso somente leitura, ou não Acesso a ele.
Um usuário que tenha acesso de leitura e gravação a um arquivo mutable ou diretório pode dar Outro acesso de leitura e gravação do usuário a esse arquivo ou diretório, ou eles podem dar Acesso somente leitura para esse arquivo ou diretório. Um usuário com acesso somente leitura Para um arquivo ou diretório pode dar acesso a outro usuário somente leitura.
Ao vincular um arquivo ou diretório a um diretório pai, você pode usar um Link de leitura-escrita ou um link somente de leitura. Se você usar um link de leitura e gravação, então Qualquer pessoa que tenha acesso de leitura e gravação ao diretório pai pode obter leitura-escrita Acesso à criança e qualquer pessoa que tenha acesso somente leitura ao pai O diretório pode obter acesso somente leitura à criança. Se você usar uma leitura somente Link, qualquer pessoa que tenha lido-escrito ou acesso somente leitura ao pai O diretório pode obter acesso somente leitura à criança.
================================================== ==== Usando Tahoe-LAFS com uma rede anônima: Tor, I2P ================================================== ====
. `Visão geral '
. `Casos de uso '
.
Software Dependencies
_#.
Tor
#.I2P
. `Configuração de conexão '
. `Configuração de Anonimato '
#.
Anonimato do cliente ' #.
Anonimato de servidor, configuração manual ' #. `Anonimato de servidor, configuração automática '. `Problemas de desempenho e segurança '
Visão geral
Tor é uma rede anonimização usada para ajudar a esconder a identidade da Internet Clientes e servidores. Consulte o site do Tor Project para obter mais informações: Https://www.torproject.org/
I2P é uma rede de anonimato descentralizada que se concentra no anonimato de ponta a ponta Entre clientes e servidores. Consulte o site I2P para obter mais informações: Https://geti2p.net/
Casos de uso
Existem três casos de uso potenciais para Tahoe-LAFS do lado do cliente:
-
O usuário deseja sempre usar uma rede de anonimato (Tor, I2P) para proteger Seu anonimato quando se conecta às redes de armazenamento Tahoe-LAFS (seja ou Não os servidores de armazenamento são anônimos).
-
O usuário não se preocupa em proteger seu anonimato, mas eles desejam se conectar a Servidores de armazenamento Tahoe-LAFS que são acessíveis apenas através de Tor Hidden Services ou I2P.
-
Tor é usado apenas se uma sugestão de conexão do servidor usar
tor:
. Essas sugestões Geralmente tem um endereço.onion
. -
I2P só é usado se uma sugestão de conexão do servidor usa
i2p:
. Essas sugestões Geralmente têm um endereço.i2p
. -
O usuário não se preocupa em proteger seu anonimato ou para se conectar a um anonimato Servidores de armazenamento. Este documento não é útil para você ... então pare de ler.
Para servidores de armazenamento Tahoe-LAFS existem três casos de uso:
-
O operador deseja proteger o anonimato fazendo seu Tahoe Servidor acessível apenas em I2P, através de Tor Hidden Services, ou ambos.
-
O operador não * requer * anonimato para o servidor de armazenamento, mas eles Quer que ele esteja disponível tanto no TCP / IP roteado publicamente quanto através de um Rede de anonimização (I2P, Tor Hidden Services). Uma possível razão para fazer Isso é porque ser alcançável através de uma rede de anonimato é um Maneira conveniente de ignorar NAT ou firewall que impede roteios públicos Conexões TCP / IP ao seu servidor (para clientes capazes de se conectar a Tais servidores). Outro é o que torna o seu servidor de armazenamento acessível Através de uma rede de anonimato pode oferecer uma melhor proteção para sua Clientes que usam essa rede de anonimato para proteger seus anonimato.
-
O operador do servidor de armazenamento não se preocupa em proteger seu próprio anonimato nem Para ajudar os clientes a proteger o deles. Pare de ler este documento e execute Seu servidor de armazenamento Tahoe-LAFS usando TCP / IP com roteamento público.
Veja esta página do Tor Project para obter mais informações sobre Tor Hidden Services: Https://www.torproject.org/docs/hidden-services.html.pt
Veja esta página do Projeto I2P para obter mais informações sobre o I2P: Https://geti2p.net/en/about/intro
Dependências de software
Tor
Os clientes que desejam se conectar a servidores baseados em Tor devem instalar o seguinte.
-
Tor (tor) deve ser instalado. Veja aqui: Https://www.torproject.org/docs/installguide.html.en. No Debian / Ubuntu, Use
apt-get install tor
. Você também pode instalar e executar o navegador Tor Agrupar. -
Tahoe-LAFS deve ser instalado com o
[tor]
"extra" habilitado. Isso vai Instaletxtorcon
::
Pip install tahoe-lafs [tor]
Os servidores Tor-configurados manualmente devem instalar Tor, mas não precisam
Txtorcon
ou o[tor]
extra. Configuração automática, quando Implementado, vai precisar destes, assim como os clientes.I2P
Os clientes que desejam se conectar a servidores baseados em I2P devem instalar o seguinte. Tal como acontece com Tor, os servidores baseados em I2P configurados manualmente precisam do daemon I2P, mas Não há bibliotecas especiais de apoio Tahoe-side.
-
I2P deve ser instalado. Veja aqui: Https://geti2p.net/en/download
-
A API SAM deve estar habilitada.
-
Inicie o I2P.
- Visite http://127.0.0.1:7657/configclients no seu navegador.
- Em "Configuração do Cliente", marque a opção "Executar no Startup?" Caixa para "SAM Ponte de aplicação ".
- Clique em "Salvar Configuração do Cliente".
-
Clique no controle "Iniciar" para "ponte de aplicação SAM" ou reinicie o I2P.
-
Tahoe-LAFS deve ser instalado com o
[i2p]
extra habilitado, para obterTxi2p
::
Pip install tahoe-lafs [i2p]
Tor e I2P
Os clientes que desejam se conectar a servidores baseados em Tor e I2P devem instalar tudo acima. Em particular, Tahoe-LAFS deve ser instalado com ambos Extras habilitados ::
Pip install tahoe-lafs [tor, i2p]
Configuração de conexão
Consulte: ref:
Connection Management
para uma descrição do[tor]
e
[I2p]
seções detahoe.cfg
. Estes controlam como o cliente Tahoe Conecte-se a um daemon Tor / I2P e, assim, faça conexões com Tor / I2P-baseadas Servidores.As seções
[tor]
e[i2p]
só precisam ser modificadas para serem usadas de forma incomum Configurações ou para habilitar a configuração automática do servidor.A configuração padrão tentará entrar em contato com um daemon local Tor / I2P Ouvindo as portas usuais (9050/9150 para Tor, 7656 para I2P). Enquanto Há um daemon em execução no host local e o suporte necessário Bibliotecas foram instaladas, os clientes poderão usar servidores baseados em Tor Sem qualquer configuração especial.
No entanto, note que esta configuração padrão não melhora a Anonimato: as conexões TCP normais ainda serão feitas em qualquer servidor que Oferece um endereço regular (cumpre o segundo caso de uso do cliente acima, não o terceiro). Para proteger o anonimato, os usuários devem configurar o
[Connections]
da seguinte maneira:[Conexões] Tcp = tor
Com isso, o cliente usará Tor (em vez de um IP-address -reviração de conexão direta) para alcançar servidores baseados em TCP.
Configuração de anonimato
Tahoe-LAFS fornece uma configuração "flag de segurança" para indicar explicitamente Seja necessário ou não a privacidade do endereço IP para um nó ::
[nó] Revelar-IP-address = (booleano, opcional)
Quando
revelar-IP-address = False
, Tahoe-LAFS se recusará a iniciar se algum dos As opções de configuração emtahoe.cfg
revelariam a rede do nó localização:-
[Conexões] tcp = tor
é necessário: caso contrário, o cliente faria Conexões diretas para o Introdução, ou qualquer servidor baseado em TCP que aprende Do Introdutor, revelando seu endereço IP para esses servidores e um Rede de espionagem. Com isso, Tahoe-LAFS só fará Conexões de saída através de uma rede de anonimato suportada. -
Tub.location
deve ser desativado ou conter valores seguros. este O valor é anunciado para outros nós através do Introdutor: é como um servidor Anuncia sua localização para que os clientes possam se conectar a ela. No modo privado, ele É um erro para incluir umtcp:
dica notub.location
. Modo privado Rejeita o valor padrão detub.location
(quando a chave está faltando Inteiramente), que éAUTO
, que usaifconfig
para adivinhar o nó Endereço IP externo, o que o revelaria ao servidor e a outros clientes.
Esta opção é ** crítica ** para preservar o anonimato do cliente (cliente Caso de uso 3 de "Casos de uso", acima). Também é necessário preservar uma Anonimato do servidor (caso de uso do servidor 3).
Esse sinalizador pode ser configurado (para falso), fornecendo o argumento
--hide-ip
para Os comandoscreate-node
,create-client
oucreate-introducer
.Observe que o valor padrão de
revelar-endereço IP
é verdadeiro, porque Infelizmente, esconder o endereço IP do nó requer software adicional para ser Instalado (conforme descrito acima) e reduz o desempenho.Anonimato do cliente
Para configurar um nó de cliente para anonimato,
tahoe.cfg
** deve ** conter o Seguindo as bandeiras de configuração ::[nó] Revelar-IP-address = False Tub.port = desativado Tub.location = desativado
Uma vez que o nodo Tahoe-LAFS foi reiniciado, ele pode ser usado anonimamente (cliente Caso de uso 3).
Anonimato do servidor, configuração manual
Para configurar um nó de servidor para ouvir em uma rede de anonimato, devemos primeiro Configure Tor para executar um "Serviço de cebola" e encaminhe as conexões de entrada para o Porto Tahoe local. Então, configuramos Tahoe para anunciar o endereço
.onion
Aos clientes. Também configuramos Tahoe para não fazer conexões TCP diretas.- Decida em um número de porta de escuta local, chamado PORT. Isso pode ser qualquer não utilizado Porta de cerca de 1024 até 65535 (dependendo do kernel / rede do host Config). Nós diremos a Tahoe para escutar nesta porta, e nós diremos a Tor para Encaminhe as conexões de entrada para ele.
- Decida em um número de porta externo, chamado VIRTPORT. Isso será usado no Localização anunciada e revelada aos clientes. Pode ser qualquer número de 1 Para 65535. Pode ser o mesmo que PORT, se quiser.
- Decida em um "diretório de serviço oculto", geralmente em
/ var / lib / tor / NAME
. Pediremos a Tor para salvar o estado do serviço de cebola aqui, e Tor irá Escreva o endereço.onion
aqui depois que ele for gerado.
Em seguida, faça o seguinte:
-
Crie o nó do servidor Tahoe (com
tahoe create-node
), mas não ** não ** Lança-o ainda. -
Edite o arquivo de configuração Tor (normalmente em
/ etc / tor / torrc
). Precisamos adicionar Uma seção para definir o serviço oculto. Se nossa PORT for 2000, VIRTPORT é 3000, e estamos usando/ var / lib / tor / tahoe
como o serviço oculto Diretório, a seção deve se parecer com ::HiddenServiceDir / var / lib / tor / tahoe HiddenServicePort 3000 127.0.0.1:2000
-
Reinicie Tor, com
systemctl restart tor
. Aguarde alguns segundos. -
Leia o arquivo
hostname
no diretório de serviço oculto (por exemplo,/ Var / lib / tor / tahoe / hostname
). Este será um endereço.onion
, comoU33m4y7klhz3b.onion
. Ligue para esta CEBOLA. -
Edite
tahoe.cfg
para configurartub.port
para usarTcp: PORT: interface = 127.0.0.1
etub.location
para usarTor: ONION.onion: VIRTPORT
. Usando os exemplos acima, isso seria ::[nó] Revelar-endereço IP = falso Tub.port = tcp: 2000: interface = 127.0.0.1 Tub.location = tor: u33m4y7klhz3b.onion: 3000 [Conexões] Tcp = tor
-
Inicie o servidor Tahoe com
tahoe start $ NODEDIR
A seção
tub.port
fará com que o servidor Tahoe ouça no PORT, mas Ligue o soquete de escuta à interface de loopback, que não é acessível Do mundo exterior (mas * é * acessível pelo daemon Tor local). Então o A seçãotcp = tor
faz com que Tahoe use Tor quando se conecta ao Introdução, escondendo o endereço IP. O nó se anunciará a todos Clientes que usam `tub.location``, então os clientes saberão que devem usar o Tor Para alcançar este servidor (e não revelar seu endereço IP através do anúncio). Quando os clientes se conectam ao endereço da cebola, seus pacotes serão Atravessar a rede de anonimato e eventualmente aterrar no Tor local Daemon, que então estabelecerá uma conexão com PORT no localhost, que é Onde Tahoe está ouvindo conexões.Siga um processo similar para construir um servidor Tahoe que escuta no I2P. o O mesmo processo pode ser usado para ouvir tanto o Tor como o I2P (
tub.location = Tor: ONION.onion: VIRTPORT, i2p: ADDR.i2p
). Também pode ouvir tanto Tor como TCP simples (caso de uso 2), comtub.port = tcp: PORT
,tub.location = Tcp: HOST: PORT, tor: ONION.onion: VIRTPORT
eanonymous = false
(e omite A configuraçãotcp = tor
, já que o endereço já está sendo transmitido através de O anúncio de localização).Anonimato do servidor, configuração automática
Para configurar um nó do servidor para ouvir em uma rede de anonimato, crie o Nó com a opção
--listen = tor
. Isso requer uma configuração Tor que Ou lança um novo daemon Tor, ou tem acesso à porta de controle Tor (e Autoridade suficiente para criar um novo serviço de cebola). Nos sistemas Debian / Ubuntu, façaApt install tor
, adicione-se ao grupo de controle comadduser YOURUSERNAME debian-tor
e, em seguida, inicie sessão e faça o login novamente: se osgroups
O comando incluidebian-tor
na saída, você deve ter permissão para Use a porta de controle de domínio unix em/ var / run / tor / control
.Esta opção irá definir
revelar-IP-address = False
e[connections] tcp = Tor
. Ele alocará as portas necessárias, instruirá Tor para criar a cebola Serviço (salvando a chave privada em algum lugar dentro de NODEDIR / private /), obtenha O endereço.onion
e preenchatub.port
etub.location
corretamente.Problemas de desempenho e segurança
Se você estiver executando um servidor que não precisa ser Anônimo, você deve torná-lo acessível através de uma rede de anonimato ou não? Ou você pode torná-lo acessível * ambos * através de uma rede de anonimato E como um servidor TCP / IP rastreável publicamente?
Existem várias compensações efetuadas por esta decisão.
Penetração NAT / Firewall
Fazer com que um servidor seja acessível via Tor ou I2P o torna acessível (por Clientes compatíveis com Tor / I2P) mesmo que existam NAT ou firewalls que impeçam Conexões TCP / IP diretas para o servidor.
Anonimato
Tornar um servidor Tahoe-LAFS acessível * somente * via Tor ou I2P pode ser usado para Garanta que os clientes Tahoe-LAFS usem Tor ou I2P para se conectar (Especificamente, o servidor só deve anunciar endereços Tor / I2P no Chave de configuração
tub.location
). Isso evita que os clientes mal configurados sejam Desingonizando-se acidentalmente, conectando-se ao seu servidor através de A Internet rastreável.Claramente, um servidor que está disponível como um serviço Tor / I2P * e * a O endereço TCP regular não é anônimo: o endereço do .on e o real O endereço IP do servidor é facilmente vinculável.
Além disso, a interação, através do Tor, com um Tor Oculto pode ser mais Protegido da análise do tráfego da rede do que a interação, através do Tor, Com um servidor TCP / IP com rastreamento público
** XXX há um documento mantido pelos desenvolvedores de Tor que comprovem ou refutam essa crença? Se assim for, precisamos ligar a ele. Caso contrário, talvez devêssemos explicar mais aqui por que pensamos isso? **
Linkability
A partir de 1.12.0, o nó usa uma única chave de banheira persistente para saída Conexões ao Introdutor e conexões de entrada para o Servidor de Armazenamento (E Helper). Para os clientes, uma nova chave Tub é criada para cada servidor de armazenamento Nós aprendemos sobre, e essas chaves são * não * persistiram (então elas mudarão cada uma delas Tempo que o cliente reinicia).
Clientes que atravessam diretórios (de rootcap para subdiretório para filecap) são É provável que solicitem os mesmos índices de armazenamento (SIs) na mesma ordem de cada vez. Um cliente conectado a vários servidores irá pedir-lhes todos para o mesmo SI em Quase ao mesmo tempo. E dois clientes que compartilham arquivos ou diretórios Irá visitar os mesmos SI (em várias ocasiões).
Como resultado, as seguintes coisas são vinculáveis, mesmo com
revelar-endereço IP = Falso
:- Servidores de armazenamento podem vincular reconhecer várias conexões do mesmo Cliente ainda não reiniciado. (Observe que o próximo recurso de Contabilidade pode Faz com que os clientes apresentem uma chave pública persistente do lado do cliente quando Conexão, que será uma ligação muito mais forte).
- Os servidores de armazenamento provavelmente podem deduzir qual cliente está acessando dados, por Olhando as SIs sendo solicitadas. Vários servidores podem conciliar Determine que o mesmo cliente está falando com todos eles, mesmo que o TubIDs são diferentes para cada conexão.
- Os servidores de armazenamento podem deduzir quando dois clientes diferentes estão compartilhando dados.
- O Introdutor pode entregar diferentes informações de servidor para cada um Cliente subscrito, para particionar clientes em conjuntos distintos de acordo com Quais as conexões do servidor que eles eventualmente fazem. Para clientes + nós de servidor, ele Também pode correlacionar o anúncio do servidor com o cliente deduzido identidade.
atuação
Um cliente que se conecta a um servidor Tahoe-LAFS com rastreamento público através de Tor Incorrem em latência substancialmente maior e, às vezes, pior Mesmo cliente se conectando ao mesmo servidor através de um TCP / IP rastreável normal conexão. Quando o servidor está em um Tor Hidden Service, ele incorre ainda mais Latência e, possivelmente, ainda pior rendimento.
Conectando-se a servidores Tahoe-LAFS que são servidores I2P incorrem em maior latência E pior rendimento também.
Efeitos positivos e negativos em outros usuários Tor
O envio de seu tráfego Tahoe-LAFS sobre o Tor adiciona tráfego de cobertura para outros Tor usuários que também estão transmitindo dados em massa. Então isso é bom para Eles - aumentando seu anonimato.
No entanto, torna o desempenho de outros usuários do Tor Sessões - por exemplo, sessões ssh - muito pior. Isso é porque Tor Atualmente não possui nenhuma prioridade ou qualidade de serviço Recursos, para que as teclas de Ssh de outra pessoa possam ter que esperar na fila Enquanto o conteúdo do arquivo em massa é transmitido. O atraso adicional pode Tornar as sessões interativas de outras pessoas inutilizáveis.
Ambos os efeitos são duplicados se você carregar ou baixar arquivos para um Tor Hidden Service, em comparação com se você carregar ou baixar arquivos Over Tor para um servidor TCP / IP com rastreamento público
Efeitos positivos e negativos em outros usuários do I2P
Enviar seu tráfego Tahoe-LAFS ao I2P adiciona tráfego de cobertura para outros usuários do I2P Que também estão transmitindo dados. Então, isso é bom para eles - aumentando sua anonimato. Não prejudicará diretamente o desempenho de outros usuários do I2P Sessões interativas, porque a rede I2P possui vários controles de congestionamento e Recursos de qualidade de serviço, como priorizar pacotes menores.
No entanto, se muitos usuários estão enviando tráfego Tahoe-LAFS ao I2P e não tiverem Seus roteadores I2P configurados para participar de muito tráfego, então o I2P A rede como um todo sofrerá degradação. Cada roteador Tahoe-LAFS que usa o I2P tem Seus próprios túneis de anonimato que seus dados são enviados. Em média, um O nó Tahoe-LAFS requer 12 outros roteadores I2P para participar de seus túneis.
Portanto, é importante que o seu roteador I2P esteja compartilhando a largura de banda com outros Roteadores, para que você possa retornar enquanto usa o I2P. Isso nunca prejudicará a Desempenho de seu nó Tahoe-LAFS, porque seu roteador I2P sempre Priorize seu próprio tráfego.
=========================
Como configurar um servidor
Muitos nós Tahoe-LAFS são executados como "servidores", o que significa que eles fornecem serviços para Outras máquinas (isto é, "clientes"). Os dois tipos mais importantes são os Introdução e Servidores de armazenamento.
Para ser útil, os servidores devem ser alcançados pelos clientes. Os servidores Tahoe podem ouvir Em portas TCP e anunciar sua "localização" (nome do host e número da porta TCP) Para que os clientes possam se conectar a eles. Eles também podem ouvir os serviços de cebola "Tor" E portas I2P.
Os servidores de armazenamento anunciam sua localização ao anunciá-lo ao Introdutivo, Que então transmite a localização para todos os clientes. Então, uma vez que a localização é Determinado, você não precisa fazer nada de especial para entregá-lo.
O próprio apresentador possui uma localização, que deve ser entregue manualmente a todos Servidores de armazenamento e clientes. Você pode enviá-lo para os novos membros do seu grade. Esta localização (juntamente com outros identificadores criptográficos importantes) é Escrito em um arquivo chamado
private / introducer.furl
no Presenter's Diretório básico, e deve ser fornecido como o argumento--introducer =
paraTahoe create-node
outahoe create-node
.O primeiro passo ao configurar um servidor é descobrir como os clientes irão alcançar. Então você precisa configurar o servidor para ouvir em algumas portas, e Depois configure a localização corretamente.
Configuração manual
Cada servidor tem duas configurações em seu arquivo
tahoe.cfg
:tub.port
, eTub.location
. A "porta" controla o que o nó do servidor escuta: isto Geralmente é uma porta TCP.A "localização" controla o que é anunciado para o mundo exterior. Isto é um "Sugestão de conexão foolscap", e inclui tanto o tipo de conexão (Tcp, tor ou i2p) e os detalhes da conexão (nome do host / endereço, porta número). Vários proxies, gateways e redes de privacidade podem ser Envolvido, então não é incomum para
tub.port
etub.location
para olhar diferente.Você pode controlar diretamente a configuração
tub.port
etub.location
Configurações, fornecendo--port =
e--location =
ao executartahoe Create-node
.Configuração automática
Em vez de fornecer
--port = / - location =
, você pode usar--listen =
. Os servidores podem ouvir em TCP, Tor, I2P, uma combinação desses ou nenhum. O argumento--listen =
controla quais tipos de ouvintes o novo servidor usará.--listen = none
significa que o servidor não deve ouvir nada. Isso não Faz sentido para um servidor, mas é apropriado para um nó somente cliente. o O comandotahoe create-client
inclui automaticamente--listen = none
.--listen = tcp
é o padrão e liga uma porta de escuta TCP padrão. Usar--listen = tcp
requer um argumento--hostname =
também, que será Incorporado no local anunciado do nó. Descobrimos que os computadores Não pode determinar de forma confiável seu nome de host acessível externamente, então, em vez de Ter o servidor adivinhar (ou escanear suas interfaces para endereços IP Isso pode ou não ser apropriado), a criação de nó requer que o usuário Forneça o nome do host.--listen = tor
conversará com um daemon Tor local e criará uma nova "cebola" Servidor "(que se parece comalzrgrdvxct6c63z.onion
).
--listen = i2p` conversará com um daemon I2P local e criará um novo servidor endereço. Consulte: doc:
anonymity-configuration` para obter detalhes.Você pode ouvir nos três usando
--listen = tcp, tor, i2p
.Cenários de implantação
A seguir, alguns cenários sugeridos para configurar servidores usando Vários transportes de rede. Estes exemplos não incluem a especificação de um Apresentador FURL que normalmente você gostaria quando provisionamento de armazenamento Nós. Para estes e outros detalhes de configuração, consulte : Doc:
configuration
.. `Servidor possui um nome DNS público '
.
Servidor possui um endereço público IPv4 / IPv6
_.
O servidor está por trás de um firewall com encaminhamento de porta
_.
Usando o I2P / Tor para evitar o encaminhamento da porta
_O servidor possui um nome DNS público
O caso mais simples é o local onde o host do servidor está diretamente conectado ao Internet, sem um firewall ou caixa NAT no caminho. A maioria dos VPS (Virtual Private Servidor) e servidores colocados são assim, embora alguns fornecedores bloqueiem Muitas portas de entrada por padrão.
Para esses servidores, tudo o que você precisa saber é o nome do host externo. O sistema O administrador irá dizer-lhe isso. O principal requisito é que este nome de host Pode ser pesquisado no DNS, e ele será mapeado para um endereço IPv4 ou IPv6 que Alcançará a máquina.
Se o seu nome de host for
example.net
, então você criará o introdutor como esta::Tahoe create-introducer --hostname example.com ~ / introducer
Ou um servidor de armazenamento como ::
Tahoe create-node --hostname = example.net
Estes irão alocar uma porta TCP (por exemplo, 12345), atribuir
tub.port
para serTcp: 12345
etub.location
serãotcp: example.com: 12345
.Idealmente, isso também deveria funcionar para hosts compatíveis com IPv6 (onde o nome DNS Fornece um registro "AAAA", ou ambos "A" e "AAAA"). No entanto Tahoe-LAFS O suporte para IPv6 é novo e ainda pode ter problemas. Por favor, veja o ingresso
# 867
_ para detalhes... _ # 867: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/867
O servidor possui um endereço público IPv4 / IPv6
Se o host tiver um endereço IPv4 (público) rotativo (por exemplo,
203.0.113.1```), mas Nenhum nome DNS, você precisará escolher uma porta TCP (por exemplo,
3457``) e usar o Segue::Tahoe create-node --port = tcp: 3457 - localização = tcp: 203.0.113.1: 3457
--port
é uma "string de especificação de ponto de extremidade" que controla quais locais Porta em que o nó escuta.--location
é a "sugestão de conexão" que ele Anuncia para outros, e descreve as conexões de saída que essas Os clientes irão fazer, por isso precisa trabalhar a partir da sua localização na rede.Os nós Tahoe-LAFS escutam em todas as interfaces por padrão. Quando o host é Multi-homed, você pode querer fazer a ligação de escuta ligar apenas a uma Interface específica, adicionando uma opção
interface =
ao--port =
argumento::Tahoe create-node --port = tcp: 3457: interface = 203.0.113.1 - localização = tcp: 203.0.113.1: 3457
Se o endereço público do host for IPv6 em vez de IPv4, use colchetes para Envolva o endereço e altere o tipo de nó de extremidade para
tcp6
::Tahoe create-node --port = tcp6: 3457 - localização = tcp: [2001: db8 :: 1]: 3457
Você pode usar
interface =
para vincular a uma interface IPv6 específica também, no entanto Você deve fazer uma barra invertida - escapar dos dois pontos, porque, de outra forma, eles são interpretados Como delimitadores pelo idioma de especificação do "ponto final" torcido. o--location =
argumento não precisa de dois pontos para serem escapados, porque eles são Envolto pelos colchetes ::Tahoe create-node --port = tcp6: 3457: interface = 2001 \: db8 \: \: 1 --location = tcp: [2001: db8 :: 1]: 3457
For IPv6-only hosts with AAAA DNS records, if the simple `--hostname=` configuration does not work, they can be told to listen specifically on an IPv6-enabled port with this:

```
tahoe create-node --port=tcp6:3457 --location=tcp:example.net:3457
```
Server is behind a firewall with port forwarding

To configure a storage node behind a firewall with port forwarding, you will need to know:

- the public IPv4 address of the router
- the TCP port that is available from outside your network
- the TCP port that is the forwarding destination
- the internal IPv4 address of the storage node (the storage node itself is unaware of this address; it is not used during `tahoe create-node`, but the firewall must be configured to send connections to it)
The internal and external TCP port numbers could be the same or different, depending on how the port forwarding is configured. If it maps ports 1-to-1, and the public IPv4 address of the firewall is 203.0.113.1 (and perhaps the internal IPv4 address of the storage node is 192.168.1.5), then use a CLI command like this:

```
tahoe create-node --port=tcp:3457 --location=tcp:203.0.113.1:3457
```

If however the firewall/NAT box forwards external port 6656 to internal port 3457, then do this:

```
tahoe create-node --port=tcp:3457 --location=tcp:203.0.113.1:6656
```
Using I2P/Tor to avoid port forwarding

I2P and Tor onion services, among other excellent properties, also provide NAT penetration without port forwarding, hostnames, or IP addresses. So setting up a server that listens only on Tor is simple:

```
tahoe create-node --listen=tor
```

For more information about using Tahoe-LAFS with I2P and Tor, see :doc:`anonymity-configuration`.
-
@ c9badfea:610f861a
2025-05-09 15:32:59- Install Feeder (it's free and open source)
- Discover RSS feeds from various sources (see links below)
- Copy the Feed URL
- Open Feeder, tap the ⁞ icon, and choose Add Feed
- Paste the Feed URL and tap Search
- Select the found RSS feed item
- Scroll down and tap OK
Some Sources
ℹ️ You can also use YouTube channel URLs as feeds
-
@ 52b4a076:e7fad8bd
2025-05-03 21:54:45Introduction
Me and Fishcake have been working on infrastructure for Noswhere and Nostr.build. Part of this involves processing a large amount of Nostr events for features such as search, analytics, and feeds.
I have been recently developing `nosdex` v3, a newer version of the Noswhere scraper that is designed for maximum performance and fault tolerance using FoundationDB (FDB).

Fishcake has been working on a processing system for Nostr events to use with NB, based off of Cloudflare (CF) Pipelines, which is a relatively new beta product. This evening, we put it all to the test.
First preparations
We set up a new CF Pipelines endpoint, and I implemented a basic importer that took data from the `nosdex` database. This was quite slow, as it did HTTP requests synchronously, but worked as a good smoke test.

Asynchronous indexing
I implemented a high-contention queue system designed for highly parallel indexing operations, built using FDB, that supports:
- Fully customizable batch sizes
- Per-index queues
- Hundreds of parallel consumers
- Automatic retry logic using lease expiration
When the scraper first gets an event, it will process it and eventually write it to the blob store and FDB. Each new event is appended to the event log.
On the indexing side, a `Queuer` will read the event log and batch events (usually 2K-5K events) into one work job. This work job contains:
- A range in the log to index
- Which target this job is intended for
- The size of the job and some other metadata

Each job has an associated leasing state (sketched below), which is used to handle retries and prioritization, and to ensure no duplication of work.
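To make the leasing idea concrete, here is a minimal sketch (not the actual nosdex code) of a worker claiming a job lease in one FDB transaction with the official Go bindings; the key layout, lease encoding, and struct fields are assumptions:

```go
// Package indexer sketches lease acquisition for a work job stored in FDB.
package indexer

import (
	"encoding/binary"
	"time"

	"github.com/apple/foundationdb/bindings/go/src/fdb"
)

// WorkJob mirrors the job metadata described above (illustrative fields).
type WorkJob struct {
	StartSeq, EndSeq uint64 // range in the event log to index
	Target           string // which index this job is intended for
	Size             int    // number of events in the batch
}

// tryLease claims the lease on jobKey if it is free or expired, holding it
// for ttl. FDB's serializable transactions make two workers racing on the
// same lease conflict, so the loser retries and then sees the new expiry.
func tryLease(db fdb.Database, jobKey fdb.Key, ttl time.Duration) (bool, error) {
	claimed, err := db.Transact(func(tr fdb.Transaction) (interface{}, error) {
		leaseKey := append(jobKey, []byte("/lease")...)
		raw := tr.Get(leaseKey).MustGet()
		now := time.Now().UnixNano()
		if len(raw) == 8 {
			expiry := int64(binary.BigEndian.Uint64(raw))
			if expiry > now {
				return false, nil // another worker holds a live lease
			}
		}
		// Lease is free or expired: claim it until now+ttl.
		var buf [8]byte
		binary.BigEndian.PutUint64(buf[:], uint64(now+ttl.Nanoseconds()))
		tr.Set(leaseKey, buf[:])
		return true, nil
	})
	if err != nil {
		return false, err
	}
	return claimed.(bool), nil
}
```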
Several `Worker`s (up to 128) monitor the index queue and wait for new jobs that are available to lease.

Once a suitable job is found, the worker acquires a lease on the job and reads the relevant events from FDB and the blob store.
Depending on the indexing type, the job will be processed in one of a number of ways, and then marked as completed or returned for retries.
In this case, the event is also forwarded to CF Pipelines.
Trying it out
The first attempt did not go well. I found a bug in the high-contention indexer that led to frequent transaction conflicts. This was easily solved by correcting an incorrectly set parameter.
We also found there were other issues in the indexer, such as an insufficient number of threads, and a suspicious decrease in the speed of the `Queuer` during processing of queued jobs.

Along with fixing these issues, I also implemented other optimizations, such as deprioritizing `Worker` DB accesses and increasing the batch size.

To fix the degraded `Queuer` performance, I ran the backfill job by itself, and then started indexing after it had completed.

Bottlenecks, bottlenecks everywhere
After implementing these fixes, there was an interesting problem: The DB couldn't go over 80K reads per second. I had encountered this limit during load testing for the scraper and other FDB benchmarks.
As I suspected, this was a client thread limitation, as one thread seemed to be using high amounts of CPU. To overcome this, I created a new client instance for each `Worker`.

After investigating, I discovered that the Go FoundationDB client cached the database connection. This meant all attempts to create separate DB connections ended up being useless.

Using `OpenWithConnectionString` partially resolved this issue. (This also had benefits for service-discovery based connection configuration.)

To be able to fully support multi-threading, I needed to enable the FDB multi-client feature. Enabling it also allowed easier upgrades across DB versions, as FDB clients are incompatible across versions:
FDB_NETWORK_OPTION_EXTERNAL_CLIENT_LIBRARY="/lib/libfdb_c.so"
FDB_NETWORK_OPTION_CLIENT_THREADS_PER_VERSION="16"
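For illustration, the same can be wired up programmatically instead of via environment variables. This is a hedged Go sketch; the API version, library path, thread count, and connection string are placeholder values:

```go
package main

import (
	"log"

	"github.com/apple/foundationdb/bindings/go/src/fdb"
)

func main() {
	fdb.MustAPIVersion(710)

	// Equivalent to FDB_NETWORK_OPTION_EXTERNAL_CLIENT_LIBRARY and
	// FDB_NETWORK_OPTION_CLIENT_THREADS_PER_VERSION above.
	if err := fdb.Options().SetExternalClientLibrary("/lib/libfdb_c.so"); err != nil {
		log.Fatal(err)
	}
	if err := fdb.Options().SetClientThreadsPerVersion(16); err != nil {
		log.Fatal(err)
	}

	// OpenWithConnectionString sidesteps the cached default database and
	// also suits service-discovery based configuration.
	db, err := fdb.OpenWithConnectionString("cluster:token@127.0.0.1:4500")
	if err != nil {
		log.Fatal(err)
	}
	_ = db // hand the database to a Worker here
}
```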
Breaking the 100K/s reads barrier
After implementing support for the multi-threaded client, we were able to get over 100K reads per second.
You may notice after the restart (gap) the performance dropped. This was caused by several bugs:
1. When creating the CF Pipelines endpoint, we did not specify a region. The automatically selected region was far away from the server.
2. The amount of shards was not sufficient, so we increased them.
3. The client overloaded a few HTTP/2 connections with too many requests.
I implemented a feature to assign each `Worker` its own HTTP client (a minimal sketch follows below), fixing the 3rd issue. We also moved the entire storage region to West Europe to be closer to the servers.

After these changes, we were able to easily push over 200K reads/s, mostly limited by missing optimizations.
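Here is what the per-worker HTTP client could look like in Go; the timeout and connection limits are illustrative, not the values we run in production:

```go
package worker

import (
	"net/http"
	"time"
)

// newWorkerHTTPClient returns a dedicated client so each Worker gets its own
// connection pool, instead of many workers piling onto one HTTP/2 connection.
func newWorkerHTTPClient() *http.Client {
	return &http.Client{
		Timeout: 30 * time.Second,
		Transport: &http.Transport{
			ForceAttemptHTTP2:   true,
			MaxConnsPerHost:     4, // limit concurrent connections to each host
			MaxIdleConnsPerHost: 4,
		},
	}
}
```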
It's shards all the way down
While testing, we also noticed another issue: at certain times, a pipeline would get overloaded, stalling requests for seconds at a time. This prevented all forward progress on the `Worker`s.

We solved this by having multiple pipelines: a primary pipeline meant for standard load, with a moderate batching duration and fewer shards, and high-throughput pipelines with more shards.
Each `Worker` is assigned a pipeline on startup, and if one pipeline stalls, other workers can continue making progress and saturate the DB.

The stress test
After making sure everything was ready for the import, we cleared all data, and started the import.
The entire import lasted 20 minutes, between 01:44 UTC and 02:04 UTC, reaching a peak of:
- 0.25M requests per second
- 0.6M keys read per second
- 140MB/s reads from DB
- 2Gbps of network throughput
FoundationDB ran smoothly during this test, with:
- Read times under 2ms
- Zero conflicting transactions
- No overloaded servers
CF Pipelines held up well, delivering batches to R2 without any issues, while reaching its maximum possible throughput.
Finishing notes
Me and Fishcake have been building infrastructure around scaling Nostr, from media, to relays, to content indexing. We consistently work on improving scalability, resiliency and stability, even outside these posts.
Many things, including what you see here, are already a part of Nostr.build, Noswhere and NFDB, and many other changes are being implemented every day.
If you like what you are seeing, and want to integrate it, get in touch. :)
If you want to support our work, you can zap this post, or register for nostr.land and nostr.build today.
-
@ 5d4b6c8d:8a1c1ee3
2025-05-03 14:18:36Comments: 3395 (Top Territory!!!)
Posts: 306 (3rd)
Stacking: 198k (2nd)
We're really bouncing back from the post-Super Bowl lull, with lots of contests and discussion threads. I think we've really found our niche with those two things.
The rest of Stacker News is experiencing declining activity, so our steady growth since February really tells me that we're on the right track.
Thanks for being part of our growing sports community!
originally posted at https://stacker.news/items/970289
-
@ fe820c8e:d2d76f04
2025-05-09 14:33:55Nostr Demo
This is your course initialization stub.
Please see the Docs to find out what is possible in LiaScript.
If you want to use instant help in your Atom IDE, please type lia to see all available shortcuts.
Markdown
You can use common Markdown syntax to create your course, such as:
- Lists
  - ordered or
  - unordered
  - ones ...
| Header 1 | Header 2 |
| :--------- | :--------- |
| Item 1 | Item 2 |
Images:
Extensions
--{{0}}--
But you can also include other features such as spoken text.
--{{1}}--
Insert any kind of audio file:
{{1}}
--{{2}}--
Even videos or change the language completely.
{{2-3}}
!?video
--{{3 Russian Female}}--
Первоначально создан в 2004 году Джоном Грубером (англ. John Gruber) и Аароном Шварцем. Многие идеи языка были позаимствованы из существующих соглашений по разметке текста в электронных письмах...
{{3}}
Type "voice" to see a list of all available languages.
Styling
The whole text-block should appear in purple color and with a wobbling effect. Which is a bad example, please use it with caution ... ~~ only this is red ;-) ~~
Charts
Use ASCII-Art to draw diagrams:
Multiline 1.9 | DOTS | *** y | * * - | r r r r r r r*r r r r*r r r r r r r a | * * x | * * i | B B B B B * B B B B B B * B B B B B s | * * | * * * * * * -1 +------------------------------------ 0 x-axis 1
Quizzes
A Textquiz
What did the fish say when he hit a concrete wall?
[[dam]]
Multiple Choice
Just add as many points as you wish:
[[X]] Only the **X** marks the correct point.
[[ ]] Empty ones are wrong.
[[X]] ...
Single Choice
Just add as many points as you wish:
[( )] ...
[(X)] <-- Only the **X** is allowed.
[( )] ...
Executable Code
A drawing example, for demonstrating that any JavaScript library can be used, also for drawing.
```javascript
// Initialize a Line chart in the container with the ID chart1
new Chartist.Line('#chart1', {
  labels: [1, 2, 3, 4],
  series: [[100, 120, 180, 200]]
});

// Initialize a Bar chart in the container with the ID chart2
new Chartist.Bar('#chart2', {
  labels: [1, 2, 3, 4],
  series: [[5, 2, 8, 3]]
});
```
Projects
You can make your code executable and define projects:
``` js -EvalScript.js
let who = data.first_name + " " + data.last_name;

if(data.online) {
  who + " is online";
} else {
  who + " is NOT online";
}
```
``` json +Data.json
{
  "first_name" : "Sammy",
  "last_name" : "Shark",
  "online" : true
}
```
More
Find out what you can even do more with quizzes:
https://liascript.github.io/course/?https://raw.githubusercontent.com/liaScript/docs/master/README.md
-
@ cefb08d1:f419beff
2025-05-03 11:01:47https://www.youtube.com/watch?v=BOqWgxCo7Kw
The Catch Up Day 1: Bonsoy Gold Coast Pro provides opening day dominance from upper echelon:
https://www.youtube.com/watch?v=B1uM0FnyPvA
Next Round, elimination:
Results of the 1st day, opening round: https://www.worldsurfleague.com/events/2025/ct/325/bonsoy-gold-coast-pro/results
originally posted at https://stacker.news/items/970160
-
@ 9223d2fa:b57e3de7
2025-04-15 02:54:0012,600 steps
-
@ cefb08d1:f419beff
2025-05-03 08:57:18There is a well-known legend about pelicans that has been told for centuries: it was believed that pelican parents would wound their own chests with their beaks to feed their young with their blood. In reality, pelicans actually catch fish in their large beaks and then press their beaks to their chicks’ mouths to feed them. The myth likely arose because young pelicans sometimes peck their mother's chest while competing for food, but the mother does not harm herself intentionally.
originally posted at https://stacker.news/items/970123
-
@ cefb08d1:f419beff
2025-05-03 08:43:37originally posted at https://stacker.news/items/970118
-
@ 8947a945:9bfcf626
Hello everyone on Nostr and all my watchers and followers from DeviantArt, as well as those from other art platforms
I have been creating and sharing AI-generated anime girl fanart since the beginning of 2024 and have been running member-exclusive content on Patreon.
I also publish showcases of my artworks to DeviantArt, and I build up my audience organically over time. I consider it one of my online art businesses. Everything is growing slowly.
On September 16, I received a DM from someone expressing interest in purchasing my art in NFT format and offering a very high price for each piece. We later continued the conversation via email.
Here’s a brief overview of what happened
- The first scammer selected the art they wanted to buy and offered a high price for each piece. They provided a URL to an NFT marketplace site running on the Ethereum (ETH) mainnet or ERC20. The site appeared suspicious, requiring email sign-up and linking a MetaMask wallet. However, I couldn't change the wallet address later. The minting gas fees were quite expensive, ranging from 0.15 to 0.2 ETH
- The scammers tried to convince me that the high profits would easily cover the minting gas fees, so I had nothing to lose. Luckily, I didn't have spare funds to purchase ETH for the gas fees at the time, so I tried negotiating with them as follows:
  - I offered to send them a lower-quality version of my art via email in exchange for the minting gas fees, but they refused.
  - I offered them the option to pay in USD through my Buy Me a Coffee shop here, but they refused.
  - I offered them the option to pay via Bitcoin using a Lightning Network invoice, but they refused.
  - I asked them to wait until I could secure the funds, and they agreed to wait.
The following week, a second scammer approached me with a similar offer, this time at an even higher price and through a different NFT marketplace website.
This second site also required email registration, and after navigating to the dashboard, it asked for a minting fee of 0.2 ETH. However, the site provided a wallet address for me instead of connecting a MetaMask wallet.
I told the second scammer that I was waiting to make a profit from the first sale, and they asked me to show them the first marketplace. They then warned me that the first site was a scam and even sent screenshots of victims, including one from OpenSea claiming that OpenSea does not pay.
This raised a red flag, and I began suspecting I might be getting scammed. On OpenSea, funds go directly to users' wallets after transactions, and OpenSea charges a much lower platform fee compared to the previous crypto bull run in 2020. Minting fees on OpenSea are also significantly cheaper, around 0.0001 ETH per transaction.
I also consulted with Thai NFT artist communities and the ex-chairman of the Thai Digital Asset Association. According to them, no one had reported similar issues, but they agreed it seemed like a scam.
After confirming my suspicions with my own research and consulting with the Thai crypto community, I decided to test the scammers’ intentions by doing the following
I minted the artwork they were interested in, set the price they offered, and listed it for sale on OpenSea. I then messaged them, letting them know the art was available and ready to purchase, with no royalty fees if they wanted to resell it.
They became upset and angry, insisting I mint the art on their chosen platform, claiming they had already funded their wallet to support me. When I asked for proof of their wallet address and transactions, they couldn't provide any evidence that they had enough funds.
Here’s what I want to warn all artists in the DeviantArt community or other platforms If you find yourself in a similar situation, be aware that scammers may be targeting you.
My Perspective why I Believe This is a Scam and What the Scammers Gain
From my experience with BTC and crypto since 2017, here's why I believe this situation is a scam, and what the scammers aim to achieve
First, looking at OpenSea, the largest NFT marketplace on the ERC20 network, they do not hold users' funds. Instead, funds from transactions go directly to users’ wallets. OpenSea’s platform fees are also much lower now compared to the crypto bull run in 2020. This alone raises suspicion about the legitimacy of other marketplaces requiring significantly higher fees.
I believe the scammers' tactic is to lure artists into paying these exorbitant minting fees, which go directly into the scammers' wallets. They convince the artists by promising to purchase the art at a higher price, making it seem like there's no risk involved. In reality, the artist has already lost by paying the minting fee, and no purchase is ever made.
In the world of Bitcoin (BTC), the principle is "Trust no one" and “Trustless finality of transactions” In other words, transactions are secure and final without needing trust in a third party.
In the world of Ethereum (ETH), the philosophy is "Code is law" where everything is governed by smart contracts deployed on the blockchain. These contracts are transparent, and even basic code can be read and understood. Promises made by people don’t override what the code says.
I also discuss this issue with art communities. Some people have strongly expressed to me that they want nothing to do with crypto as part of their art process. I completely respect that stance.
However, I believe it's wise to keep your eyes open, have some skin in the game, and not fall into scammers’ traps. Understanding the basics of crypto and NFTs can help protect you from these kinds of schemes.
If you found this article helpful, please share it with your fellow artists.
Until next time Take care
Note
- Both cybersecurity images are mine; I created them, and they were approved by AdobeStock to be put on sale
- I'm working very hard to bring all my digital art onto Nostr to build my sats business here on my other npub, "HikariHarmony": npub1exdtszhpw3ep643p9z8pahkw8zw00xa9pesf0u4txyyfqvthwapqwh48sw
Link to my full gallery - Anime girl fanarts : HikariHarmony - HikariHarmony on Nostr - General art : KeshikiRakuen
-
@ 0fa80bd3:ea7325de
2025-04-09 21:19:39DAOs promised decentralization. They offered a system where every member could influence a project's direction, where money and power were transparently distributed, and decisions were made through voting. All of it recorded immutably on the blockchain, free from middlemen.
But something didn’t work out. In practice, most DAOs haven’t evolved into living, self-organizing organisms. They became something else: clubs where participation is unevenly distributed. Leaders remained - only now without formal titles. They hold influence through control over communications, task framing, and community dynamics. Centralization still exists, just wrapped in a new package.
But there's a second, less obvious problem. Crowds can’t create strategy. In DAOs, people vote for what "feels right to the majority." But strategy isn’t about what feels good - it’s about what’s necessary. Difficult, unpopular, yet forward-looking decisions often fail when put to a vote. A founder’s vision is a risk. But in healthy teams, it’s that risk that drives progress. In DAOs, risk is almost always diluted until it becomes something safe and vague.
Instead of empowering leaders, DAOs often neutralize them. This is why many DAOs resemble consensus machines. Everyone talks, debates, and participates, but very little actually gets done. One person says, “Let’s jump,” and five others respond, “Let’s discuss that first.” This dynamic might work for open forums, but not for action.
Decentralization works when there’s trust and delegation, not just voting. Until DAOs develop effective systems for assigning roles, taking ownership, and acting with flexibility, they will keep losing ground to old-fashioned startups led by charismatic founders with a clear vision.
We’ve seen this in many real-world cases. Take MakerDAO, one of the most mature and technically sophisticated DAOs. Its governance token (MKR) holders vote on everything from interest rates to protocol upgrades. While this has allowed for transparency and community involvement, the process is often slow and bureaucratic. Complex proposals stall. Strategic pivots become hard to implement. And in 2023, a controversial proposal to allocate billions to real-world assets passed only narrowly, after months of infighting - highlighting how vision and execution can get stuck in the mud of distributed governance.
On the other hand, Uniswap DAO, responsible for the largest decentralized exchange, raised governance participation only after launching a delegation system where token holders could choose trusted representatives. Still, much of the activity is limited to a small group of active contributors. The vast majority of token holders remain passive. This raises the question: is it really community-led, or just a formalized power structure with lower transparency?
Then there’s ConstitutionDAO, an experiment that went viral. It raised over $40 million in days to try and buy a copy of the U.S. Constitution. But despite the hype, the DAO failed to win the auction. Afterwards, it struggled with refund logistics, communication breakdowns, and confusion over governance. It was a perfect example of collective enthusiasm without infrastructure or planning - proof that a DAO can raise capital fast but still lack cohesion.
Not all efforts have failed. Projects like Gitcoin DAO have made progress by incentivizing small, individual contributions. Their quadratic funding mechanism rewards projects based on the number of contributors, not just the size of donations, helping to elevate grassroots initiatives. But even here, long-term strategy often falls back on a core group of organizers rather than broad community consensus.
The pattern is clear: when the stakes are low or the tasks are modular, DAOs can coordinate well. But when bold moves are needed—when someone has to take responsibility and act under uncertainty DAOs often freeze. In the name of consensus, they lose momentum.
That’s why the organization of the future can’t rely purely on decentralization. It must encourage individual initiative and the ability to take calculated risks. People need to see their contribution not just as a vote, but as a role with clear actions and expected outcomes. When the situation demands, they should be empowered to act first and present the results to the community afterwards allowing for both autonomy and accountability. That’s not a flaw in the system. It’s how real progress happens.
-
@ c4b5369a:b812dbd6
2025-04-15 07:26:16Offline transactions with Cashu
Over the past few weeks, I've been busy implementing offline capabilities into nutstash. I think this is one of the key value propositions of ecash, being a bearer instrument that can be used without internet access.
It does however come with limitations, which can lead to a bit of confusion. I hope this article will clear some of these questions up for you!
What is ecash/Cashu?
Ecash is the first cryptocurrency ever invented. It was created by David Chaum in 1983. It uses a blind signature scheme, which allows users to prove ownership of a token without revealing a link to its origin. These tokens are what we call ecash. They are bearer instruments, meaning that anyone who possesses a copy of them, is considered the owner.
Cashu is an implementation of ecash, built to tightly interact with Bitcoin, more specifically the Bitcoin lightning network. In the Cashu ecosystem, `Mints` are the gateway to the lightning network. They provide the infrastructure to access the lightning network, pay invoices and receive payments. Instead of relying on a traditional ledger scheme like other custodians do, the mint issues ecash tokens to represent the value held by the users.

How do normal Cashu transactions work?
A Cashu transaction happens when the sender gives a copy of his ecash token to the receiver. This can happen by any means imaginable. You could send the token through email, messenger, or even by pidgeon. One of the common ways to transfer ecash is via QR code.
The transaction is however not finalized just yet! In order to make sure the sender cannot double-spend their copy of the token, the receiver must do what we call a `swap`. A swap is essentially exchanging an ecash token for a new one at the mint, invalidating the old token in the process. This ensures that the sender can no longer use the same token to spend elsewhere, and that the value has been transferred to the receiver.

What about offline transactions?
Sending offline
Sending offline is very simple. The ecash tokens are stored on your device. Thus, no internet connection is required to access them. You can litteraly just take them, and give them to someone. The most convenient way is usually through a local transmission protocol, like NFC, QR code, Bluetooth, etc.
The one thing to consider when sending offline is that ecash tokens come in the form of "coins" or "notes". The technical term we use in Cashu is `Proof`. It "proofs" to the mint that you own a certain amount of value. Since these proofs have a fixed value attached to them, much like UTXOs in Bitcoin do, you need proofs with values that match what you want to send. You can mix and match multiple proofs together to create a token that matches the amount you want to send (see the selection sketch below). But if you don't have proofs that match the amount, you would need to go online and swap for the needed proofs at the mint.

Another limitation is that you cannot create custom proofs offline. For example, if you wanted to lock the ecash to a certain pubkey, or add a timelock to the proof, you would need to go online and create a new custom proof at the mint.
Receiving offline
You might think: well, if I trust the sender, I don't need to be swapping the token right away!
You're absolutely correct. If you trust the sender, you can simply accept their ecash token without needing to swap it immediately.
This is already really useful, since it gives you a way to receive a payment from a friend or close aquaintance without having to worry about connectivity. It's almost just like physical cash!
It does however not work if the sender is untrusted. We have to use a different scheme to be able to receive payments from someone we don't trust.
Receiving offline from an untrusted sender
To be able to receive payments from an untrusted sender, we need the sender to create a custom proof for us. As we've seen before, this requires the sender to go online.
The sender needs to create a token that has the following properties, so that the receciver can verify it offline:
- It must be locked to ONLY the receiver's public key
- It must include an `offline signature proof` (DLEQ proof)
- If it contains a timelock & refund clause, it must be set to a time in the future that is acceptable for the receiver
- It cannot contain duplicate proofs (double-spend)
- It cannot contain proofs that the receiver has already received before (double-spend)
If all of these conditions are met, then the receiver can verify the proof offline and accept the payment. This allows us to receive payments from anyone, even if we don't trust them.
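As an illustration only, those checks could look roughly like this in Go; the Proof struct, the helper callbacks, and the seen-set are assumptions, not a real Cashu library:

```go
package offline

import (
	"errors"
	"time"
)

// Proof is a simplified stand-in for a Cashu proof.
type Proof struct {
	ID     string     // keyset id
	Amount uint64     // denomination
	Secret string     // for P2PK proofs, encodes the lock (and optional timelock)
	C      string     // mint signature
	DLEQ   *DLEQProof // offline signature proof, nil if absent
}

type DLEQProof struct{ E, S, R string }

// VerifyOffline runs the acceptance checklist above. lockedToUs reports
// whether a proof is locked to ONLY our pubkey; timelockOK whether any
// refund clause is acceptably far in the future; seen holds secrets of
// proofs we have already received before.
func VerifyOffline(proofs []Proof, seen map[string]bool,
	lockedToUs func(Proof) bool, timelockOK func(Proof, time.Time) bool) error {

	now := time.Now()
	inToken := make(map[string]bool)
	for _, p := range proofs {
		if !lockedToUs(p) {
			return errors.New("proof not locked to our pubkey")
		}
		if p.DLEQ == nil {
			return errors.New("missing DLEQ proof, cannot verify offline")
		}
		if !timelockOK(p, now) {
			return errors.New("refund timelock not acceptable")
		}
		if inToken[p.Secret] {
			return errors.New("duplicate proof inside token")
		}
		inToken[p.Secret] = true
		if seen[p.Secret] {
			return errors.New("proof was already received before")
		}
	}
	return nil
}
```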
At first glance, this scheme seems kinda useless. It requires the sender to go online, which defeats the purpose of having an offline payment system.
I believe there are a couple of ways this scheme might be useful nonetheless:
- Offline vending machines: Imagine you have an offline vending machine that accepts payments from anyone. The vending machine could use this scheme to verify payments without needing to go online itself. We can assume that the sender is able to go online and create a valid token, but the receiver doesn't need to be online to verify it.

- Offline marketplaces: Imagine you have an offline marketplace where buyers and sellers can trade goods and services. Before going to the marketplace the sender already knows where he will be spending the money. The sender could create a valid token before going to the marketplace, using the merchant's public key as a lock, and adding a refund clause to redeem any unspent ecash after it expires. In this case, neither the sender nor the receiver needs to go online to complete the transaction.
How to use this
Pretty much all cashu wallets allow you to send tokens offline. This is because all that the wallet needs to do is to look if it can create the desired amount from the proofs stored locally. If yes, it will automatically create the token offline.
Receiving offline tokens is currently only supported by nutstash (experimental).
To create an offline receivable token, the sender needs to lock it to the receiver's public key. Currently there is no refund clause! So be careful that you don't get accidentally locked out of your funds!
The receiver can then inspect the token and decide if it is safe to accept without a swap. If all checks are green, they can accept the token offline without trusting the sender.
The receiver will see the unswapped tokens on the wallet homescreen. They will need to manually swap them later when they are online again.
Later when the receiver is online again, they can swap the token for a fresh one.
Summary
We learned that offline transactions are possible with ecash, but there are some limitations. It either requires trusting the sender, or relying on either the sender or receiver to be online to verify the tokens, or create tokens that can be verified offline by the receiver.
I hope this short article was helpful in understanding how ecash works and its potential for offline transactions.
Cheers,
Gandlaf
-
@ 6389be64:ef439d32
2025-05-03 07:17:36In Jewish folklore, the golem—shaped from clay—is brought to life through sacred knowledge. Clay’s negative charge allows it to bind nutrients and water, echoing its mythic function as a vessel of potential.
Biochar in Amazonian terra preta shares this trait: it holds life-sustaining ions and harbors living intention. Both materials, inert alone, become generative through human action. The golem and black earths exist in parallel—one cultural, one ecological—shaping the lifeless into something that serves, protects, and endures.
originally posted at https://stacker.news/items/970089
-
@ e6817453:b0ac3c39
2024-10-06 11:21:27Hey folks, today we're diving into an exciting and emerging topic: personal artificial intelligence (PAI) and its connection to sovereignty, privacy, and ethics. With the rapid advancements in AI, there's a growing interest in the development of personal AI agents that can work on behalf of the user, acting autonomously and providing tailored services. However, as with any new technology, there are several critical factors that shape the future of PAI. Today, we'll explore three key pillars: privacy and ownership, explainability, and bias.
1. Privacy and Ownership: Foundations of Personal AI
At the heart of personal AI, much like self-sovereign identity (SSI), is the concept of ownership. For personal AI to be truly effective and valuable, users must own not only their data but also the computational power that drives these systems. This autonomy is essential for creating systems that respect the user's privacy and operate independently of large corporations.
In this context, privacy is more than just a feature—it's a fundamental right. Users should feel safe discussing sensitive topics with their AI, knowing that their data won’t be repurposed or misused by big tech companies. This level of control and data ownership ensures that users remain the sole beneficiaries of their information and computational resources, making privacy one of the core pillars of PAI.
2. Bias and Fairness: The Ethical Dilemma of LLMs
Most of today’s AI systems, including personal AI, rely heavily on large language models (LLMs). These models are trained on vast datasets that represent snapshots of the internet, but this introduces a critical ethical challenge: bias. The datasets used for training LLMs can be full of biases, misinformation, and viewpoints that may not align with a user’s personal values.
This leads to one of the major issues in AI ethics for personal AI—how do we ensure fairness and minimize bias in these systems? The training data that LLMs use can introduce perspectives that are not only unrepresentative but potentially harmful or unfair. As users of personal AI, we need systems that are free from such biases and can be tailored to our individual needs and ethical frameworks.
Unfortunately, training models that are truly unbiased and fair requires vast computational resources and significant investment. While large tech companies have the financial means to develop and train these models, individual users or smaller organizations typically do not. This limitation means that users often have to rely on pre-trained models, which may not fully align with their personal ethics or preferences. While fine-tuning models with personalized datasets can help, it's not a perfect solution, and bias remains a significant challenge.
3. Explainability: The Need for Transparency
One of the most frustrating aspects of modern AI is the lack of explainability. Many LLMs operate as "black boxes," meaning that while they provide answers or make decisions, it's often unclear how they arrived at those conclusions. For personal AI to be effective and trustworthy, it must be transparent. Users need to understand how the AI processes information, what data it relies on, and the reasoning behind its conclusions.
Explainability becomes even more critical when AI is used for complex decision-making, especially in areas that impact other people. If an AI is making recommendations, judgments, or decisions, it’s crucial for users to be able to trace the reasoning process behind those actions. Without this transparency, users may end up relying on AI systems that provide flawed or biased outcomes, potentially causing harm.
This lack of transparency is a major hurdle for personal AI development. Current LLMs, as mentioned earlier, are often opaque, making it difficult for users to trust their outputs fully. The explainability of AI systems will need to be improved significantly to ensure that personal AI can be trusted for important tasks.
Addressing the Ethical Landscape of Personal AI
As personal AI systems evolve, they will increasingly shape the ethical landscape of AI. We’ve already touched on the three core pillars—privacy and ownership, bias and fairness, and explainability. But there's more to consider, especially when looking at the broader implications of personal AI development.
Most current AI models, particularly those from big tech companies like Facebook, Google, or OpenAI, are closed systems. This means they are aligned with the goals and ethical frameworks of those companies, which may not always serve the best interests of individual users. Open models, such as Meta's LLaMA, offer more flexibility and control, allowing users to customize and refine the AI to better meet their personal needs. However, the challenge remains in training these models without significant financial and technical resources.
There’s also the temptation to use uncensored models that aren’t aligned with the values of large corporations, as they provide more freedom and flexibility. But in reality, models that are entirely unfiltered may introduce harmful or unethical content. It’s often better to work with aligned models that have had some of the more problematic biases removed, even if this limits some aspects of the system’s freedom.
The future of personal AI will undoubtedly involve a deeper exploration of these ethical questions. As AI becomes more integrated into our daily lives, the need for privacy, fairness, and transparency will only grow. And while we may not yet be able to train personal AI models from scratch, we can continue to shape and refine these systems through curated datasets and ongoing development.
Conclusion
In conclusion, personal AI represents an exciting new frontier, but one that must be navigated with care. Privacy, ownership, bias, and explainability are all essential pillars that will define the future of these systems. As we continue to develop personal AI, we must remain vigilant about the ethical challenges they pose, ensuring that they serve the best interests of users while remaining transparent, fair, and aligned with individual values.
If you have any thoughts or questions on this topic, feel free to reach out—I’d love to continue the conversation!
-
@ a5c85eea:82503ea8
2025-05-09 14:28:33Nostr - Test
This is your course initialization stub.
Please see the Docs to find out what is possible in LiaScript.
If you want to use instant help in your Atom IDE, please type lia to see all available shortcuts.
Markdown
You can use common Markdown syntax to create your course, such as:
- Lists
  - ordered or
  - unordered
  - ones ...
| Header 1 | Header 2 |
| :--------- | :--------- |
| Item 1 | Item 2 |
Images:
Extensions
--{{0}}--
But you can also include other features such as spoken text.
--{{1}}--
Insert any kind of audio file:
{{1}}
--{{2}}--
Even videos or change the language completely.
{{2-3}}
!?video
--{{3 Russian Female}}--
Первоначально создан в 2004 году Джоном Грубером (англ. John Gruber) и Аароном Шварцем. Многие идеи языка были позаимствованы из существующих соглашений по разметке текста в электронных письмах...
{{3}}
Type "voice" to see a list of all available languages.
Styling
The whole text-block should appear in purple color and with a wobbling effect. Which is a bad example, please use it with caution ... ~~ only this is red ;-) ~~
Charts
Use ASCII-Art to draw diagrams:
Multiline 1.9 | DOTS | *** y | * * - | r r r r r r r*r r r r*r r r r r r r a | * * x | * * i | B B B B B * B B B B B B * B B B B B s | * * | * * * * * * -1 +------------------------------------ 0 x-axis 1
Quizzes
A Textquiz
What did the fish say when he hit a concrete wall?
[[dam]]
Multiple Choice
Just add as many points as you wish:
[[X]] Only the **X** marks the correct point.
[[ ]] Empty ones are wrong.
[[X]] ...
Single Choice
Just add as many points as you wish:
[( )] ...
[(X)] <-- Only the **X** is allowed.
[( )] ...
Executable Code
A drawing example, for demonstrating that any JavaScript library can be used, also for drawing.
```javascript
// Initialize a Line chart in the container with the ID chart1
new Chartist.Line('#chart1', {
  labels: [1, 2, 3, 4],
  series: [[100, 120, 180, 200]]
});

// Initialize a Bar chart in the container with the ID chart2
new Chartist.Bar('#chart2', {
  labels: [1, 2, 3, 4],
  series: [[5, 2, 8, 3]]
});
```
Projects
You can make your code executable and define projects:
``` js -EvalScript.js
let who = data.first_name + " " + data.last_name;

if(data.online) {
  who + " is online";
} else {
  who + " is NOT online";
}
```
``` json +Data.json
{
  "first_name" : "Sammy",
  "last_name" : "Shark",
  "online" : true
}
```
More
Find out what you can even do more with quizzes:
https://liascript.github.io/course/?https://raw.githubusercontent.com/liaScript/docs/master/README.md
-
@ ba3b4b1d:eadff0d3
2025-05-09 23:40:27The cult of Sol Invictus (the "Unconquered Sun") was officially instituted as a state religion of the Roman Empire by the emperor Aurelian in 274 AD, marking a significant moment in Roman religious history. This cult, which celebrated the Sun as the supreme and invincible deity, established annual celebrations on December 25, a date strategically chosen for its proximity to the winter solstice in the Julian calendar. The cult of Sol Invictus did not arise in isolation, however; it was in fact a reformulation and revitalization of much older Roman religious practices venerating Sol, the deified personification of the Sun, a figure present in Roman religion since the days of the Republic.
Before Aurelian made it official, the cult of the Sun was already practiced in Rome in various forms. In Roman mythology, Sol was frequently associated with solar gods from other cultures, such as the Greek Helios and, later, the Persian Mithras, whose cult (Mithraism) became highly popular among Roman soldiers in the 2nd and 3rd centuries AD. Mithraism, with its initiation rituals and its emphasis on the Sun as a symbol of renewal and victory, deeply influenced the iconography and concepts of the cult of Sol Invictus. Aurelian's decision to promote Sol Invictus as a unifying deity of the Empire was not only religious but also political: in an era of internal and external crises, the cult of the Sun served as a symbol of stability, power, and unity, especially after Aurelian's victory over Queen Zenobia of the Palmyrene Kingdom, whose defeat was attributed to the divine protection of the Sun.
One of the most significant festivals associated with the cult was the Dies Natalis Solis Invicti ("Birthday of the Unconquered Sun"), celebrated at the winter solstice, around December 21, when the days began to lengthen again, symbolizing the Sun's victory over darkness. This festival, deeply rooted in the natural and astronomical cycle, celebrated the rebirth of the Sun and its invincible strength. The temporal proximity between the Dies Natalis and December 25, the date officially adopted for Sol Invictus, facilitated the overlap of religious meanings and practices. The rise of Christianity in the Roman Empire, particularly after the conversion of the emperor Constantine in the early 4th century, led to a complex interaction between pagan cults and the new religion. It is widely recognized by historians that Christianity, as it established itself as the dominant religion, absorbed and adapted elements of pagan festivals to ease the conversion of populations and consolidate its influence. The choice of December 25 as the date of the birth of Jesus Christ, which has no basis in the canonical biblical texts, is a clear example of this syncretism. The date conveniently coincided with the celebrations of Sol Invictus and other pagan festivals, such as Saturnalia, a period of festivities in honor of the god Saturn held in mid-December. The symbolism of the Sun, associated with light, renewal, and victory over darkness, was easily adaptable to the Christian narrative. Jesus, frequently described as the "Light of the World" in the gospels (John 8:12), was associated with solar symbolism, and his resurrection came to be celebrated as a victory over death, much like the triumphant return of the Sun at the solstice. Christian iconography also reflected this influence: images of Christ with a halo, reminiscent of the sun's rays, echoed representations of Sol Invictus and Helios. Moreover, Mithraism, with its rituals of renewal and its emphasis on a divine savior, presented parallels with Christianity, which may have eased the transition of the faithful from one cult to the other.
It is worth noting that the adoption of December 25 as the Christian Christmas was not immediate. Historical records suggest the date was only formally established in the 4th century, during the reign of Constantine and under the influence of the Church of Rome. Pope Julius I, around 350 AD, is often credited with making the celebration of Christ's birth official on this date. This choice was strategic, as it allowed Christianity to supersede the pagan festivals, reinterpreting their traditions in a Christian context. In parallel, in Siberia, shamanic cults involving the Amanita muscaria mushroom are also connected to the winter solstice. Peoples such as the Evenki and the Chukchi used this hallucinogenic mushroom in rituals to endure the harsh winter, inducing visions of flying reindeer and sleighs, which some scholars link to the origin of the legends of Santa Claus. Amanita muscaria, with its red-and-white cap, grew under pine trees, which became symbols of the Christmas tree. Siberian shamans distributed these dried mushrooms as "gifts" during the solstice, often entering homes through the chimney because of the snow, a custom echoed in the figure of Santa Claus. Thus, both the cult of Sol Invictus and the Siberian Amanita muscaria rituals converge in the symbolism of the winter solstice, influencing, in distinct ways, Christian Christmas traditions.
The choice of December 25 made it easier to integrate pagan traditions, such as Saturnalia and the solar cults, into a Christian narrative, with Jesus as the "Light of the World".
Thus, the cult of Sol Invictus and its roots in the Roman worship of Sol played a crucial role in shaping the Christian celebrations of Christmas. The blending of pagan and Christian elements reflects Christianity's capacity to adapt to local cultures, transforming deeply rooted symbols and rituals into new expressions of faith. This process of syncretism not only ensured the survival of Christianity in a religiously diverse world, but also shaped the way Christmas is celebrated to this day, with echoes of the ancient solar traditions still present in its symbolism of light and renewal.
-
@ 5d4b6c8d:8a1c1ee3
2025-05-03 02:29:16My month long endeavor to be less of a lazibones has concluded.
For the whole month, I fairly consistently did whatever little chores needed to be done, as soon as I noticed they needed to be done. That was mostly laundry, making the bed, and (un)loading the dishwasher, plus lots of random cleaning up after the dog and kid.
Even with focusing less on getting steps, my steps were up about 15% from the previous month (which had nicer weather, btw). These were less empty steps, too. I was actually being productive.
I'm not sure any of the JBP-esque room cleaning type benefits materialized, but it was good for me, so I'll try to carry some new habits forward.
originally posted at https://stacker.news/items/969995