-
@ 3bf0c63f:aefa459d
2024-06-13 15:40:18
Why relay hints are important
Recently Coracle has removed support for following relay hints in Nostr event references. Supposedly Coracle is now relying only on public key hints and `kind:10002` events to determine where to fetch events from a user. That is a catastrophic idea that destroys much of Nostr's flexibility for no gain at all.

- Someone makes a post inside a community (either a NIP-29 community or a NIP-87 community) and others want to refer to that post in discussions in the external Nostr world of `kind:1`s -- now that cannot work, because the person who created the post doesn't have the relays specific to those communities in their outbox list;
- There is a discussion happening in a niche relay, for example, a relay that can only be accessed by the participants of a conference for the duration of that conference -- since that relay is not in anyone's public outbox list, it's impossible for anyone outside of the conference to ever refer to these events;
- Some big public relays, say, relay.damus.io, decide to nuke their databases or periodically delete old events; a user keeps using that big relay as their outbox because it is fast and reliable, but chooses to archive their old events in a dedicated archival relay, say, cellar.nostr.wine, while prudently not including that in their outbox list because that would make no sense -- now it is impossible for anyone to refer to old notes from this user even though they are publicly accessible in cellar.nostr.wine;
- There are topical relays that curate content relating to niche (non-microblogging) topics, say, cooking recipes, and users choose to publish their recipes to these relays only -- but now they can't refer to these posts in the external Nostr world of `kind:1`s because these topical relays are not in their outbox lists;
- Suppose a user wants to maintain two different identities under the same keypair, say, one identity only talks about soccer in English, while the other only talks about art history in French, and the user very prudently keeps two different `kind:10002` events in two different sets of "indexer" relays (or does it in some better way of announcing different relay sets) -- now one of this user's audiences cannot ever see notes created by him with his other persona; one half of this user's content will be inaccessible to the other half, and vice versa;
- If for any reason a relay does not want to accept events of a certain kind, a user may publish them to other relays, and it would all work fine if the user referenced that externally-published event from a normal event -- but now that externally-published event is not reachable, because the external relay is not in the user's outbox list;
- If someone, say, Alex Jones, is hard-banned everywhere and cannot even broadcast `kind:10002` events to any of the commonly used indexer relays, that person will now appear as banned in most clients. In an ideal world in which clients followed `nprofile` and other relay hints, Alex Jones could still live a normal Nostr life: he would print business cards with his `nprofile` instead of an `npub`, and clients would immediately know from what relay to fetch his posts. When other users shared his posts or replied to them, they would include a relay hint to his personal relay, and others would be able to see and then start following him on that relay directly -- but now Alex Jones's events cannot be read by anyone that doesn't already know his relay.
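To make the mechanics concrete, here is a minimal sketch (the event id, pubkey, and relay URLs are placeholders) of where a relay hint lives in a NIP-10 event reference:

```python
# A hint-following client can fetch the referenced event from the community
# relay even though that relay is in nobody's kind:10002 outbox list.
reply = {
    "kind": 1,
    "content": "look at this post from inside the community",
    "tags": [
        # ["e", <event-id>, <relay-hint>] per NIP-10
        ["e", "<referenced-event-id>", "wss://relay.example-community.com", "mention"],
        # "p" tags carry the same kind of hint for people, as does an
        # nprofile (NIP-19), which bundles a pubkey with relay hints
        ["p", "<author-pubkey>", "wss://relay.example-community.com"],
    ],
}
print(reply["tags"][0][2])  # the relay a hint-following client would try first
```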
-
@ 6871d8df:4a9396c1
2024-06-12 22:10:51
Embracing AI: A Case for AI Accelerationism
In an era where artificial intelligence (AI) development is at the forefront of technological innovation, a counter-narrative championed by a group I refer to as the 'AI Decels' -- those advocating for the deceleration of AI advancements -- seems to be gaining significant traction. After tuning into a recent episode of the Joe Rogan Podcast, I realized that the prevailing narrative around AI was heading in a dangerous direction. Rogan had on Aza Raskin and Tristan Harris, technology-safety advocates who released a talk called 'The AI Dilemma,' for a discussion. You may know them from the popular documentary 'The Social Dilemma' on the dangers of social media. It became increasingly clear that the cautionary stance dominating this discourse might be tipping the scales too far, veering towards an over-regulated future that stifles innovation rather than fostering it.
Are we moving too fast?
While acknowledging AI's benefits, Aza and Tristan fear it could be dangerous if not guided by ethical standards and safeguards. They believe AI development is moving too quickly and that the right incentives for its growth are not in place. They are concerned about the possibility of "civilizational overwhelm," where advanced AI technology far outpaces 21st-century governance. They fear a scenario where society and its institutions cannot manage or adapt to the rapid changes and challenges introduced by AI.
They argue for regulating and slowing down AI development due to rapid, uncontrolled advancement driven by competition among companies like Google, OpenAI, and Microsoft. They claim this race can lead to unsafe releases of new technologies, with AI systems exhibiting unpredictable, emergent behaviors, posing significant societal risks. For instance, AI can inadvertently learn tasks like sentiment analysis or human emotion understanding, creating potential for misuse in areas like biological weapons or cybersecurity vulnerabilities.
Moreover, AI companies' profit-driven incentives often conflict with the public good, prioritizing market dominance over safety and ethics. This misalignment can lead to technologies that maximize engagement or profits at societal expense, similar to the negative impacts seen with social media. To address these issues, they suggest government regulation to realign AI companies' incentives with safety, ethical considerations, and public welfare. Implementing responsible development frameworks focused on long-term societal impacts is essential for mitigating potential harm.
This isn't new
Though the premise of their concerns seems reasonable, this reaction is dangerous and an all-too-common occurrence with the emergence of new technologies. For example, in the podcast they refer to the technological breakthrough of oil. Oil as energy was a technological marvel that changed the course of human civilization. The embrace of oil -- now the cornerstone of industry in our age -- revolutionized how societies operated, fueled economies, and connected the world in unprecedented ways. Yet recently, as ideas of its environmental and geopolitical ramifications propagated, the narrative around oil has shifted.
Tristan and Aza detail this shift and claim that though the period was great for humanity, we didn't have another technology to go to once the technological consequences became apparent. The problem with that argument is that we did innovate to a better alternative: nuclear. However, at its technological breakthrough, it was met with severe suspicions, from safety concerns to ethical debates over its use. This overregulation due to these concerns caused a decades-long stagnation in nuclear innovation, where even today, we are still stuck with heavy reliance on coal and oil. The scare tactics and fear-mongering had consequences, and, interestingly, they don't see the parallels with their current deceleration stance on AI.
These examples underscore a critical insight: the initial anxiety surrounding new technologies is a natural response to the unknowns they introduce. Yet, history shows that too much anxiety can stifle the innovation needed to address the problems posed by current technologies. The cycle of discovery, fear, adaptation, and eventual acceptance reveals an essential truth—progress requires not just the courage to innovate but also the resilience to navigate the uncertainties these innovations bring.
Moreover, believing we can predict and plan for all AI-related unknowns reflects overconfidence in our understanding and foresight. History shows that technological progress, marked by unexpected outcomes and discoveries, defies such predictions. The evolution from the printing press to the internet underscores progress's unpredictability. Hence, facing AI's future requires caution, curiosity, and humility. Acknowledging our limitations and embracing continuous learning and adaptation will allow us to harness AI's potential responsibly, illustrating that embracing our uncertainties, rather than pretending to foresee them, is vital to innovation.
The journey of technological advancement is fraught with both promise and trepidation. Historically, each significant leap forward, from the dawn of the industrial age to the digital revolution, has been met with a mix of enthusiasm and apprehension. Aza Raskin and Tristan Harris's thesis in the 'AI Dilemma' embodies the latter.
Who defines "safe?"
When slowing down technologies for safety or ethical reasons, the issue arises of who gets to define what "safe" or "ethical" means. This inquiry is not merely technical but deeply ideological, touching the very core of societal values and power dynamics. For example, the push for Diversity, Equity, and Inclusion (DEI) initiatives shows how specific ideological underpinnings can shape definitions of safety and decency.
Take the case of the initial release of Google's AI chatbot, Gemini, which chose the ideology of its creators over truth. Luckily, the answers were so ridiculous that the pushback was sudden and immediate. My worry, however, is if, in correcting this, they become experts in making the ideological capture much more subtle. Large bureaucratic institutions' top-down safety enforcement creates a fertile ground for ideological capture of safety standards.
I claim that the issue is not the technology itself but the lens through which we view and regulate it. Suppose the gatekeepers of 'safety' are aligned with a singular ideology. In that case, AI development would skew to serve specific ends, sidelining diverse perspectives and potentially stifling innovative thought and progress.
In the podcast, Tristan and Aza suggest such manipulation as a solution. They propose using AI for consensus-building and creating "shared realities" to address societal challenges. In practice, this means that when individuals' viewpoints seem to be far apart, we can leverage AI to "bridge the gap." How they bridge the gap and what we would bridge it toward is left to the imagination, but to me, it is clear. Regulators will inevitably influence it from the top down, which, in my opinion, would be the opposite of progress.
In navigating this terrain, we must advocate for a pluralistic approach to defining safety, encompassing various perspectives and values achieved through market forces rather than a governing entity choosing winners. The more players that can play the game, the more wide-ranging perspectives will catalyze innovation to flourish.
Ownership & Identity
Just because we should accelerate AI forward does not mean I do not have my concerns. When I think about what could be the most devastating for society, I don't believe we have to worry about a Matrix-level dystopia; I worry about freedom. As I explored in "Whose data is it anyway?," my concern gravitates toward the issues of data ownership and the implications of relinquishing control over our digital identities. This relinquishment threatens our privacy and the integrity of the content we generate, leaving it susceptible to the inclinations and profit of a few dominant tech entities.
To counteract these concerns, a paradigm shift towards decentralized models of data ownership is imperative. Such standards would empower individuals with control over their digital footprints, ensuring that we develop AI systems with diverse, honest, and truthful perspectives rather than the massaged, narrow viewpoints of their creators. This shift safeguards individual privacy and promotes an ethical framework for AI development that upholds the principles of fairness and impartiality.
As we stand at the crossroads of technological innovation and ethical consideration, it is crucial to advocate for systems that place data ownership firmly in the hands of users. By doing so, we can ensure that the future of AI remains truthful, non-ideological, and aligned with the broader interests of society.
But what about the Matrix?
I know I am in the minority on this, but I feel that the concerns of AGI (Artificial General Intelligence) are generally overblown. I am not scared of reaching the point of AGI, and I think the idea that AI will become so intelligent that we will lose control of it is unfounded and silly. Reaching AGI is not reaching consciousness; being worried about it spontaneously gaining consciousness is a misplaced fear. It is a tool created by humans for humans to enhance productivity and achieve specific outcomes.
At a technical level, large language models (LLMs) are trained on extensive datasets, learning patterns from language and data through a technique called "unsupervised learning" (meaning the data is untagged). They predict the next word in sentences, refining their predictions through feedback to improve coherence and relevance. When queried, LLMs generate responses based on learned patterns, simulating an understanding of language to provide contextually appropriate answers. They can only answer based on the datasets that were fed into them.
AI will never be "alive." It lacks inherent agency, consciousness, and the characteristics of life, and it is not capable of independent thought or action. AI cannot act independently of human control. Concerns about AI gaining autonomy and posing a threat to humanity are based on a misunderstanding of the nature of AI and the fundamental differences between living beings and machines. Expecting AI to spontaneously develop a will or consciousness is more like expecting a hammer to start walking than a plausible path to creating consciousness through programming. Right now, there is only one way to create consciousness, and I'm skeptical that it is ever something we will be able to harness and create as humans. Irrespective of its complexity -- and yes, our tools will continue to become ever more complex -- machines, specifically AI, cannot transcend their nature as non-living, inanimate objects programmed and controlled by humans.
The advancement of AI should be seen as enhancing human capabilities, not as a path toward creating autonomous entities with their own wills. So, while AI will continue to evolve, improve, and become more powerful, I believe it will remain under human direction and control without the existential threats often sensationalized in discussions about AI's future.
With this framing, we should not view the race toward AGI as something to avoid. This will only make the tools we use more powerful, making us more productive. With all this being said, AGI is still much farther away than many believe.
Today's AI excels in specific, narrow tasks, known as narrow or weak AI. These systems operate within tightly defined parameters, achieving remarkable efficiency and accuracy that can sometimes surpass human performance in those specific tasks. Yet, this is far from the versatile and adaptable functionality that AGI represents.
Moreover, the exponential growth of computational power observed in the past decades does not directly translate to an equivalent acceleration in achieving AGI. AI's impressive feats are often the result of massive data inputs and computing resources tailored to specific tasks. These successes do not inherently bring us closer to understanding or replicating the general problem-solving capabilities of the human mind, which again would only make the tools more potent in our hands.
While AI will undeniably introduce challenges and reshape conflict and power dynamics, these challenges will primarily stem from humans wielding this powerful tool rather than from the technology itself. AI is a mirror reflecting our own biases, values, and intentions. The crux of future AI-related issues lies not in the technology's inherent capabilities but in how it is used by those wielding it. This reality is at odds with the idea that we should slow down development, as our biggest threat will come from those who are not friendly to us.
AI Begets AI
While the unknowns of AI development and its pitfalls indeed stir apprehension, it's essential to recognize the power of market forces and human ingenuity in leveraging AI to address these challenges. History is replete with examples of new technologies raising concerns, only for those very technologies to provide solutions to the problems they initially seemed to exacerbate. It would look silly and unfair to fight a war against a country that never embraced oil and was still getting its energy primarily from burning wood.
The evolution of AI is no exception to this pattern. As we venture into uncharted territories, the potential issues that arise with AI—be it ethical concerns, use by malicious actors, biases in decision-making, or privacy intrusions—are not merely obstacles but opportunities for innovation. It is within the realm of possibility, and indeed, probability, that AI will play a crucial role in solving the problems it creates. The idea that there would be no incentive to address and solve these problems is to underestimate the fundamental drivers of technological progress.
Market forces, fueled by the demand for better, safer, and more efficient solutions, are powerful catalysts for positive change. When a problem is worth fixing, it invariably attracts the attention of innovators, researchers, and entrepreneurs eager to solve it. This dynamic has driven progress throughout history, and AI is poised to benefit from this problem-solving cycle.
Thus, rather than viewing AI's unknowns as sources of fear, we should see them as sparks of opportunity. By tackling the challenges posed by AI, we will harness its full potential to benefit humanity. By fostering an ecosystem that encourages exploration, innovation, and problem-solving, we can ensure that AI serves as a force for good, solving problems as profound as those it might create. This is the optimism we must hold onto—a belief in our collective ability to shape AI into a tool that addresses its own challenges and elevates our capacity to solve some of society's most pressing issues.
An AI Future
The reality is that the question isn't whether AI will lead to unforeseen challenges -- it undoubtedly will, as has every major technological leap in history. The real issue is whether we let fear dictate our path and confine us to a standstill, or embrace AI's potential to address current and future challenges.
The approach to solving potential AI-related problems with stringent regulations and a slowdown in innovation is akin to cutting off the nose to spite the face. It's a strategy that risks stagnating the U.S. in a global race where other nations will undoubtedly continue their AI advancements. This perspective dangerously ignores that AI, much like the printing press of the past, has the power to democratize information, empower individuals, and dismantle outdated power structures.
The way forward is not less AI but more of it, more innovation, optimism, and curiosity for the remarkable technological breakthroughs that will come. We must recognize that the solution to AI-induced challenges lies not in retreating but in advancing our capabilities to innovate and adapt.
AI represents a frontier of limitless possibilities. If wielded with foresight and responsibility, it's a tool that can help solve some of the most pressing issues we face today. There are certainly challenges ahead, but I trust that with problems come solutions. Let's keep the AI Decels from steering us away from this path with their doomsday predictions. Instead, let's embrace AI with the cautious optimism it deserves, forging a future where technology and humanity advance to heights we can't imagine.
-
@ 3bf0c63f:aefa459d
2024-06-12 15:26:56
How to do curation and businesses on Nostr
Suppose you want to start a Nostr business.
You might be tempted to make a closed platform that reuses Nostr identities and grabs (some) content from the external Nostr network, only to imprison it inside your thing -- and then you're going to run an amazing AI-powered algorithm on that content and "surface" only the best stuff and people will flock to your app.
This will be especially good if you're going after one of the many unexplored niches of Nostr, in which reading immediately from people you know doesn't work because you generally want to discover new things from the outer world, such as:
- food recipe sharing;
- sharing of long articles about varying topics;
- markets for used goods;
- freelancer work and job offers;
- specific in-game lobbies and matchmaking;
- directories of accredited professionals;
- sharing of original music, drawings and other artistic creations;
- restaurant recommendations;
- and so on.
But that is not the correct approach and damages the freedom and interoperability of Nostr, posing a centralization threat to the protocol. Even if it "works" and your business is incredibly successful it will just enshrine you as the head of a platform that controls users and thus is prone to all the bad things that happen to all these platforms. Your company will start to display ads and shape the public discourse, you'll need a big legal team, the FBI will talk to you, advertisers will play a big role and so on.
If you are interested in Nostr today that must be because you appreciate the fact that it is not owned by any companies, so it's safe to assume you don't want to be that company that owns it. So what should you do instead? Here's an idea in two steps:
- Write a Nostr client tailored to the niche you want to cover

If it's a music-sharing thing, then the client will have a way to play the audio and so on; if it's restaurant sharing, it will have maps with the locations of the restaurants or whatever -- you get the idea. Hopefully there will be a NIP or a NUD specifying how to create and interact with events relating to this niche, or you will write one or contribute to its creation, because without interoperability none of this matters much.

The client should work independently of any special backend requirements and ideally be open-source. It should have a way for users to configure which relays they want to connect to for "global" content -- i.e., they might want to connect to `wss://nostr.chrysalisrecords.com/` to see only the latest music releases accredited by that label, or to `wss://nostr.indiemusic.com/` to get music from independent producers in that community.

- Run a relay that does all the magic
This is where your value-adding capabilities come into play: if you have that magic sauce you should be able to apply it here. Your service, let's call it `wss://magicsaucemusic.com/`, will charge people, or do some KYM (know your music) validation, or use some very advanced AI sorcery to filter out the spam and the garbage and display the best content to your users, who will request the global feed from it (`["REQ", "_", {}]`) -- and this will cause people to want to publish to your relay while others will want to read from it.

You set your relay as the default option in the client and let things happen. Your relay is like your "website" and people are free to connect to it or not. You don't own the network, you're just competing against other websites on a leveled playing field, so you're not responsible for it. Users get seamless browsing across multiple websites, unified identities, a unified interface (that could be different in a different client) and social interaction capabilities that work the same way for all, and they do not depend on you; therefore they're more likely to trust you.
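For the curious, here is a minimal sketch of what that global-feed request looks like on the wire, using Python with the `websockets` package (the relay URL is the hypothetical one from above):

```python
# Open a websocket to the curation relay and ask for its "global" feed with
# an empty filter -- exactly the ["REQ", "_", {}] message mentioned above.
import asyncio
import json
import websockets  # pip install websockets

async def global_feed(relay_url: str):
    async with websockets.connect(relay_url) as ws:
        await ws.send(json.dumps(["REQ", "_", {}]))
        async for raw in ws:
            msg = json.loads(raw)
            if msg[0] == "EVENT":   # ["EVENT", <sub-id>, <event>]
                print(msg[2]["pubkey"][:8], msg[2].get("content", "")[:80])
            elif msg[0] == "EOSE":  # end of stored events
                break

asyncio.run(global_feed("wss://magicsaucemusic.com/"))
```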
Does this still centralize the network? Not really: this is a simple and easy way to go about the matter, and it scales well in all aspects.
Besides allowing users to connect to specific relays to get a feed of curated content, such clients should also do all kinds of "social" activities (i.e. following, commenting etc.), if they choose to do that, using the outbox model -- i.e. if I find a musician I like under `wss://magicsaucemusic.com` and I decide to follow them, I should keep getting updates from them even if they get banned from that relay and start publishing on `wss://nos.lol` or `wss://relay.damus.io` or whatever relay that doesn't even know what music is.

The hardcoded defaults and manual typing of relay URLs can be annoying. But I think it works well at the current stage of Nostr development. Soon, though, we can create events that recommend other relays or share relay lists specific to each kind of activity, so users can get in-app suggestions of relays their friends are using to get their music from, and so on. That kind of stuff can go a long way.
-
@ 3c984938:2ec11289
2024-06-09 14:40:55
I'm having some pain in my heart about the U.S. elections.
Ever since Obama campaigned for office, an increasing number of young voters have come out of the woodwork. Things have not improved. They've actively told you that "your vote matters." I believe this to be a lie unless any citizen can demand, at the gate of the White House, to be allowed to hold and point a gun at the president's head. (Relax, this is hyperbole.)
Why so dramatic? Well, what does the president do? He signs bills, commands the military, and nominates new Fed chairmen, ambassadors, Supreme Court justices, and senior officials, all while traveling in luxury planes and living in a white palace for four years.
They promise Every TIME to protect citizens' rights when they take the oath of office.

...They've broken this promise several times under so-called "emergency crises."

The purpose of a president today, it seems, is basically to hire armed thugs to keep the citizens in check and make sure you "voluntarily continue to be a slave" to the system -- hence the IRS. The corruption extends from the cop to the judge and even to the politicians. The politicians get paid by lobbyists to create bills in Congress for the president to sign. There's no right answer when money is involved with politicians. It is the same whether you vote Obama, Biden, Trump, or Haley. They will wield the pen to serve themselves and say it will benefit the country.
In the first 100 years of the presidency, the government wasn't even a big deal. It didn't interfere with your life as much as it does today.
^^ You hold the power in your hands, don't let them take it. Don't believe me? Try to get a loan from a bank without a signature. Your signature is as good as gold (if not better) and is an original trademark.
Just Don't Vote. End the Fed. Opt out.
^^ I choose to form my own path, even if it means leaving everything I knew prior. It doesn't have to be a spiritual thing. Some have called me religious because of this. We're all capable of greatness and having humanity.
✨Don't have a machine heart with a machine mind. Instead, choose to have a heart like the cowardly lion from the "Wizard Of Oz."
There's no such thing as a good president or politicians.
If there were, they would have issued interest-free Federal Reserve Notes. Lincoln and Kennedy tried to do this; they got shot.
There's still a banner of America there, but it's so far gone that I cannot even recognize it. Instead, I only see a bunch of 🏳🌈 pride flags.

✨Patrick Henry got it wrong when he delivered his speech, "Give me liberty or give me death." Liberty and freedom are two completely different things.
Straight from Merriam-Webster. Choose: right or left?

No control; to be 100% without restrictions -- free.

✨I disagree with the example sentence given, because you cannot advocate for human freedom and own slaves; it's contradictory. Yet that was common in the founding days.
I can understand that many may disagree with me, and you might be thinking, "This time will be different." I respectfully disagree, and the proxy wars are proof. Learn the importance of Bitcoin; every Satoshi is a step away from corruption.

✨What does it look like to pull back the curtain, as in the "Wizard of Oz"?

Have you watched the video below showing what 30 trillion dollars in debt looks like visually? Even I was blown away. https://video.nostr.build/d58c5e1afba6d7a905a39407f5e695a4eb4a88ae692817a36ecfa6ca1b62ea15.mp4
I say this with love. Hear my plea?
Normally, I don't write about anything political. It just feels like a losing game. My energy feels better used learning new things, writing, and creating -- even a blog post as simple as this. Stack SATs, and stay humble.
<3 Onigirl
-
@ 3bf0c63f:aefa459d
2024-05-24 12:31:40
About Nostr, email and subscriptions
I check my emails like once or twice a week, always when I am looking for something specific in there.
Then I go there and I see a bunch of other stuff I had no idea I was missing. Even many things I wish I had seen before actually. And sometimes people just expect and assume I would have checked emails instantly as they arrived.
It's so weird because I'm not making a point, I just don't remember to open the damn "gmail.com" URL.
I remember some people were making a Nostr service a while ago that sent a DM to people with Nostr articles inside -- or some other form of "subscription service on Nostr". It makes no sense at all.

Pulling DMs from relays is exactly the same process as pulling normal public events (actually slightly more convoluted), so why would a service assume that "sending a DM" was more likely to reach the target subscriber, when the target had explicitly subscribed to that topic or writer?
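To make the claim concrete, here is a hedged illustration (the pubkeys are placeholders): to a relay, a DM subscription is just another `REQ` filter, no more deliverable than a filter for public notes.

```python
import json

# kind 1 = public short note; kind 4 = NIP-04 encrypted DM addressed to me
req_public_notes = ["REQ", "sub-notes", {"kinds": [1], "authors": ["<writer-pubkey>"]}]
req_incoming_dms = ["REQ", "sub-dms", {"kinds": [4], "#p": ["<my-pubkey>"]}]

# both travel over the same websocket to the same relays -- neither is more
# likely than the other to be "seen" by the subscriber
print(json.dumps(req_public_notes))
print(json.dumps(req_incoming_dms))
```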
Maybe due to how some specific clients work that is true, but fundamentally it is a very broken assumption that comes from some fantastic past era in which emails were 100% always seen and there was no way for anyone to subscribe to someone else's posts.
Building around such broken assumptions is the wrong approach. Instead we should be building new flows for subscribing to specific content from specific Nostr-native sources (creators directly or manual or automated curation providers, communities, relays etc), which is essentially what most clients are already doing anyway, but specifically Coracle's new custom feeds come to mind now.
This also reminds me of the interviewer asking the Farcaster creator if Farcaster made "email addresses available to content creators" completely ignoring all the cryptography and nature of the protocol (Farcaster is shit, but at least they tried, and in this example you could imagine the interviewer asking the same thing about Nostr).
I imagine that if the interviewer had asked these people who were working (or suggesting) the Nostr DM subscription flow they would have answered: "no, you don't get their email addresses, but you can send them uncensorable DMs!" -- and that, again, is getting everything backwards.
-
@ 3bf0c63f:aefa459d
2024-05-21 12:38:08
Bitcoin transactions explained
A transaction is a piece of data that takes inputs and produces outputs. Forget about the blockchain thing, Bitcoin is actually just a big tree of transactions. The blockchain is just a way to keep transactions ordered.
Imagine you have 10 satoshis. That means you have them in an unspent transaction output (UTXO). You want to spend them, so you create a transaction. The transaction should reference unspent outputs as its inputs. Every transaction has an immutable id, so you use that id plus the index of the output (because transactions can have multiple outputs). Then you specify a script that unlocks that transaction and related signatures, then you specify outputs along with a script that locks these outputs.
As you can see, there's this lock/unlocking thing and there are inputs and outputs. Inputs must be unlocked by fulfilling the conditions specified by the person who created the transaction they're in. And outputs must be locked so anyone wanting to spend those outputs will need to unlock them.
For most of the cases locking and unlocking means specifying a public key whose controller (the person who has the corresponding private key) will be able to spend. Other fancy things are possible too, but we can ignore them for now.
Back to the 10 satoshis you want to spend. Since you've successfully referenced 10 satoshis and unlocked them, now you can specify the outputs (this is all done in a single step). You can specify one output of 10 satoshis, two of 5, one of 3 and one of 7, three of 3 and so on. The sum of outputs can't be more than 10. And if the sum of outputs is less than 10 the difference goes to fees. In the first days of Bitcoin you didn't need any fees, but now you do, otherwise your transaction won't be included in any block.
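To make that concrete, here is a minimal sketch in Python (not real Bitcoin serialization; the txid and scripts are placeholders) of spending one 10-satoshi UTXO into two outputs, with the remainder going to fees:

```python
from dataclasses import dataclass

@dataclass
class TxInput:
    txid: str           # immutable id of the transaction holding the UTXO
    vout: int           # index of the output inside that transaction
    unlock_script: str  # fulfills the conditions that locked the output

@dataclass
class TxOutput:
    amount: int         # satoshis
    lock_script: str    # conditions the future spender must fulfill

inputs = [TxInput("<txid-of-the-10-sat-output>", 0, "<signature> <pubkey>")]
outputs = [
    TxOutput(7, "OP_DUP OP_HASH160 <recipient-hash> OP_EQUALVERIFY OP_CHECKSIG"),
    TxOutput(2, "OP_DUP OP_HASH160 <change-hash> OP_EQUALVERIFY OP_CHECKSIG"),
]

fee = 10 - sum(o.amount for o in outputs)  # the 1 satoshi left over goes to the miner
assert fee >= 0, "the sum of outputs can't be more than the inputs"
```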
If you're still interested in transactions, maybe you could take a look at this small chapter of that Andreas Antonopoulos book.
If you hate Andreas Antonopoulos because he is a communist shitcoiner or don't want to read more than half a page, go here: https://en.bitcoin.it/wiki/Coin_analogy
-
@ 256a7941:b828ba8d
2024-05-19 18:37:40
ALMOST AS IF HAARP WAS USED TO TAKE DOWN IRANIAN PRESIDENT'S HELICOPTER
-
@ b60c3e76:c9d0f46e
2024-05-15 10:08:47
KRIS guarantees that all segments of society receive the same treatment from hospitals, in both medical and non-medical services.

To improve the quality of health services for the public, the government has just issued Presidential Regulation (Perpres) No. 59 of 2024 on Health Insurance. Through that regulation, President Joko Widodo (Jokowi) has abolished the distinction between service classes 1, 2, and 3 in the Social Security Administration Agency, or BPJS Kesehatan.

The class-based service is replaced by KRIS (Kelas Rawat Inap Standar, the standard inpatient class). Following the issuance of Perpres 59/2024 on the Third Amendment to Perpres 82/2018 on Health Insurance, President Joko Widodo has ordered all hospitals cooperating with BPJS Kesehatan to implement it.

The new policy takes effect as of May 8, 2024, and must be implemented by June 30, 2025 at the latest. During that period, hospitals may provide some or all of their inpatient services under KRIS, according to each hospital's capabilities.

So what distinguishes inpatient care under Perpres 59/2024 from the old arrangement? Previously, BPJS Kesehatan inpatient services were divided into classes 1, 2, and 3. Under the new regulation, services to the public are no longer differentiated.

The inpatient service regulated in the Perpres -- known as KRIS -- becomes the new system used for BPJS Kesehatan inpatient care in hospitals. With KRIS, all segments of society will receive the same treatment from hospitals, in both medical and non-medical services.

With the issuance of Perpres 59/2024, BPJS Kesehatan premium rates will also change. However, the regulation does not yet specify the new premium amounts in detail. According to the plan, the new BPJS Kesehatan premiums will be set by July 1, 2025.

"The determination of benefits, tariffs, and premiums as referred to shall be set no later than July 1, 2025," the regulation states, as quoted on Monday (May 13, 2024).

This means that current BPJS Kesehatan premiums remain the same as before, i.e., according to the chosen class. The Perpres nevertheless remains in force while the follow-up regulations are awaited.

Hospital Readiness

Regarding the new policy of class-free health services, the Ministry of Health (Kemenkes) affirms that the majority of hospitals in Indonesia are ready to run KRIS services for BPJS Kesehatan patients.

That readiness was conveyed by the Ministry's Director General of Health Services, Azhar Jaya. "A hospital readiness survey on KRIS has been conducted at 2,988 hospitals, and 2,233 hospitals have answered that they are ready to meet the 12 criteria," said Azhar.

For context, KRIS replaces BPJS Kesehatan's Class 1, 2, and 3 services and aims to provide health care equally, regardless of the premium paid.

Under KRIS, hospitals must gradually prepare facilities and infrastructure in accordance with the 12 criteria for the standard inpatient class. What are those 12 KRIS criteria?

According to Article 46A of Perpres 59/2024, the criteria for KRIS inpatient facilities and services include building components that must not have a high level of porosity, along with air ventilation and complete bedding.

The same applies to room lighting. The Perpres stipulates that artificial room lighting follow a standard of 250 lux for general illumination and 50 lux for sleep lighting, with a room temperature of 20-26 degrees Celsius.

In addition, inpatient services under the regulation require facilities that divide wards by patient gender, child or adult, and infectious or non-infectious disease.

Other criteria oblige providers to consider ward density and bed quality, provide curtains or partitions between beds, offer in-ward bathrooms that meet accessibility standards, and supply oxygen outlets.

Furthermore, each bed must be equipped with two power sockets and a nurse call, plus a bedside table. Ward density is capped at four beds, with a minimum distance of 1.5 meters between bed edges.

Curtains/partitions must run on rails embedded in, or hanging from, the ceiling. Wards must have in-ward bathrooms that comply with accessibility standards, along with oxygen outlets.

Azhar guarantees that Kemenkes will carry this out in accordance with its duties and functions. "Of course we will work together with BPJS Kesehatan on implementation and supervision in the field," said Azhar.

Regarding the health insurance Perpres, BPJS Kesehatan President Director Ghufron Mukti considers the regulation to be oriented toward standardizing inpatient classes around the 12 criteria. "With a standard inpatient class based on 12 criteria for BPJS participants, then, as the doctor's oath requires, medical care may not be differentiated on the basis of ethnicity, religion, social status, or differences in premiums," he said.

If a participant wishes to be treated in a higher class, Ghufron said, that is permitted as long as it is driven by non-medical considerations. Article 51 of the Health Insurance Perpres regulates the provisions for upgrading the class of care.

Under that article, upgrading is done by taking out supplementary health insurance or by paying the difference between the cost covered by BPJS Kesehatan and the cost incurred by the service upgrade.

The difference between the cost covered by BPJS Kesehatan and the cost of the service may be paid by the participant concerned, their employer, or supplementary health insurance.

Ghufron Mukti also urged hospital managers not to reduce the number of patient beds in their efforts to meet the KRIS criteria. "My message is: do not reduce access by cutting the number of beds. Maintain the number of beds and meet the requirements of the 12 criteria," Ghufron stressed.

Author: Firman Hidranto. Editor: Ratna Nuraini/Elvira Inda Sari. Source: Indonesia.go.id
-
@ 3c984938:2ec11289
2024-05-09 04:43:15
It's been a journey from the Publishing Forest of Nostr to the open sea of web3. I've come across a beautiful chain of islands and thought: why not take a break and explore this place? If I'm searching for devs and FOSS, I should search every nook and cranny inside the realm of Nostr. It is quite vast for little old me. I'm just a little hamster, and I don't speak in code or binary numbers, zeros and ones.
After being at sea for a while, my heart raced with excitement at what I could find. It seems I wasn't alone; there were others here like me! Let's help spread the message to others about this uncharted realm. See, look at the other sailboats -- aren't they pretty? Thanks to some generous donations of SATs, I was able to afford the docking fee.
Ever feel like everyone was going to a party, and you were supposed to dress up, but you missed the memo? Or a comic-con? Well, I felt completely underdressed, and that's an understatement. It turns out there are some knights around here. Take a peek!

A black cat with a knight passed by very quickly. He was moving too fast for me to track. Where was he going? Then I spotted a group of knights heading in the same direction, so I tagged along. The vibes from these guys were impossible to resist. They were just happy-go-lucky. 🥰They were heading to a tavern on a cliff off the island.

Ehh? A tavern? Slightly confused, I wondered: whatever could these knights be doing here? I guess when they're done with their rounds they come here to blow off steam. Things are looking curiouser and curiouser. But the black cat from earlier was here with its rider, who was dismounting. So you can only guess where I'm going.

The atmosphere in this pub was lively and energetic. So many knights spoke among themselves. A group here, another there, but there was one that caught my eye. I went up to a group at a table, whose height towered well above me even when seated. Taking a deep breath, I asked, "Who manages this place?" They unanimously pointed to one waiting for ale at the bar. What was he doing? Watching others talk? How peculiar.
So I went up to him! And introduced myself.
"Hello I'm Onigirl"
"Hello Onigirl, Welcome to Gossip"
"Gossip, what is Gossip?" scratching my head and whiskers.
What is Gossip? Gossip is FOSS and a great client for privacy-minded nostriches. It avoids browser tech, bypassing scripting and layout layers such as JavaScript☕, HTML parsing and rendering, and CSS (except for HTTP GET and WebSockets), and uses OpenGL-style rendering instead. Nostriches who wish to remain anonymous can use Gossip over Tor. Mike recommends using QubesOS, Whonix, or Tails. [FYI: Gossip does not natively support a Tor SOCKS5 proxy.] Most helpful for spilling the beans if you're a journalist.

On top of using your nsec or your encryption key, Gossip adds another layer of security over your account with a password login. There's nothing wrong with using the browser extensions (such as nos2x or Flamingo), which make it super easy to log in to Nostr-enabled websites and apps, but they do expose you to browser vulnerabilities.
Mike points out that
"people have already had their private key stolen from other nostr clients,"
so it is a concern if you value your account. I most certainly care for mine.
Gossip's UI is simple and clean, revolving around NIP-65, also called the "Outbox model." As posted on GitHub,
"This NIP allows Clients to connect directly with the most up-to-date relay set from each individual user, eliminating the need of broadcasting events to popular relays."
This avoids clients that track only a specific set of relays, which can congest those relays when you publish your note. It also resists censorship: using Gossip, you can publish notes to alternative relays that have not censored you and still reach the same followers.

👉The easiest way to put it: it reduces redundant publishing to popular or centralized relays while keeping your content reachable by your followers.
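For the curious, here is a minimal sketch of the NIP-65 relay-list event (kind 10002) that outbox-model clients like Gossip read to learn where a user publishes; the pubkey and relay URLs are placeholders:

```python
# kind 10002 carries only "r" tags; a missing marker means read and write
relay_list_event = {
    "kind": 10002,
    "pubkey": "<author-pubkey-hex>",
    "content": "",
    "tags": [
        ["r", "wss://relay.example.com"],            # read + write
        ["r", "wss://outbox.example.com", "write"],  # where the user publishes
        ["r", "wss://inbox.example.com", "read"],    # where the user reads mentions
    ],
}
```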
Cool! What an awesome client -- I mean tavern! What else does this knight do? He reaches for something in his pocket. What is it? Pocket is a database for storing and retrieving Nostr events, which Mike has written in Rust with a few extra twists inspired by Will's nostrdb. Still in development, but it'll be another tool for you, dear user! 💖💕💚
Onigirl is proud to present this knights to the community and honor them with kisu. 💋💋💋 Show some 💖💘💓🧡💙💚
👉💋💋Will - jb55 Lord of apples 💋 @npub1xtscya34g58tk0z605fvr788k263gsu6cy9x0mhnm87echrgufzsevkk5s
👉💋💋 Mike Knight - Lord of Security 💋 @npub1acg6thl5psv62405rljzkj8spesceyfz2c32udakc2ak0dmvfeyse9p35c
Knights spend a lot of time behind the screen coding for the betterment of humanity. It is a tough job! Let's appreciate these knights and relay operators who support this amazing realm of Nostr! FOSS for all!

This article was prompted by the need for privacy and security of your data. They're different things, not to be confused.
Recently, Edward Snowden warned Bitcoin devs about the need for privacy. Quote:
“I've been warning Bitcoin developers for ten years that privacy needs to be provided for at the protocol level. This is the final warning. The clock is ticking.”
Snowden's comments come after heavy enforcement actions against Samourai Wallet, Roger Ver, and Binance's CZ, and now the closure of Wasabi Wallet. Additionally, according to CryptoBriefing, Trezor is ending its CoinJoin integration as well. Many are concerned over the new definition of a money transmitter, which includes even those who don't touch the funds.
Help your favorite hamster
^^Me drowning in notes on your feed. I can only eat so many notes to find you.
👉If there are any XMPP fans on here, I'm open to the idea of opening a public channel so you could follow me there in a forum-like style. My server of choice would likely be a German server. 😀 You would receive my articles njump.me-style or website-like. GrapheneOS users can download the Cheogram app from the F-Droid store for free to get access. Apple and Android users have to pay to download this app; alternatives are ntalk or Conversations. If it interests the community, just FYI. Please comment or DM.
👉If you enjoyed this content, please consider reposting/sharing as my content is easily drowned by notes on your feed. You could also join my community under Children_Zone where I post my content.
An alternative is following #onigirl. Just FYI, this feature is currently a little buggy.
Follow along as I search for tools and awesome devs to help you, dear user, live a decentralized life while I explore the realm of Nostr.
Thank you Fren
-
@ 20986fb8:cdac21b3
2024-04-30 12:56:52
Improving the Availability and Reliability of the Relay Network
Wendy Ding
YakiHonne is committed to creating censorship-resistant decentralized media. A sufficiently decentralized and immutable storage layer is key to achieving this goal. The relay network based on the Nostr protocol provides an excellent censorship-resistant storage solution. Relays serve as intermediaries for message storage and broadcasting, allowing users to self-host relays and to freely choose which relays to broadcast information to. If a relay refuses service or shuts down, other relays can continue propagating the information (fiatjaf, 2019). This mechanism turns shadow banning into a game of "whack-a-mole," making it nearly impossible to completely block users through a specific data source, thereby providing a space for free speech for many controversial topics and creators.

Despite providing a simple and effective architecture for social media censorship resistance, existing relay networks face two major challenges to scaling and sustaining themselves. First, the relay network lacks incentives, without which its stability and availability will suffer. A censorship-resistant network relies on numerous distributed and available relays, and on wider usage of relays, to ensure the free flow of information (Rabble, 2024). Second, although the core function of relays is to store and distribute information in a decentralized manner, they cannot guarantee the immutability of that information. Relay nodes are therefore able to manipulate or delete information.
In this article, we will focus on how to solve these two issues by introducing economic incentives and an attestation mechanism, thereby enhancing the availability and reliability of the relay network and ensuring that it provides solid support for decentralized media.
Nostr Protocol is Decentralized
The design of Nostr involves the separation of user accounts, relays, and clients from each other, free from any entity's control and censorship. Users can host relays, and content can be stored and retrieved across multiple relays. Even if certain relays refuse service or shut down, other relays can still store and propagate information. This differs from Fediverse applications such as Mastodon and Bluesky, and is certainly different from "Web3" social media protocols like Farcaster.
- In Mastodon, user accounts are tied to servers controlled by administrators, so instance owners can ban users and have the authority to block other instances, implementing censorship (Rozenshtein, 2023).
- Although Bluesky promotes itself as an open and decentralized network, this is misleading (Fiatjaf: Bluesky, 2023): Bluesky directly controls the atproto protocol, which allows Bluesky to change the protocol at any time; Bluesky's identity system relies on a central server to maintain and authenticate global IDs, allowing Bluesky to control and potentially ban any user; and even if users can host their own content, all content must be distributed through Bluesky's central server. Moreover, Bluesky's design does not encourage or support effective interoperability with other clients. This means that if users are dissatisfied with Bluesky, their options are very limited.
- Farcaster relies on large Hubs to store all user data, and these Hubs will grow increasingly larger. These Hubs' power to censor and disseminate data cannot be underestimated. As the network expands, storage demands and costs surge dramatically. It is estimated that if Farcaster's daily active users grow by 5% per week, the cost of running a Hub will reach $3,500 per year by 2024, and will soar to $6.9 million per year by 2027 (Varun, 2022). The high operating costs mean that only a few companies can manage Hubs, leading to a decrease in the number of Hubs and an increased risk of network centralization. Additionally, Hub operators may collude to lower the priority of certain content or to censor it (Varun, 2022). In contrast, Nostr encourages a mix of large hubs and smaller relays built for specific purposes, as shown in Figure 1. These small relays can be established by large publishers, small organizations, or hackers, maintaining the network's decentralization and openness (Hodlbod: Outbox, 2024).
Figure 1. Farcaster Hub vs. Nostr Relay
This is indeed the case. Gareth Tyson et al., using a dataset from July 1, 2023, to December 31, 2023, comprising 17.8 million posts, 1.5 million pubkeys, and 712 relays, analyzed the decentralization of Nostr and found that the distribution of posts and users on relays and relay hosting exhibited a high degree of decentralization. This demonstrates that the technical architecture of the Nostr protocol is superior to that of all existing decentralized media protocols.
- Posts and users are not highly concentrated on individual relays but are widely distributed, as shown in Figure 2. 93% of posts can be found across multiple relays, with 178 relays, or 25% of all relays, each hosting more than 5% of the posts. According to user-count statistics, even if the top 50 relays were shut down, 90% of the content would still be accessible. Similarly, based on post-count statistics, shutting down the top 30 relays would still leave over 90% of the content accessible. Even removing the top 50 relays would still leave 71% of the content accessible, as shown in Figure 3.1.
Figure 2. The percentage of relays, posts, and users in the top 15 regions and ASes, ranked by the number of relays. Source: Gareth Tyson et al. (2024), "Exploring the Nostr Ecosystem: A Study of Decentralization and Resilience," arXiv preprint arXiv:2402.05709.
- Relay hosting is decentralized across regions and autonomous systems (ASes). Relays are distributed across 50 countries and 151 ASes, as shown in Figure 4. Surprisingly, no single country or AS hosts over 25% of relays. Over 80% of posts remain available after removing the top 10 ASes, as shown in Figure 3.2. Taking Mastodon as an example, post availability drops to less than 10% after removing the top 10 ASes hosting instances (Raman et al., 2019). This resilience is mainly due to the more even distribution of relays across different ASes, making the network more tolerant of failures in any individual AS.
Figure 3.1. Top X Relays Removed; Figure 3.2. Top X ASes Removed. Source: See Figure 2.

Figure 4. The distribution of relay counts across different countries. Data from Nostr.Watch.
Analysis of Relay Availability
Relays are the soul of the Nostr protocol's decentralization. To build a truly usable censorship-resistant relay network, two conditions must be met: relay nodes must be sufficiently distributed and available to ensure the free storage and dissemination of information; and even small relay nodes should be widely discovered and utilized.
As of April 23, 2024, there are only 639 relays online globally, a two-thirds reduction from the same period last year, predominantly distributed in North America and Europe, which together host 80% of these relays. Additionally, due to differences in network conditions, relay performance varies significantly across regions. For instance, tests from Singapore have shown notable differences in response times among relays in Asia, North America, and Europe, as shown in Figure 5. Moreover, a pronounced head effect is evident, with the top relay hosting 73% of the posts. Since these posts are replicated across multiple relays, Nostr nonetheless remains highly decentralized (fiatjaf: Nostr, 2024). However, this concentration of usage does not favor the wide discovery and use of smaller relay nodes, or the visibility of their users, and it reduces the incentive to build small relay nodes, especially in an ecosystem lacking incentives.
Figure 5. Relay availability testing. Data from Nostr.Watch.
- The reduction in the number of relays and their instability are primarily due to the lack of effective economic incentives (Shinobi: Nostr Scale, 2023). Within the Nostr ecosystem, because clients often lack stable income or financial support, it is difficult for them to provide effective incentives to relays. Most relays rely on personal interest or restrictive paid models to maintain operations. These paid models limit specific users' write or even read access, contradicting the original anti-censorship intent and weakening economic interactions between clients and relays. Currently, 95% of relays struggle to cover operational costs, and 20% have experienced significant downtime due to lack of financial support (Gareth Tyson et al., 2024).
- The discovery and follow mechanisms of relays cause those with higher usage to be more easily discovered by users. These relays are often operated by well-known clients or developers, thereby attracting even more users. Ensuring that more small relays are widely discovered and used is a key factor in maintaining the censorship resistance and activity of the relay network. The Nostr ecosystem is working to improve relay discovery and follow mechanisms through the Gossip model, the Outbox model, and Blastr. The key challenge for these models is optimizing discoverability and coverage among relay users without over-replicating and redundantly retrieving posts, while ensuring broad content dissemination (Hodlbod: Outbox, 2024); a sketch of this relay-selection problem appears after this list. However, achieving this goal requires better collaboration and consensus among Nostr developers. Currently, due to the lack of sufficient incentives, Nostr developers focus more on their own clients' design ideas, neglecting efficiency and compatibility with the variety of existing user types, and even causing confusion in the development of other clients.
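As a hedged illustration of that coverage problem (a greedy set-cover sketch, not the actual algorithm of Gossip, Coracle, or any client; the pubkeys and relay URLs are placeholders), the task is to pick a small set of relays that still covers every followed author, given each author's kind:10002 write relays:

```python
def pick_relays(write_relays: dict[str, set[str]], max_relays: int = 8) -> dict[str, set[str]]:
    """write_relays maps author pubkey -> write relays from their kind:10002 list."""
    uncovered = set(write_relays)
    plan: dict[str, set[str]] = {}
    all_relays = {r for relays in write_relays.values() for r in relays}
    while uncovered and len(plan) < max_relays:
        # greedily pick the relay covering the most not-yet-covered authors
        best = max(all_relays, key=lambda r: sum(1 for a in uncovered if r in write_relays[a]))
        covered = {a for a in uncovered if best in write_relays[a]}
        if not covered:
            break
        plan[best] = covered
        uncovered -= covered
    return plan

follows = {
    "alice": {"wss://r1.example.com", "wss://r2.example.com"},
    "bob":   {"wss://r2.example.com"},
    "carol": {"wss://r3.example.com"},
}
print(pick_relays(follows))  # two relays suffice instead of three
```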
Potential Incentive Measures for Relays
The key to resolving the availability issues of the relay network lies in clearly identifying who will continuously pay for its operational costs. Only when the relay network can be profitable, or at least cover its operational costs, can it maintain long-term scalability and prosperity. The primary cost of operating a relay comes from server storage expenses. To ensure content availability, content is published and retrieved repeatedly across multiple relays (Shinobi: Relay, 2023), which further increases traffic consumption and operational costs. However, this is crucial for decentralization and for ensuring data reliability. Currently, the main sources of income for relays are donations and paid posting, but 95% of relays struggle to sustain their operational costs through donations. Therefore, this section explores potential solutions to cover the operational costs of relays.
1. Client Pays for Storage Costs
Clients are encouraged to create paid products or to cover relays' storage costs out of their own financial budgets. This approach helps explore diverse ways of monetizing decentralized media and facilitates the formation of an economic consensus between relays and their clients, thereby establishing incentive mechanisms for relays, as illustrated in Figure 6.
*Figure 6. Incentive mechanisms between clients and the relays
In a previous article, we discussed the two pillars of YakiHonne's decentralized media: decentralized publishing and decentralized review. The former guarantees that content will never be lost; the latter creates a new cost-incentive model to ensure that when content and moderation become permissionless, the platform can still maintain truthfulness and cost-effectiveness. Currently, YakiHonne supports various forms of decentralized publishing, including articles, flash news, curations, videos, and uncensored notes. Content review through uncensored notes further promotes YakiHonne's monetization and decentralization. In YakiHonne, publishing flash news requires a minimum payment of 800 sats, with some of the revenue used to incentivize relay operations and support uncensored notes. For long-form content, relays' storage fees can be paid through subscriptions, advertisements, or even the client's financial budget, as shown in Figure 7.
**Figure 7. Incentive mechanisms between YakiHonne and the relays
2. Direct Income for Relays
In Nostr, relay development focuses more on performance optimization than on new features. Relays could potentially increase revenue by offering specific relay functions and by storing various types of events. Earning income through specific functions requires widespread adoption by clients and relays; otherwise, it risks centralization or implementation failure (Hodlbod: Relay Function, 2023). Earning income by storing different types of events could be a more viable option. While remaining open to various content, relays can use keyword filters during data retrieval to surface specific topics, thus supporting a subscription-based payment model. Additionally, relays and clients can form revenue-sharing agreements on subscription fees, which not only helps increase income but also promotes economic consensus between the two parties.
Attesting content on relays
The redundancy in content storage ensures high availability of content but does not fully guarantee its immutability. In practice, contents can be tampered with or deleted by influencing certain relay nodes. To address this issue, we need to introduce a complementary attestation mechanism to enhance the reliability of the current relay systems.
There are various methods to implement this proof. As the most decentralized social media base-layer protocol, Nostr is well recognized within the Bitcoin community. However, actually using the Bitcoin network to validate information, in a technical sense, has not been discussed yet. If content hosted on the Nostr network is ever to be attested, it should happen on the Bitcoin network.
Given the large amount of content on the Nostr relay network, it is not economically viable to attest every post or every NIP-23 article on the Bitcoin network individually. Data should be compressed and organized efficiently before being attested. A Merkle tree is an efficient and secure data-verification mechanism: the accuracy of the data can be verified using only the Merkle root and the related hash paths, as shown in Figure 8. Therefore, a final attestation over a large amount of content can be a single Merkle-root submission to the Bitcoin network. Once submitted, all content represented under this Merkle root becomes immutable.
**Figure 8. Hashed content on Merkle Tree
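As an illustration, here is a minimal Python sketch of this construction, assuming SHA-256 as the hash and duplicating the last node on levels with an odd count; a production scheme may differ in these details:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a list of leaf hashes upward until a single root remains."""
    if not leaves:
        raise ValueError("cannot build a tree with no leaves")
    level = leaves
    while len(level) > 1:
        if len(level) % 2 == 1:
            level = level + [level[-1]]  # duplicate the last node on odd levels
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# The leaves could be the hashes of serialized Nostr events stored on a relay.
events = [b"event-1", b"event-2", b"event-3"]
root = merkle_root([sha256(e) for e in events])
print(root.hex())  # this single 32-byte value is what would be committed to Bitcoin
```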
When it is necessary to verify that content stored on a relay remains unaltered, the content is first retrieved from the relay and its hash recalculated. Then, using this hash and the related intermediate hashes, verification proceeds up the Merkle tree until it reaches the Merkle root recorded on the Bitcoin blockchain. Finally, by comparing this computed Merkle root with the root hash recorded on the blockchain, the integrity and authenticity of the content are confirmed. The process is depicted in Figure 9.
**Figure 9. Attesting content on relays
Repeating this process over the stored content eventually confirms that all attested content is genuine. As a result, relays provide content availability and redundancy, while the Bitcoin network validates Nostr content in batches.
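A matching sketch of the verification walk described above; the proof format (a list of sibling hashes, each tagged with its side) is an assumption of this example rather than a defined standard:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(content: bytes,
                     proof: list[tuple[bytes, str]],
                     root: bytes) -> bool:
    """Recompute the leaf hash and walk the hash path up to the root."""
    h = sha256(content)  # hash of the content fetched back from the relay
    for sibling, side in proof:
        # Order matters: concatenate in the same order used when building.
        h = sha256(sibling + h) if side == "left" else sha256(h + sibling)
    # Match against the Merkle root anchored on the Bitcoin blockchain.
    return h == root
```

If a relay altered the content, the recomputed leaf hash changes and the walk can no longer arrive at the anchored root.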
Conclusion
The relay network is one of the core infrastructures that YakiHonne uses to build decentralized media. Through data analysis of Nostr's decentralization, we found that Nostr's technical architecture has made it the most decentralized media protocol currently available. However, due to the lack of effective economic incentives, the relay network faces availability challenges. This article explored several potential economic incentive models for relays. Despite many uncertainties and challenges, the importance of sustained incentives for censorship-resistant networks is clear. Additionally, considering the relay network's current lack of immutability, we proposed an attestation mechanism to enhance the reliability of the relay system. Improving relay reliability and availability will help promote the scalability and prosperity of the entire relay ecosystem, providing solid support for building decentralized media.
-
@ 266815e0:6cd408a5
2024-04-24 23:02:21NOTE: this is just a quick technical guide. sorry for the lack of details
Install NodeJS
Download it from the official website https://nodejs.org/en/download
Or use nvm https://github.com/nvm-sh/nvm?tab=readme-ov-file#install--update-script
```bash
wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
nvm install 20
```
Clone example config.yml
```bash
wget https://raw.githubusercontent.com/hzrd149/blossom-server/master/config.example.yml -O config.yml
```
Modify config.yml
```bash
nano config.yml
# or if you're that type of person
vim config.yml
```
Run blossom-server
```bash
npx blossom-server-ts

# or install it locally and run using npm
npm install blossom-server-ts
./node_modules/.bin/blossom-server-ts
```
Now you can open http://localhost:3000 and see your blossom server
And if you set the `dashboard.enabled` option in the `config.yml` you can open http://localhost:3000/admin to see the admin dashboard
-
@ 502ab02a:a2860397
2024-04-24 09:27:28Many times we don't finish our heavy cream (or whipping cream, as we call it at home) and it's about to expire. We can use it to learn the production process behind what's called a dairy product, and we end up with something useful for cooking too: turning the cream into butter, then turning that butter into ghee. Whew!
Let me share something I do for fun as a hobby.
Personally, I'm quite fascinated by butterfat and have enjoyed learning about it for a long time, having been involved with the dairy industry since around 2001. What's interesting is butterfat's nutrient profile, which is excellent: short-chain fatty acids, a well-balanced omega-3:6 ratio, and a mellow, fragrant aroma.
In butter making, the milk is first separated to draw off the cream, which is then churned to split the butterfat from the whey. The cream-separation step yields **cream (milk plus butterfat) and skimmed milk (milk that is low in butterfat -- of course it is, the fat went into the cream)**.
And that skimmed milk comes from the same process used for the low-fat milk they sell, or for making milk powder (skim milk)... But understand this first: not every factory turns these leftovers into products to sell us. Some milk-powder factories set up dedicated production lines and get cream as the by-product instead.
That is, there's a difference between deliberately producing cream for sale and getting skim milk to turn into milk powder or low-fat milk, versus deliberately producing milk powder or low-fat milk for sale and getting cream as the leftover.
The processes are merely similar in that both separate cream from milk, but the machines are configured differently. They're set up around whatever the intended main product is; everything else is just a by-product.
Okay, now back to our topic.
**The butter-making process** is churning cream in a churn until the fat separates from the whey; this is called churning. It has been done by churning since ancient times; only the method has changed, from hand churning to machine churning. The butter's flavor depends on the method: traditionally raw milk was used, or cultured milk (milk inoculated with microbes and fermented before churning) for extra aroma and flavor. These days it may also be pasteurized, since some countries mandate pasteurization.
Today we'll do the easy version described above: just pour whipping cream into a blender jar and let it run. That's all it takes to get butter. Anyone who bakes has probably over-whipped cream until it stiffened into clumps; if you keep beating past that point, that stiff cream eventually becomes butter. Honestly, shaking works too: many people pour it into a bottle and shake. Try it if you like; your arms will get quite the workout, haha. Once we have a lump of fresh butter, we drop it into ice-cold water. The cold makes the fresh butter firm up so we can press it into shapes that are easy to store. The popular shape is a log (like a Vietnamese pork roll, moo yor), since it's easy to slice off what you need.
**Here's a clip I made so you can see how cream becomes butter:** https://youtu.be/bzo7V9n2cxc?si=PsaldIxgKqpiBXgb
Now we have butter to use at home, easy as that. If you have raw milk and worry you won't finish it in time, there's one extra step: pour it into a tall container and refrigerate it for about 18 hours or more, until it starts separating into layers. The cold makes the fat clump together, looking like thick and thin coconut cream. Skim off the thick, clumpy part and use it in place of cream. You can tell by the milk's clarity: the fatty part is on the thick side, and as the fat thins out it turns clear. The clearer part left over? That's low-fat milk, haha. You can still use it for coffee or drink it, no problem at all.
**So what is ghee?** Ghee is clarified butter, extremely popular in India. It's used as a cooking oil, in drinks, and for all the things an oil can do. It has a high smoke point, making it many times harder to burn than butter.
Making ghee is very easy: just set butter over low heat -- very low, because with even slightly too much heat the butter burns instantly. Keep stirring. This gentle simmer is really a process of evaporating the water off, until you start to see clear oil and the layers separate once again. What separates out is what we call the milk solids. In effect, we've broken the butter down into: 1. water, evaporated into the air; 2. firm milk solids, left in the pot; 3. clear butterfat, also left in the pot.
What we actually use is the butterfat. Pour it into a bottle or jar, whatever's convenient. You can keep it in the fridge, or, if you use it often, it will keep outside the fridge for quite a while, because there's nothing left in it to spoil. It's just rather good at soaking up smells from its surroundings.
As for the milk solids, toss them with sugar. Delicious, haha.
Now let's watch the ghee-making clip. In this one I used ordinary store-bought butter, because at the time I was running an experiment to show keto folks not to fuss so much over butter: every butter contains milk solids, and milk solids are carbs. Notably, the butter keto people favor leaves behind even more milk solids than cheaper butter, because milk solids are what make butter smooth on the tongue, while the butterfat is what carries the fragrant aroma. Go have a look:
https://youtu.be/HFvvIjhZ6h0?si=KkqoZFN3Mx1lTTul
Now for a bonus. Remember how I said you must watch for burning when simmering ghee? In baking there's something called brown butter, a deliberate gamble: you simmer the ghee past the sweet spot, but not so far that it burns. You get light-brown clarified butter with a caramel aroma. It's popular in baking, adding many times more fragrance than regular butter, and it's a sweet aroma that makes desserts feel much fancier. It's great in coffee too; I make it often.
#siamstr #pirateketo
-
@ 3c984938:2ec11289
2024-04-16 17:14:58Hello (N)oystrs!
Yes! I'm calling you an (N)oystr!
Why is that? Because you shine, and I'm not just saying that to get more SATs. Ordinary oysters and mussels can produce these beauties! Nothing seriously unique about them; however, with a little time and love, each oyster is capable of creating something truly beautiful. I like believing so, at least, given the fact that you're even reading this article; it makes you an (N)oystr! This isn't published on X (formerly known as Twitter), Facebook, Discord, Telegram, or Instagram, which makes you a rare breed! A pearl indeed! I do have access to those platforms, but why create content on a terrible platform knowing I too could be shut down! Unfortunately, many people still use these platforms. This forces individuals to give up their privacy every day. Meta is leading the charge by forcing users to provide a photo ID for verification in order to use their crappy, obsolete site. If that was not bad enough, imagine you're having some type of disagreement or difference of opinion. Then Bigtech can easily deplatform you. Umm. So no open debate? Just instantly shut-off users. Whatever happened to the right to a fair trial? Nope, just burning you at the stake as if you're a witch or warlock!
How heinous are the perpetrators and financiers of this? Well, that's opening another can of worms for you.
Imagine your voice being taken away, like the little mermaid. Ariel was lucky to have a prince, but the majority of us? The likelihood that I would be carried away by the current of the sea during a sunset with a prince on a sailboat is zero. And I live on an island, so I'm just missing the prince, the sailboat (though I know where I could go to steal one), and red hair. Oh my gosh, now I feel sad.
I do not have the prince, Bob is better! I do not have mermaid fins, or a shell bra. Use coconut shells, it offers more support! But, I still have my voice and a killer sunset to die for!
All of that is possible thanks to the work of developers. These knights fight for Freedom Tech by utilizing FOSS, which helps provide us with a vibrant ecosystem. Unfortunately, I recently learned that they are not all funded. Knights must eat, drink, and have a work space. This space is where they spend most of their sweat equity on an app or software that may or may not pan out. That brilliance is susceptible to fading, as these individuals are not seen but rather stay behind closed doors. Worse, what if these developers lose faith in their project and decide to join forces with Meta! 😖 Does WhatsApp ring a bell?
Without them, I probably wouldn't be able to create this long form article. Let's cheer them on like cheerleaders.. 👉Unfortunately, there's no cheerleader emoji so you'll just have to settle for a dancing lady, n guy. 💃🕺
Semisol said it beautifully, npub12262qa4uhw7u8gdwlgmntqtv7aye8vdcmvszkqwgs0zchel6mz7s6cgrkj
If we want freedom tech to succeed, the tools that make it possible need to be funded: relays like https://nostr.land, media hosts like https://nostr.build, clients like https://damus.io, etc.
With that thought, Onigirl is pleased to announce the launch of a new series with a sole focus on free-market devs and projects.
Knights of Nostr!
I'll happily brief you about their exciting project and how it benefits humanity! Let's Support these Magnificent projects, devs, relays, and builders! Our first runner up!
Oppa Fishcake: Lord of Media Hosting
npub137c5pd8gmhhe0njtsgwjgunc5xjr2vmzvglkgqs5sjeh972gqqxqjak37w
Oppa Fishcake with his noble steed!
Think of this as an introduction to learn and further your experience on Nostr! New developments and applications are constantly happening on Nostr; it's enough to make one's head spin. I may also cover FOSS projects (outside of Nostr), as they need some love as well! Plus, you can think of it as another tool to add to your decentralized life. I will not be doing how-to-Nostr guides. I personally feel there are plenty of great guides already available, which I'm happy to add to my curation collection, easily searchable on Yakihonne.
For email updates you can subscribe to my Paragraph: https://paragraph.xyz/@onigirl
If you like it, send me some 🧡💛💚 hearts💜💗💖 otherwise zap dat⚡⚡🍑🍑peach⚡⚡🍑 ~If not me, then at least to our dearest knight!
Thank you from the bottom of my heart for your time and support (N)oystr! Shine bright like a diamond! Share if you care! FOSS power!
Follow on your favorite Nostr Client for the best viewing experience!
[!NOTE]
I'm using Obsidian + Nostr Writer Plugin; a new way to publish Markdown directly to Nostr. I was a little nervous using this because I was used to doing them in RStudio with R Markdown.
Since this is my first article, I sent it to my account as a draft to test it. It's pretty neat. -
@ 20986fb8:cdac21b3
2024-03-25 11:26:37Bitcoin secures private property, and Nostr ensures freedom of speech, both pioneering technological solutions. Regarded as on par with Bitcoin, Nostr has emerged as a primary decentralized media protocol within the Bitcoin community.
As one of the most popular decentralized media clients in the Nostr ecosystem, YakiHonne has consistently played a pivotal role as both advocate and practitioner in advancing the global Nostr community. Since the inaugural Asia Nostr gathering held at the Hong Kong Festival in April 2023, YakiHonne's "Connecting Nostriches and Bitcoiners" global initiative has spanned over 10 countries, including Hong Kong (April), Miami (May), Berlin (June), Beijing/Malaysia (July), Singapore (August), Nigeria/Spain (September), Istanbul/Bali (October), Tokyo (November), and Malta/Hong Kong (December).
This year, YakiHonne is once again launching the "Connecting Bullish Nostriches and Bitcoiners" global event plan, aiming to explore key topics in Nostr's development, construct a global dissemination pathway for Nostr projects, and unite Nostriches, thus creating an autonomous, diverse, and active global Nostr community. This global event will span 5 continents, 16 countries, and 20+ cities, connecting over 5000 Nostr/Bitcoin buidlers.
🔥 Global Event Plan
Notes: The global event is a daunting task, and YakiHonne has currently only listed a few countries. If you wish to initiate events and establish the Nostr community in your country and city, please contact us immediately. Let's work together to promote the development of a decentralized world!
1. Southeast Asia Stop:
- Route: Bali, Bangkok, Vietnam, Kuala Lumpur, Hong Kong.
- Duration: April 10th to May 10th, one stop per week.
- Featured Stop: Hong Kong, Exclusive Nostr side event during the Bitcoin Asia 2024 Conference.
2. United States Stop:
- Route: Austin, Miami, Atlanta, San Francisco, New York, Nashville.
- Duration: May 21st to July 27th, one stop every two weeks.
- Featured Stop: Nashville, The largest Nostr side event during the Bitcoin 2024 Conference.
3. European Stop:
- Route: Munich, London, Prague, Malta, Amsterdam.
- Duration: August 8th to October 10th, one stop every two weeks.
- Featured Stop: Amsterdam, The largest Nostr side event during the Bitcoin 2024 Conference in Europe.
4. Middle East and North Africa Stop:
- Route: Turkey, Morocco, Nigeria, Riyadh, Abu Dhabi.
- Duration: October 24th to December 10th, one stop every two weeks.
- Featured Stop: Abu Dhabi, The largest Nostr side event during the Bitcoin Amsterdam 2024 Conference in the Middle East and North Africa.
🌐 Event Scale
Covering 5 continents, 16 countries, and 20+ cities; comprising 16 offline meetups with approximately 50 attendees each, and 4 special side events with around 200 participants each, connecting over 5000 buidlers, community members, projects, and investors within the Nostr/Bitcoin ecosystem.
🌟 Event Exposure
Broadcasted to 150 countries and accessible in 20 languages, with each event at every stop deeply engaging with local communities and media resources.
🤝 Join Us
1. Become a Partner
If you wish to increase exposure for your brand or provide support for event resources, please DM us immediately!
Tg:@YakiHonne or Nostr npub1yzvxlwp7wawed5vgefwfmugvumtp8c8t0etk3g8sky4n0ndvyxesnxrf8q
As a partner, you will receive brand exposure in Nostr/Bitcoin offline communities across 16 countries, booth displays, speaking opportunities at 4 special themed events, and exclusive media benefits from YakiHonne, including interviews, newsletters, targeted promotions, and access to local community resources.
2. Become a Volunteer
If you want to participate in a global and exciting event, interact with interesting members of the Nostr community, feel free to join our volunteer team! We need volunteers to participate in planning, organizing, setting agendas, contacting local resources, hosting activities, and spreading the word.
💰 Donation
We welcome donations for this global event. Donated funds will be used for event organization. All donors will receive a specially designed Nostr 2024 Global Event unique serial number badge and will be individually showcased in the donor list.
1 - getAlby Lightning Address: yakihonne@getalby.com
2 - BTC Address and QR code 1E63WaTsdgYnq1A8nwP5D3qdRkJESzKfU6
OR bc1qek6qx0723c64ujqvcaqp96xw3eghkaj3248x2d
3 - directly zap this article or yakihonne profile.
If you are interested in this event and wish to contribute to the development of the Nostr global community, please contact us immediately. Let's work together to build a decentralized world!
About YakiHonne:
YakiHonne is a Nostr-based decentralized content media protocol that supports blogs, flash news, curation, videos, uncensored notes, zaps, and other content types. Join us now and experience the joy of decentralized publishing, review and settlement media networks.
Try YakiHonne.com Now!
Follow us
- Telegram: http://t.me/YakiHonne_Daily_Featured
- Twitter: @YakiHonne
- Nostr pubkey: npub1yzvxlwp7wawed5vgefwfmugvumtp8c8t0etk3g8sky4n0ndvyxesnxrf8q
- 🌟iOS: https://apps.apple.com/mo/app/yakihonne/id6472556189?l=en-GB
- 🌟Android: https://play.google.com/store/apps/details?id=com.yakihonne.yakihonne
- Facebook Profile: https://www.facebook.com/profile.php?id=61551715056704
- Facebook Page: https://www.facebook.com/profile.php?id=61552076811240
- Facebook Group: https://www.facebook.com/groups/720824539860115
-
@ 3bf0c63f:aefa459d
2024-03-23 08:57:08Nostr is not decentralized nor censorship-resistant
Peter Todd has been saying this for a long time and all the time I've been thinking he is misunderstanding everything, but I guess a more charitable interpretation is that he is right.
Nostr today is indeed centralized.
Yesterday I published two harmless notes with the exact same content at the same time. In two minutes the notes had a noticeable difference in responses:
The top one was published to `wss://nostr.wine`, `wss://nos.lol` and `wss://pyramid.fiatjaf.com`. The second was published to the relay where I generally publish all my notes to, `wss://pyramid.fiatjaf.com`, and that is announced on my NIP-05 file and on my NIP-65 relay list.

A few minutes later I published that screenshot again in two identical notes to the same sets of relays, asking if people understood the implications. The difference in quantity of responses can still be seen today:
These results are skewed now by the fact that the two notes got rebroadcasted to multiple relays after some time, but the fundamental point remains.
What happened was that a huge lot more of people saw the first note compared to the second, and if Nostr was really censorship-resistant that shouldn't have happened at all.
Some people implied in the comments, with an air of obviousness, that publishing the note to "more relays" should have predictably resulted in more replies, which, again, shouldn't be the case if Nostr is really censorship-resistant.
What happens is that most people who engaged with the note are following me, in the sense that they have instructed their clients to fetch my notes on their behalf and present them in the UI, and clients are failing to do that despite me making it clear in multiple ways that my notes are to be found on `wss://pyramid.fiatjaf.com`.

If we were talking not about me, but about some public figure that was being censored by the State and got banned (or shadowbanned) by the 3 biggest public relays, the sad reality would be that the person would immediately get his reach reduced to ~10% of what they had before. This is not at all unlike what happened to dozens of personalities that were banned from the corporate social media platforms and then moved to other platforms -- how many of their original followers switched to these other platforms? Probably some small percentage close to 10%. In that sense Nostr today is similar to what we had before.
Peter Todd is right that if the way Nostr works is that you just subscribe to a small set of relays and expect to get everything from them then it tends to get very centralized very fast, and this is the reality today.
Peter Todd is wrong that Nostr is inherently centralized or that it needs a protocol change to become what it has always purported to be. He is in fact wrong today, because what is written above is not valid for all clients of today, and if we drive in the right direction we can successfully make Peter Todd be more and more wrong as time passes, instead of the contrary.
See also:
-
@ 3bf0c63f:aefa459d
2024-03-19 13:07:02Censorship-resistant relay discovery in Nostr
In Nostr is not decentralized nor censorship-resistant I said Nostr is centralized. Peter Todd thinks it is centralized by design, but I disagree.
Nostr wasn't designed to be centralized. The idea was always that clients would follow people in the relays they decided to publish to, even if it was a single-user relay hosted in an island in the middle of the Pacific ocean.
But the Nostr explanations never had any guidance about how to do this, and the protocol itself never had any enforcement mechanisms for any of this (because it would be impossible).
My original idea was that clients would use some undefined combination of relay hints in reply tags and the (now defunct) `kind:2` relay-recommendation events plus some form of manual action ("it looks like Bob is publishing on relay X, do you want to follow him there?") to accomplish this. With the expectation that we would have a better idea of how to properly implement all this with more experience, Branle, my first working client, didn't have any of that implemented; instead it used a stupid static list of relays with a read/write toggle -- although it did publish relay hints and kept track of those internally and supported `kind:2` events, these things were not really useful.

Gossip was the first client to implement a truly censorship-resistant relay discovery mechanism that used NIP-05 hints (originally proposed by Mike Dilger), relay hints and `kind:3` relay lists, and then with the simple insight of NIP-65 that got much better. After seeing it in more concrete terms, it became simpler to reason about it and the approach got popularized as the "gossip model", then implemented in clients like Coracle and Snort.

Today when people mention the "gossip model" (or "outbox model") they simply think about NIP-65, though. Which I think is ok, but too restrictive. I still think there is a place for the NIP-05 hints, `nprofile` and `nevent` relay hints and especially relay hints in event tags. All these mechanisms are used together in ZBD Social, for example, but I believe also in the clients listed above.

I don't think we should stop here, though. I think there are other ways, perhaps drastically different ways, to approach content propagation and relay discovery. I think manual action by users is underrated and could go a long way if presented in a nice UX (not conceived by people that think users are dumb animals), and who knows what. Reliance on third parties, hardcoded values, the social graph, and especially a mix of multiple approaches, is what Nostr needs to be censorship-resistant and what I hope to see in the future.
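As a rough sketch of what mixing these hint sources could look like in code -- illustrative only; the weights and names are invented here, not taken from any client:

```python
def rank_relays(nip65_writes: list[str],
                nip05_hints: list[str],
                tag_hints: list[str]) -> list[str]:
    """Merge relay hints from several sources into one ranked candidate list.

    A relay seen in more (or more authoritative) sources ranks higher, so a
    client can still find a user even when any single source is missing.
    """
    weights = {"nip65": 3, "nip05": 2, "tag": 1}  # arbitrary illustrative weights
    score: dict[str, int] = {}
    for relay in nip65_writes:
        score[relay] = score.get(relay, 0) + weights["nip65"]
    for relay in nip05_hints:
        score[relay] = score.get(relay, 0) + weights["nip05"]
    for relay in tag_hints:  # hints carried in nprofile/nevent and event tags
        score[relay] = score.get(relay, 0) + weights["tag"]
    return sorted(score, key=score.get, reverse=True)

print(rank_relays(
    ["wss://pyramid.fiatjaf.com"],
    ["wss://pyramid.fiatjaf.com"],
    ["wss://relay.example.com"],
))  # pyramid first: two sources agree on it
```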
-
@ 6871d8df:4a9396c1
2024-02-24 22:42:16In an era where data seems to be as valuable as currency, the prevailing trend in AI starkly contrasts with the concept of personal data ownership. The explosion of AI and the ensuing race have made it easy to overlook where the data is coming from. The current model, dominated by big tech players, involves collecting vast amounts of user data and selling it to AI companies for training LLMs. Reddit recently penned a 60 million dollar deal, Google guards and mines Youtube, and more are going this direction. But is that their data to sell? Yes, it's on their platforms, but without the users to generate it, what would they monetize? To me, this practice raises significant ethical questions, as it assumes that user data is a commodity that companies can exploit at will.
The heart of the issue lies in the ownership of data. Why, in today's digital age, do we not retain ownership of our data? Why can't our data follow us, under our control, to wherever we want to go? These questions echo the broader sentiment that while some in the tech industry — such as the blockchain-first crypto bros — recognize the importance of data ownership, their "blockchain for everything solutions," to me, fall significantly short in execution.
Reddit further complicates this with its current move to IPO, which, on the heels of the large data deal, might reinforce the mistaken belief that user-generated data is a corporate asset. Others, no doubt, will follow suit. This underscores the urgent need for a paradigm shift towards recognizing and respecting user data as personal property.
In my perfect world, the digital landscape would undergo a revolutionary transformation centered around the empowerment and sovereignty of individual data ownership. Platforms like Twitter, Reddit, Yelp, YouTube, and Stack Overflow, integral to our digital lives, would operate on a fundamentally different premise: user-owned data.
In this envisioned future, data ownership would not just be a concept but a practice, with public and private keys ensuring the authenticity and privacy of individual identities. This model would eliminate the private data silos that currently dominate, where companies profit from selling user data without consent. Instead, data would traverse a decentralized protocol akin to the internet, prioritizing user control and transparency.
The cornerstone of this world would be a meritocratic digital ecosystem. Success for companies would hinge on their ability to leverage user-owned data to deliver unparalleled value rather than their capacity to gatekeep and monetize information. If a company breaks my trust, I can move to a competitor, and my data, connections, and followers will come with me. This shift would herald an era where consent, privacy, and utility define the digital experience, ensuring that the benefits of technology are equitably distributed and aligned with the users' interests and rights.
The conversation needs to shift fundamentally. We must challenge this trajectory and advocate for a future where data ownership and privacy are not just ideals but realities. If we continue on our current path without prioritizing individual data rights, the future of digital privacy and autonomy is bleak. Big tech's dominance allows them to treat user data as a commodity, potentially selling and exploiting it without consent. This imbalance has already led to users being cut off from their digital identities and connections when platforms terminate accounts, underscoring the need for a digital ecosystem that empowers user control over data. Without changing direction, we risk a future where our content — and our freedoms by consequence — are controlled by a few powerful entities, threatening our rights and the democratic essence of the digital realm. We must advocate for a shift towards data ownership by individuals to preserve our digital freedoms and democracy.
-
@ 6ad3e2a3:c90b7740
2024-02-10 10:37:19I tend to post what I think, and sometimes people don’t appreciate some of what I’m saying. That’s okay — they are free to unfollow, unsubscribe, mute or even block! But occasionally, they will go farther and try actually to deter me from posting, telling others in the public square what I’m saying is “dangerous” or “harmful”. I’ve even been accused of “killing people” and having a “body count!”
A recent example of this happened a couple months ago when I posted the following observation about the colonoscopy procedure:
Some people took exception to this post, despite the risk/benefit disclaimer, arguing it was dangerous because it could deter people from getting this potentially life-saving procedure. In other words, let's say colonoscopies save lives, and if 100 people who read my post were convinced not to get one, maybe one of them would get a preventable form of cancer.
I understand the argument, but it’s a terrible one. For starters, each adult human being reading that post has agency. Whether he does or does not do something is never solely because he heard me make an observation. Anyone who reads a tweet from someone neither claiming special expertise in the subject matter nor even purporting to give advice as to risk/benefit and decides not to do something was almost certainly not going to do it anyway for a variety of reasons. I am not a puppet-master of other people with special powers to command them to do things.
To conclude someone is responsible for another adult’s behavior simply because of an observation is to invite all kinds of absurdities. If I post about how crunchy, salty and tangy a particular Doritos flavor is (I do not eat Doritos!), and someone struggling with a junk food addiction reads it and falls off the wagon, am I therefore responsible for the deleterious health effects of his binge?
If posting negative (even though accurate) observations about a health-promoting procedure makes us responsible for someone who subsequently forgoes that procedure, surely posting positive observations about unhealthy behaviors should be viewed similarly. Posting a photo of yourself with a glass of scotch and a cigar in a swanky Vegas lounge with the caption “My happy place” might just be the image that sends an alcoholic reaching for the bottle! Only virtuous, healthful and wholesome messages are permitted!
But could even a picture of yourself in tip-top shape at the beach, surrounded by your healthy and beautiful family cause single people to feel lonely and depressed? Better not post on social media at all. Better not even go out in public if you’re well-dressed and have a healthy and beautiful family. Virtually anything you do could trigger a negative emotion in someone, and who knows where it could lead?
And what if Donald Trump were to say something positive about colonoscopies? Surely, there’s a cohort of people who would reflexively start to question them because everything he says is bad. So even someone posting ostensibly healthful information could be killing people by virtue of their reaction to the poster himself.
But let’s set the absurd implications aside and address the argument on its merits. Even if we concede getting the colonoscopy would have led to early detection and a successful removal of colon cancer (which is not always the case), not getting the colonoscopy is certainly not the proximate cause of the cancer. It might be a but-for cause, but the proximate cause would be some combination of diet, environmental toxins and genetics. Surely, my tweet, no matter how much it stuck with him, cannot be credited with the yeoman’s work of industrial pollution, a pesticide, hormone and antibiotic laden corporate food supply and any risk-augmenting personal behaviors (smoking, eating a poor diet, not exercising) from the person himself.
Further, anyone declining an often free preventative medical procedure probably lacks trust in the medical system generally, and my tweet surely did not cause the medical establishment to offer such consistently wrongheaded advice during the covid era and squander so much of the good will and esteem in which people once held it.
Put differently, my posting about Doritos’ crunch can’t be isolated from the hundreds of millions in marketing from Pepsi-co subsidiary Frito Lay that preceded it.
But even this is not the most pertinent objection to the notion it’s wrong to express earnest observations about medical procedures or lifestyle choices. The primary reason people should feel free to share what they believe or observe is that the science is never all in. Just as eggs were once considered harmful due to their cholesterol content and margarine healthful for its lack thereof, we might one day discover that colonoscopies (and the removal of polyps during the procedure) are themselves sometimes the trigger for colon cancer.
Set aside whether that particular hypothetical mechanism during that particular procedure sounds farfetched, it isn’t hard to imagine more generally that many of the treatments deemed “dangerous” to question might not in fact be net beneficial.
The implications of this are twofold: (1) if someone were on the hook for observations that might deter ostensibly net-beneficial procedures, should those procedures later turn out to have been net harmful, those who recommended them would similarly have to be on the hook for those harms. (And I am not talking about the medical establishment, which should very much be on the hook were that the case, but the ordinary person encouraging his friends and social media followers to follow establishment medical advice.)
And (2) if it were verboten to share earnest observations that contradict the establishment dictum, we would disable the very error-correction mechanism that allows us to update our body of knowledge. Put differently, the examples of expert consensus being wrong and in need of correction span the millennia, from Galileo challenging the geocentric paradigm of the church to Einstein updating Newton’s theories on gravity. Without the ability to question openly the knowledge of the day, whether scientific, medical or political, our myriad errors in attending to human affairs would not merely cause acute harm — they would be permanent.
In short, anyone who discourages you from expressing earnest observations is himself engaging in the most harmful speech of which we can conceive, for though his perceived end might be prevention of local harm, it actually precludes the possibility of progress itself.
-
@ 8ce092d8:950c24ad
2024-02-04 23:35:07Overview
- Introduction
- Model Types
- Training (Data Collection and Config Settings)
- Probability Viewing: AI Inspector
- Match
- Cheat Sheet
I. Introduction
AI Arena is the first game that combines human and artificial intelligence collaboration.
AI learns your skills through "imitation learning."
Official Resources
1. Official Documentation (Must Read): Everything You Need to Know About AI Arena. Watch the 2-minute video in the documentation to quickly understand the basic flow of the game.
2. Official Play-2-Airdrop competition FAQ site: https://aiarena.notion.site/aiarena/Gateway-to-the-Arena-52145e990925499d95f2fadb18a24ab0
3. Official Discord (Must Join): https://discord.gg/aiarenaplaytest for the latest announcements or for seeking help. The team also has an exclusive channel there.
4. Official YouTube: https://www.youtube.com/@aiarena. Because the game has built-in tutorials, watching the videos is optional.
What is this game about?
- Although categorized as a platform fighting game, the core is a probability-based strategy game.
- Warriors take actions based on probabilities on the AI Inspector dashboard, competing against opponents.
- The game does not allow you to enter probabilities for each area by hand; instead, you feed in information through data collection and build the model by adjusting parameters.
- Data collection emulates fighting games, but training can be completed using a Dummy. As long as you can complete the in-game tutorial, you can master the game controls.
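As a toy illustration of the probability-table idea (the bucket names below come from this guide, but the numbers and code are invented and are not the game's actual internals):

```python
import random

# Each bucket (situation) maps to a probability distribution over actions.
policy = {
    "Recovery (you off-stage)": {"jump": 0.7, "special": 0.2, "idle": 0.1},
    "Opponent Active":          {"attack": 0.5, "shield": 0.3, "grab": 0.2},
}

def next_action(bucket: str) -> str:
    """Sample the warrior's next action for the situation it is in."""
    dist = policy[bucket]
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

print(next_action("Recovery (you off-stage)"))  # most often "jump"
```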
II. Model Types
Before training, there are three model types to choose from: Simple Model Type, Original Model Type, and Advanced Model Type.
It is recommended to try the Advanced Model Type after completing at least one complete training with the Simple Model Type and gaining some understanding of the game.
Simple Model Type
The Simple Model is akin to completing a form, and the training session is comparable to filling various sections of that form.
This model has 30 buckets, meaning 30 different scenarios. Each bucket can be seen as telling the warrior what action to take in a specific situation. Within the same bucket, the probabilities for direction or action are the same.
For example: What should I do when I'm off-stage — refer to the "Recovery (you off-stage)" bucket.
For all buckets, refer to this official documentation:
https://docs.aiarena.io/arenadex/game-mechanics/tabular-model-v2
Video (no sound): The entire training process for all buckets
https://youtu.be/1rfRa3WjWEA
Game version 2024.1.10. The method of saving is outdated. Please refer to the game updates.
Advanced Model Type
The "Original Model Type" and "Advanced Model Type" are based on Machine Learning, which is commonly referred to as combining with AI.
The Original Model Type consists of only one bucket, representing the entire map. If you want the AI to learn different scenarios, you need to choose a "Focus Area" to let the warrior know where to focus. A single bucket means that a slight modification can have a widespread impact on the entire model. This is where the "Advanced Model Type" comes in.
The "Advanced Model Type" can be seen as a combination of the "Original Model Type" and the "Simple Model Type". The Advanced Model Type divides the map into 8 buckets. Each bucket can use many "Focus Area." For a detailed explanation of the 8 buckets and different Focus Areas, please refer to the tutorial page (accessible in the Advanced Model Type, after completing a training session, at the top left of the Advanced Config, click on "Tutorial").
III. Training (Data Collection and Config Settings)
Training Process:
- Collect Data
- Set Parameters, Train, and Save
- Repeat Step 1 until the Model is Complete
Training the Simple Model Type is the easiest to start with; refer to the video above for a detailed process.
Training the Advanced Model Type offers more possibilities through the combination of "Focus Area" parameters, providing a higher upper limit. While the Original Model Type has great potential, it's harder to control. Therefore, this section focuses on the "Advanced Model Type."
1. What Kind of Data to Collect
- High-Quality Data: Collect purposeful data. Garbage in, garbage out. Only collect the necessary data; don't collect randomly. It's recommended to use a Dummy to collect data. However, don't pursue perfection; thanks to parameter adjustments, the AI has a certain level of fault tolerance.
- Balanced Data: Balance your dataset. In simple terms, if you complete actions on the left side a certain number of times, also complete a similar number on the right side. While data imbalance can be addressed through parameter adjustments (see below), it's advised not to have this issue during data collection.
- Moderate Amount: A single training will include many individual actions. Collect data for each action 1-10 times. Personally, it's recommended to collect data 2-3 times for a single action. If the effect of a single training is not clear, conduct a second (or even third) training with the same content, but with different parameter settings.
2. What to Collect (and Focus Area Selection)
Game actions mimic fighting games, consisting of 4 directions + 6 states (Idle, Jump, Attack, Grab, Special, Shield). Directions can be combined into ↗, ↘, etc. These directions and states can then be combined into different actions.
To make "Focus Area" effective, you need to collect data in training that matches these parameters. For example, for "Distance to Opponent", you need to collect data when close to the opponent and also when far away. * Note: While you can split into multiple training sessions, it's most effective to cover different situations within a single training.
Refer to the Simple Config, categorize the actions you want to collect, and based on the game scenario, classify them into two categories: "Movement" and "Combat."
Movement-Based Actions
Action Collection
When the warrior is offstage, regardless of where the opponent is, we require the warrior to return to the stage to prevent self-destruction.
This involves 3 aerial buckets: 5 (Near Blast Zone), 7 (Under Stage), and 8 (Side Of Stage).
* Note: The background comes from the Tutorial mentioned earlier. The arrows in the image indicate the direction of the action and are for reference only. * Note: Action collection should be clean; do not collect actions that involve leaving the stage.
Config Settings
In the Simple Config, you can simply choose "Movement". However, for better customization, it's recommended to use the Advanced Config directly.
- Intensity: The method for setting Intensity will be introduced separately later.
- Buckets: As shown in the image, choose the bucket you are training.
- Focus Area: Position-based parameters:
  - Your position (must)
  - Raycast Platform Distance, Raycast Platform Type (optional; generally choose these in Bucket 7)
Combat-Based Actions
The goal is to direct attacks quickly and effectively towards the opponent, which is the core of game strategy.
This involves 5 buckets:
- 2 regular situations:
  - In the air: 6 (Safe Zone)
  - On the ground: 4 (Opponent Active)
- 3 special situations on the ground:
  - 1 Projectile Active
  - 2 Opponent Knockback
  - 3 Opponent Stunned
2 Regular Situations
In the in-game tutorial, we learned how to perform horizontal attacks. However, in the actual game, directions expand to 8 dimensions. Imagine having 8 relative positions available for launching hits against the opponent. Our task is to design what action to use for attack or defense at each relative position.
Focus Area
- Basic (generally select all):
  - Angle to opponent
  - Distance to opponent
  - Discrete Distance: Choosing this option helps better differentiate between closer and farther distances from the opponent. As shown in the image, red indicates a relatively close distance, and green indicates a relatively distant distance.
- Advanced: Other commonly used parameters:
- Direction: different facings to opponent
- Your Elemental Gauge and Discrete Elementals: Considering the special's charge
- Opponent action: The warrior will react based on the opponent's different actions.
- Your action: Your previous action. Choose this if teaching combos.
3 Special Situations on the Ground
Projectile Active, Opponent Stunned, Opponent Knockback: these three buckets can be referenced in the Simple Model Type video. The approach to parameter settings is the same as for Opponent Active/Safe Zone.
For Projectile Active, in addition to the combat-based parameters, you also need to select "Raycast Projectile Distance" and "Raycast Projectile On Target" in order to track the projectile.
3. Setting "Intensity"
Resources
- The "Tutorial" mentioned earlier explains these parameters.
- Official Config Document (2022.12.24): https://docs.google.com/document/d/1adXwvDHEnrVZ5bUClWQoBQ8ETrSSKgG5q48YrogaFJs/edit
TL;DR:
Epochs:
- Adjust to fewer epochs if learning is insufficient; increase for more learning.

Batch Size:
- Set to the minimum (16) if the data is precise but unbalanced, or if you just want it to learn fast.
- Increase (e.g., 64) if the data is slightly imprecise but balanced.
- If it is both imprecise and unbalanced, consider retraining.

Learning Rate:
- Maximize (0.01) for more learning, but with a risk of forgetting past knowledge.
- Minimize for more accurate learning with less impact on previous knowledge.

Lambda:
- Reduce to prioritize learning new things.

Data Cleaning:
- Enable "Remove Sparsity" unless you want the AI to learn idleness.
- For special cases, like teaching the warrior to use special moves when idle, refer to this tutorial video: https://discord.com/channels/1140682688651612291/1140683283626201098/1195467295913431111

Personal Experience:
- Initial training settings: 125 epochs, batch size 16, learning rate 0.01, lambda 0, data cleaning enabled.
- Prioritize Multistream; sometimes use Oversampling.
- Fine-tune subsequent training runs based on the theories above.
IV. Probability Viewing: AI Inspector
The dashboard consists of "Direction + Action." Above the dashboard, you can see the "Next Action" – the action the warrior will take in its current state. The higher the probability, the more likely the warrior is to perform that action, indicating a quicker reaction. It's essential to note that when checking the Direction, the one with the highest visual representation may not have the highest numerical value. To determine the actual value, hover the mouse over the graphical representation, as shown below, where the highest direction is "Idle."
In the map, you can drag the warrior to view the probabilities of the warrior in different positions. Right-click on the warrior with the mouse to change the warrior's facing. The status bar below can change the warrior's state on the map.
When training the "Opponent Stunned, Opponent Knockback" bucket, you need to select the status below the opponent's status bar. If you are focusing on "Opponent action" in the Focus Zone, choose the action in the opponent's status bar. If you are focusing on "Your action" in the Focus Zone, choose the action in your own status bar. When training the "Projectile Active" Bucket, drag the projectile on the right side of the dashboard to check the status.
Next
The higher the probability, the faster the reaction. However, be cautious when the action probability reaches 100%. This may cause the warrior to be in a special case of "State Transition," resulting in unnecessary "Idle" states.
Explanation: In each state a fighter is in, there are different "possible transitions". For example, from falling state you cannot do low sweep because low sweep requires you to be on the ground. For the shield state, we do not allow you to directly transition to headbutt. So to do headbutt you have to first exit to another state and then do it from there (assuming that state allows you to do headbutt). This is the reason the fighter runs because "run" action is a valid state transition from shield. Source
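A toy sketch of that idea, with made-up states and rules rather than the game's real ones:

```python
# Which actions are reachable from each state (illustrative, not the game's data).
ALLOWED = {
    "falling": {"idle", "jump", "attack"},            # no low sweep while airborne
    "shield":  {"run", "jump", "idle"},               # no direct headbutt from shield
    "idle":    {"run", "jump", "attack", "grab", "special", "shield"},
}

def can_transition(state: str, action: str) -> bool:
    """An action only fires if the current state permits the transition."""
    return action in ALLOWED.get(state, set())

print(can_transition("shield", "headbutt"))  # False: must exit shield first
```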
V. Learn from Matches
After completing all the training, your model is preliminarily finished—congratulations! The warrior will step onto the arena alone and embark on its debut!
Next, we will learn about the strengths and weaknesses of the warrior from battles to continue refining the warrior's model.
In matches, besides appreciating the performance, pay attention to the following:
- Movement, i.e., off the stage: Observe how the warrior gets eliminated. Is it due to issues in the action settings at a certain position, or is it a normal death caused by a high percentage? The former is what we need to avoid and optimize.
- Combat: Analyze both sides' actions carefully. Observe which actions you and the opponent used in different states. Check which of your hits are less effective, how the opponent handles different actions, and so on.
The approach to battle analysis is similar to the thought process in the Training section, helping you build a more comprehensive understanding of the warrior's performance and make targeted improvements.
VI. Cheat Sheet
Training
1. Click "Collect" to collect actions.
2. "Map - Data Limit" is more user-friendly. Most players perform initial training on the "Arena" map.
3. Switch between the warrior and the dummy: Tab key (keyboard) / Home key (controller).
4. Use "Collect" to make the opponent loop a set of actions.
5. Instantly move the warrior to a specific location: click "Settings" - SPAWN - choose the desired location on the map - On. Press the Enter key (keyboard) / Start key (controller) during training.
Inspector
1. Right-click on the fighter to change their direction. Drag the fighter and observe the changes in different positions and directions.
2. When satisfied with the training, click "Save."
3. In "Sparring" and "Simulation," use "Current Working Model."
4. If satisfied with a model, click "Compete." The model used in the rankings is the one marked as "competing."
Sparring / Ranked
1. Use the Throneroom map only for the top 2 or top 10 rankings.
2. There is a 30-second cooldown between matches. Replays are available for every match. Once the battle begins, you can see the winner on the leaderboard or by right-clicking the page - Inspect - Console. Also, if you encounter any errors or bugs, please send screenshots of the console to the Discord server.
Good luck! See you on the arena!
-
@ 3bf0c63f:aefa459d
2024-01-29 02:19:25Nostr: a quick introduction, attempt #1
Nostr doesn't have a material existence; it is not a website or an app. Nostr is just a description of what kind of messages each computer can send to the others and vice-versa. It's a very simple thing, but the fact that such a description exists allows different apps to connect to different servers automatically, without people having to talk behind the scenes or sign contracts or anything like that.
When you use a Nostr client that is what happens, your client will connect to a bunch of servers, called relays, and all these relays will speak the same "language" so your client will be able to publish notes to them all and also download notes from other people.
That's basically what Nostr is: this communication layer between the client you run on your phone or desktop computer and the relay that someone else is running on some server somewhere. There is no central authority dictating who can connect to whom or even anyone who knows for sure where each note is stored.
If you think about it, Nostr is very much like the internet itself: there are millions of websites out there, and basically anyone can run a new one, and there are websites that allow you to store and publish your stuff on them.
The added benefit of Nostr is that this unified "language" that all Nostr clients speak allow them to switch very easily and cleanly between relays. So if one relay decides to ban someone that person can switch to publishing to others relays and their audience will quickly follow them there. Likewise, it becomes much easier for relays to impose any restrictions they want on their users: no relay has to uphold a moral ground of "absolute free speech": each relay can decide to delete notes or ban users for no reason, or even only store notes from a preselected set of people and no one will be entitled to complain about that.
There are some bad things about this design: on Nostr there are no guarantees that relays will have the notes you want to read or that they will store the notes you're sending to them. We can't just assume all relays will have everything -- much to the contrary, as Nostr grows more relays will exist and people will tend to publish to a small subset of all the relays, so depending on the decisions each client takes when publishing and when fetching notes, users may see a different set of replies to a note, for example, and be confused.
Another problem with the idea of publishing to multiple servers is that they may be run by all sorts of malicious people that may edit your notes. Since no one wants to see garbage published under their name, Nostr fixes that by requiring notes to have a cryptographic signature. This signature is attached to the note and verified by everybody at all times, which ensures the notes weren't tampered with (if any part of the note is changed, even by a single character, the signature becomes invalid and the note is dropped). The fix is perfect, except for the fact that it introduces the requirement that each user must now hold this 63-character code that starts with "nsec1", which they must not reveal to anyone. Although annoying, this requirement brings another benefit: users can automatically have the same identity in many different contexts and even use their Nostr identity to log in to non-Nostr websites easily without having to rely on any third party.
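To make the signature mechanics concrete, here is a minimal sketch of how a note's id is computed under NIP-01, the hash the signature then commits to; the schnorr signing of this id is left to a secp256k1 library and omitted here:

```python
import hashlib
import json

def event_id(pubkey: str, created_at: int, kind: int,
             tags: list[list[str]], content: str) -> str:
    """NIP-01: the id is the SHA-256 of the canonical JSON serialization."""
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),  # no extra whitespace
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Changing even one character of `content` changes the id, which invalidates
# the signature made over it -- this is what makes tampering detectable.
print(event_id("ab" * 32, 1700000000, 1, [], "hello nostr"))
```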
To conclude: Nostr is like the internet (or the internet of some decades ago): a little chaotic, but very open. It is better than the internet because it is structured and actions can be automated, but, like on the internet itself, nothing is guaranteed to work at all times and users may have to do some manual work from time to time to fix things. Plus, there is the cryptographic key stuff, which is painful, but cool.
-
@ 3bf0c63f:aefa459d
2024-01-15 11:15:06Small problems the State creates for society that are not always remembered
- **transit vouchers (vale-transporte)**: transferring the cost of the employee's commute to a third party encourages him to live far from where he works, since living nearby is usually more expensive and the savings on transport are nonexistent.
- **doctor's notes (atestado médico)**: the right to miss work with a doctor's note creates a demand for these notes in every situation, replacing free agreement between employer and employee and overloading doctors and health clinics with unnecessary visits from salaried workers with colds.
- **prisons**: with badly managed money, bureaucracy and terrible allocation of resources -- problems that private companies in competition (or even without any competition) would know how to solve much better -- the State runs out of prisons, with the few existing ones overcrowded far beyond their maximum capacity; and so, following the bizarre chain of responsibility that blames the judge who sentenced the criminal for his death in jail, judges stop sentencing criminals to prison and release them onto the streets.
- **justice**: filing lawsuits is free, and this makes the work of lawyers proliferate -- lawyers dedicated to creating legal problems where none were necessary and to clogging the courts, preventing them from doing what they most ought to do.
- **justice**: since the courts only obey statutes and ignore personal agreements, written or not, people don't make agreements; they always resort to State justice and flood it with matters that would be much better resolved between neighbors.
- **civil laws**: the laws created by legislators ignore society's customs and are an incentive for people neither to respect nor to create social norms -- which would be faster, cheaper and more satisfying ways of solving problems.
- **traffic laws**: the more traffic laws there are, the more enforcement duties get delegated to police officers, who as a result stop fighting crime (after all, they don't really want to risk their lives fighting crime, and traffic enforcement is an excellent excuse to dodge that responsibility).
- **educational financing**: a kind of subsidy to private colleges that makes course after course spring up, each ever emptier of any useful knowledge or skill and ever more useless.
- **heritage-listing laws**: they are an incentive for the owner of any "historic" site or building to destroy every last trace of history in it before the authorities find out -- something that might not happen if he could, for example, use, display and profit from the history of that place without running the risk of actually losing his property.
- **urban zoning**: it makes cities more spread out, creating a gigantic need for cars, buses and other means of transport for people to move between residential zones and work zones.
- **urban zoning**: it makes people lose hours in traffic every day, which is, besides a waste, an assault on their health, which would be much better served by a daily walk between home and work.
- **urban zoning**: it makes streets and homes less safe by creating enormous zones, both residential and industrial, where there is no movement of people at all.
- **compulsory schooling + a national school curriculum**: it dumbs down all children.
- **laws against child labor**: they take from children the opportunity to learn useful trades and bring home some money to help the family.
- **public procurement (licitações)**: since market criteria don't exist to decide which service provider is best, committees of people are created to decide things. This encourages the providers competing in a tender to try to buy the members of those committees. Corruption aside, this creates real problems: (i) the chosen services end up being the worst possible, since the winning firm is clearly more dedicated to buying committees than to doing good work (this problem affects so many areas, from road construction to the quality of school lunches, that it is impossible to list them here); (ii) in the long run the corrupting process eliminates the firms that actually delivered, leaving only the corrupt ones to compete, and quality tends to get progressively worse.
- **cartels**: the State generally creates, and then becomes hostage to, various interest groups. The case of taxi drivers against Uber is the fashionable one today (and one that shows how States behave the same way all over the world).
- **fines**: when some individual or company commits financial fraud, or causes some involuntary material damage, the victims are the people who suffered the harm or lost money, but the State always has laws providing for fines against those responsible. State justice is always very strict and quick in applying these fines, but lax and vague when it comes to compensating the victims. What generally happens is that the State levies an enormous fine on the party responsible for the harm, stripping him of the resources he had available to compensate the victims, and then withdraws from the case, leaving the victims helpless.
- **expropriation**: the State can take any property from any person upon payment of compensation that is necessarily lower than the property's value to its current owner (otherwise he would have sold it voluntarily).
- **unemployment insurance**: if there is, for example, a minimum of 1 year of employment before someone is entitled to unemployment benefits, this encourages him to plan on staying only 1 year in each job (a year to be followed by a period of paid unemployment), killing every possibility of learning, of acquiring experience at that specific company, or of rising through its ranks.
- **public pensions**: social security has every calculation defect in the world, and it hardly matters that it is a horrible way of saving money, because it comes with bizarre longevity guarantees provided by the State, besides being compulsory. This serves to create in the general imagination the idea of __retirement__, a magical era in which every day will be a weekend. The idea of retirement influences people not to worry about having a job that makes sense, but rather to have any job at all that allows them to retire.
- **impossible regulation**: thousands of things are prohibited; there are regulations on the most minute aspects of every enterprise, building or space. If all these regulations were enforced there would be no conditions for production and everyone would die. Therefore, they are not enforced. However, the State, or an individual agent invested with State power, can, if it so wishes, enforce all of them against a citizen it regards as an enemy. Anyone can live their entire life without complying with even 10% of State regulations, but they will also live all that time in fear of becoming a target of their enforcement, in a state of psychological terror.
- **perversion of criteria**: for many things about which society would normally settle on a "reasonable" value or behavior spontaneously, the State dictates rules. These rules are often not mandatory; they are more like "suggestions" or limits, such as the minimum wage or the 44-hour work week. Society, however, starts treating these values as if they were the norm. Job offers that depart from the 44-hour-week rule, for example, are rare.
- **inflation**: raising prices is difficult and embarrassing for companies; asking for a raise is difficult and embarrassing for employees. Inflation forces people to do both, but the adjustment is not automatic, as some economists may think (while certain others are quite pleased that this process is slow and difficult).
- **inflation**: inflation destroys people's ability to judge prices between competitors using their own memory.
- **inflation**: inflation destroys companies' profit/loss calculations and enormously harms the business decisions that would be based on them.
- **inflation**: inflation redistributes wealth from the poorest and those furthest from the financial system to the richest, the banks and the mega-corporations.
- **inflation**: inflation stimulates indebtedness and consumerism.
- **garbage**: by providing garbage collection and storage "free for everyone", the State incentivizes the creation of garbage. If people had to pay to have their garbage collected, they (and consequently companies) would try harder to make things using less plastic, less packaging, fewer bags.
- **laws against financial crimes**: by creating legislation to make access to the financial system harder for criminals, the difficulty and cost of access to that same system for honest people grows absurdly, leaving an enormous share of people unable to use it, to everyone's detriment -- and in the end the big criminals still manage to get around all of it.
-
@ 3bf0c63f:aefa459d
2024-01-15 11:15:06Stupid anglicisms in contemporary Portuguese
Words and expressions that nobody should use because they don't have the meaning people think they have -- they are just Portuguese renderings of English words that, through quirks of history, carry a slightly different meaning in English.
Each error is accompanied by a suggestion of how to fix it.
Words that exist in Portuguese with a different meaning
- submissão (de trabalhos): envio, apresentação
- disrupção: perturbação
- assumir: considerar, pressupor, presumir
- realizar: perceber
- endereçar: tratar de
- suporte (ao cliente): atendimento
- suportar (uma idéia, um projeto): apoiar, financiar
- suportar (uma função, recurso, característica): oferecer, ser compatível com
- literacia: instrução, alfabetização
- convoluto: complicado
- acurácia: precisão
- resiliência: resistência
Unnecessary Portuguese renderings of English words
- estartar: iniciar, começar
- treidar: negociar, especular
Expressions
- "não é sobre...": "não se trata de..."
See also
-
@ c80b5248:6b30d720
2023-07-05 02:40:47I have been sleeping on highlighter. Started testing with nsecBunker and I am seeing how much signal comes through on the global feed here. Still need to understand how to use it better, but this could be a hugely powerful format for information consumption.
Expect this post to be updated... little did I know I was writing my first long-form note! 😅
-
@ 3bf0c63f:aefa459d
2024-01-14 14:52:16Drivechain
Understanding Drivechain requires a shift from the paradigm most bitcoiners are used to. It is not about "trustlessness" or "mathematical certainty", but game theory and incentives. (Well, Bitcoin in general is also that, but people prefer to ignore it and focus on some illusion of trustlessness provided by mathematics.)
Here we will describe the basic mechanism (simple) and incentives (complex) of "hashrate escrow" and how it enables a 2-way peg between the mainchain (Bitcoin) and various sidechains.
The full concept of "Drivechain" also involves blind merged mining (i.e., the sidechains mine themselves by publishing their block hashes to the mainchain without the miners having to run the sidechain software), but this is much easier to understand and can be accomplished either by the BIP-301 mechanism or by the Spacechains mechanism.
How does hashrate escrow work from the point of view of Bitcoin?
A new address type is created. Anything that goes into it is locked and can only be spent if all miners agree on the Withdrawal Transaction (`WT^`) that will spend it, over the course of 6 months. There is one of these special addresses for each sidechain.
To gather miners' agreement, `bitcoind` keeps track of the "score" of all transactions that could possibly spend from that address. On every block mined, for each sidechain, the miner can use a portion of their coinbase to either increase the score of one `WT^` by 1 while decreasing the score of all others by 1; or decrease the score of all `WT^`s by 1; or do nothing.
Once a transaction has gotten a high enough score, it is published and the funds are effectively transferred from the sidechain to the withdrawing users.
If a timeout of 6 months passes and the score doesn't meet the threshold, that `WT^` is discarded.
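To make the scoring procedure concrete, here is a minimal sketch of it in Python. The threshold and timeout numbers, the `"downvote-all"` convention and all the names are assumptions invented for this illustration -- they are not taken from the actual proposal.

```python
from typing import Optional

SCORE_THRESHOLD = 13_150   # assumed: score a WT^ needs before it can be paid
TIMEOUT_BLOCKS = 26_300    # assumed: roughly 6 months' worth of blocks

class HashrateEscrow:
    def __init__(self) -> None:
        self.pending: dict[str, dict[str, int]] = {}  # txid -> score/age

    def propose(self, wt_txid: str) -> None:
        """Register a candidate WT^ for this sidechain's escrow address."""
        self.pending.setdefault(wt_txid, {"score": 0, "age": 0})

    def on_block(self, vote: Optional[str]) -> list[str]:
        """Process one mined block. `vote` is the txid of the WT^ the miner
        upvotes (all others are downvoted), the string "downvote-all", or
        None for abstaining. Returns the txids that reached the threshold."""
        for txid, st in self.pending.items():
            if vote == txid:
                st["score"] += 1
            elif vote is not None:  # another WT^ was upvoted, or downvote-all
                st["score"] = max(0, st["score"] - 1)
            st["age"] += 1
        paid = [t for t, s in self.pending.items() if s["score"] >= SCORE_THRESHOLD]
        expired = [t for t, s in self.pending.items()
                   if s["age"] >= TIMEOUT_BLOCKS and t not in paid]
        for t in paid + expired:
            del self.pending[t]  # paid out, or discarded after the timeout
        return paid
```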
What does the above procedure mean?
It means that people can transfer coins from the mainchain to a sidechain by depositing to the special address. Then they can withdraw from the sidechain by making a special withdraw transaction in the sidechain.
The special transaction somehow freezes funds in the sidechain while a transaction that aggregates all withdrawals into a single mainchain `WT^` is crafted, which is then submitted to the mainchain miners so they can start voting on it, and finally, after some months, it is published.
Now the crucial part: the validity of the `WT^` is not verified by the Bitcoin mainchain rules. That is, if Bob has requested a withdrawal from the sidechain to his mainchain address, but someone publishes a wrong `WT^` that instead takes Bob's funds and sends them to Alice's mainchain address, there is no way the mainchain will know that. What determines the "validity" of the `WT^` is the miner vote score and only that. It is the job of miners to vote correctly -- and for that they may want to run the sidechain node in SPV mode so they can attest to the existence of a reference to the `WT^` transaction in the sidechain blockchain (which then ensures it is ok), or do these checks by some other means.
What? 6 months to get my money back?
Yes. But no: in practice anyone who wants their money back will be able to use an atomic swap, submarine swap or other similar service to transfer funds from the sidechain to the mainchain and vice-versa. The costs of the long delayed withdrawals would be incurred by a few liquidity providers, who would earn a small profit from them.
Why bother with this at all?
Drivechains solve many different problems:
It enables experimentation and new use cases for Bitcoin
Issued assets, fully private transactions, stateful blockchain contracts, turing-completeness, decentralized games, some "DeFi" aspects, prediction markets, futarchy, decentralized and yet meaningful human-readable names, big blocks with a ton of normal transactions on them, a chain optimized only for Lightning-style networks to be built on top of it.
These are some ideas that may have merit to them, but were never actually tried because they couldn't be tried with real Bitcoin or interfacing with real bitcoins. They were either relegated to the shitcoin territory or to custodial solutions like Liquid or RSK that may have failed to gain network effect because of that.
It solves conflicts and infighting
Some people want fully private transactions in a UTXO model, others want "accounts" they can tie to their name and build reputation on top; some people want simple multisig solutions, others want complex code that reads a ton of variables; some people want to put all the transactions on a global chain in batches every 10 minutes, others want off-chain instant transactions backed by funds previously locked in channels; some want to spend, others want to just hold; some want to use blockchain technology to solve all the problems in the world, others just want to solve money.
With Drivechain-based sidechains, all these groups can be happy simultaneously and don't have to fight. Meanwhile they will all be using the same money and contributing to each other's ecosystems even unwillingly. It's also easy and free for them to change their group affiliation later, which reduces cognitive dissonance.
It solves "scaling"
Multiple chains like the ones described above would certainly do a lot to accommodate many more transactions than the current Bitcoin chain can. One could have special Lightning Network chains, but even just big-block chains or big-block-mimblewimble chains or whatnot could probably do a good job. Or even something less cool, like 200 independent chains just like Bitcoin is today, with no extra features (you could call it "sharding"): just that would already multiply the current total capacity by 200.
Use your imagination.
It solves the blockchain security budget issue
The calculation is simple: you imagine what security budget is reasonable for each block in a world without block subsidy and divide that by the amount of bytes you can fit in a single block: that is the price to be paid in satoshis per byte. By any reasonable estimate, the price necessary for every Bitcoin transaction comes out to very large amounts, such that not only does any day-to-day transaction have insanely prohibitive costs, but even Lightning channel opens and closes become impracticable.
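As a toy version of that calculation (all the inputs here are made-up assumptions, just to show the shape of the arithmetic):

```python
security_budget_btc = 5.0        # assumption: 5 BTC of fees needed per block
block_space_bytes = 1_000_000    # assumption: ~1 MB of usable block space

sats_per_byte = security_budget_btc * 100_000_000 / block_space_bytes
typical_tx_bytes = 250           # rough size of an ordinary transaction

print(f"required fee rate: {sats_per_byte:.0f} sat/byte")
print(f"fee for a {typical_tx_bytes}-byte tx: "
      f"{sats_per_byte * typical_tx_bytes:,.0f} sats")
# -> 500 sat/byte, i.e. 125,000 sats for a single ordinary payment
```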
So without a solution like Drivechain you'll be left with only one alternative: pushing Bitcoin usage to trusted services like Liquid and RSK or custodial Lightning wallets. With Drivechain, though, there could be thousands of transactions happening in sidechains and being all aggregated into a sidechain block that would then pay a very large fee to be published (via blind merged mining) to the mainchain. Bitcoin security guaranteed.
It keeps Bitcoin decentralized
Once we have sidechains to accommodate the normal transactions, the mainchain functionality can be reduced to being only a "hub" for the sidechains' comings and goings, and then the maximum block size for the mainchain can be reduced to, say, 100kb, which would make running a full node very, very easy.
Can miners steal?
Yes. If a group of coordinated miners are able to secure the majority of the hashpower and keep their coordination for 6 months, they can publish a `WT^` that takes the money from the sidechains and pays it to themselves.
No, because the incentives are such that they won't.
Although it may look at first that stealing is an obvious strategy for miners as it is free money, there are many costs involved:
- The cost of ceasing blind-merged mining returns -- as stealing will kill a sidechain, all the fees from it that miners would be expected to earn for the next years are gone;
- The cost of Bitcoin price going down: If a steal is successful that will mean Drivechains are not safe, therefore Bitcoin is less useful, and miner credibility will also be hurt, which are likely to cause the Bitcoin price to go down, which in turn may kill the miners' businesses and savings;
- The cost of coordination -- assuming miners are just normal businesses, they just want to do their work and get paid, but stealing from a Drivechain will require coordination with other miners to conduct an immoral act in a way that has many pitfalls and is likely to be broken over the months;
- The cost of miners leaving your mining pool: when we talked about "miners" above we were actually talking about mining pools operators, so they must also consider the risk of miners migrating from their mining pool to others as they begin the process of stealing;
- The cost of community goodwill -- when participating in a steal operation, a miner will suffer a ton of backlash from the community. Even if the attempt fails in the end, the fact that it was attempted will contribute to growing concerns over exaggerated miner power over the Bitcoin ecosystem, which may end up causing the community to agree on a hard-fork to change the mining algorithm in the future, or to do something to increase the participation of more entities in the mining process (such as the development or cheapening of new ASICs), which would have a chance of decreasing the profits of current miners.
Another point to take into consideration is that one may be inclined to think a newly-created sidechain or a sidechain with relatively low usage may be more easily stolen from, since the blind merged mining returns from it (point 1 above) are going to be small -- but a sidechain with small usage will also have less money to steal, and since the other costs besides point 1 are less elastic, in the end it will not be worth stealing from these either.
All of the above considerations are valid only if miners are stealing from good sidechains. If there is a sidechain that is doing things wrong, scamming people, not being used at all, or is full of bugs, for example, that will be perceived as a bad sidechain, and then miners can and will safely steal from it and kill it, which will be perceived as a good thing by everybody.
What do we do if miners steal?
Paul Sztorc has suggested in the past that a user-activated soft-fork could prevent miners from stealing, i.e., most Bitcoin users and nodes issue a rule similar to this one to invalidate the inclusion of a faulty `WT^` and thus cause any miner that includes it in a block to be relegated to their own Bitcoin fork that other nodes won't accept.
This suggestion has made people think Drivechain is a sidechain solution backed by user-activated soft-forks for safety, which is very far from the truth. Drivechains must not and will not rely on this kind of soft-fork, although they are possible, as the coordination costs are too high and no one should ever expect these things to happen.
If even with all the incentives against them (see above) miners do still steal from a good sidechain that will mean the failure of the Drivechain experiment. It will very likely also mean the failure of the Bitcoin experiment too, as it will be proven that miners can coordinate to act maliciously over a prolonged period of time regardless of economic and social incentives, meaning they are probably in it just for attacking Bitcoin, backed by nation-states or something else, and therefore no Bitcoin transaction in the mainchain is to be expected to be safe ever again.
Why use this and not a full-blown trustless and open sidechain technology?
Because it is impossible.
If you ever heard someone saying "just use a sidechain", "do this in a sidechain" or anything like that, be aware that these people are either talking about "federated" sidechains (i.e., funds are kept in custody by a group of entities) or they are talking about Drivechain, or they are deluded and think it is possible to do sidechains in any other manner.
No, I mean a trustless 2-way peg with correctness of the withdrawals verified by the Bitcoin protocol!
That is not possible unless Bitcoin verifies all transactions that happen in all the sidechains, which would be akin to drastically increasing the blocksize and expanding the Bitcoin rules in tons of ways, i.e., a terrible idea that no one wants.
What about the Blockstream sidechains whitepaper?
Yes, that was a way to do it. The Drivechain hashrate escrow is a conceptually simpler way to achieve the same thing with improved incentives, less junk in the chain, and more safety.
Isn't the hashrate escrow a very complex soft-fork?
Yes, but it is much simpler than SegWit. And, unlike SegWit, it doesn't force anything on users, i.e., it isn't a mandatory blocksize increase.
Why should we expect miners to care enough to participate in the voting mechanism?
Because it's in their own self-interest to do it, and it costs very little. Today over half of the miners mine RSK. That is not blind merged mining -- it's a very convoluted process that requires them to run an RSK full node. For the Drivechain sidechains, an SPV node would be enough, or maybe just getting data from a block explorer API, so it is much, much simpler.
What if I still don't like Drivechain even after reading this?
That is the entire point! You don't have to like it or use it as long as you're fine with other people using it. The hashrate escrow special addresses will not impact you at all, validation cost is minimal, and you get the benefit of people who want to use Drivechain migrating to their own sidechains and freeing up space for you in the mainchain. See also the point above about infighting.
See also
-
@ 1967650e:73170f7f
2023-04-19 15:22:48Predicting the price of cryptocurrencies like Bitcoin is an ongoing challenge due to the volatility and unpredictability of the market. In this article, we explore a Python-based price prediction pipeline that combines machine learning techniques and deep learning algorithms to forecast Bitcoin's closing price. The code for this pipeline can be found on GitHub at https://github.com/amar-muratovic/bitcoin-price-prediction-pipeline.
Key Components
- **Data Acquisition and Preprocessing**: The pipeline uses the CCXT library to fetch historical price data for Bitcoin (BTC/USD) from the CryptoCompare API. The data is then preprocessed, resampled, and saved into a CSV file for further analysis.
- **Feature Engineering**: The pipeline uses three input features -- High, Low, and Open prices -- and the target variable, which is the Close price.
- **Model Ensemble**: The pipeline trains an ensemble of four models: Linear Regression, Bayesian Ridge, Support Vector Regression, and Random Forest Regressor. The predictions from these models are averaged to produce the final forecast.
- **Deep Learning**: The pipeline also incorporates a neural network with two hidden layers and early stopping to prevent overfitting. The neural network is trained on a subset of the data.
- **Hyperparameter Tuning**: Grid search and cross-validation are used to fine-tune the models and optimize their hyperparameters.
- **Model Evaluation**: The pipeline evaluates the models using mean squared error (MSE) and R^2 score, which measure the accuracy of the predictions.
Implementation Details
The pipeline starts by importing necessary libraries and modules, followed by loading the Bitcoin price data from a CSV file. The data is preprocessed, resampled, and saved into a new CSV file. The input features and target variables are defined, and the data is split into training and testing sets.
An ensemble of machine learning models is trained on the data, and predictions are made using these models. The ensemble approach aims to combine the strengths of different models to produce more accurate predictions. The predictions from each model are averaged to produce the final forecast.
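A minimal sketch of what that averaging ensemble could look like with scikit-learn. The feature and target columns follow the article, but the file name, split and hyperparameters here are assumptions, not necessarily what the repository actually does:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import BayesianRidge, LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

df = pd.read_csv("btc_prices.csv")          # assumed file name
X = df[["High", "Low", "Open"]].values      # features named in the article
y = df["Close"].values                      # target named in the article
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

models = [LinearRegression(), BayesianRidge(),
          SVR(kernel="rbf"), RandomForestRegressor(n_estimators=100)]
predictions = []
for model in models:
    model.fit(X_tr, y_tr)
    predictions.append(model.predict(X_te))

ensemble_pred = np.mean(predictions, axis=0)  # average of the four models
```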
A neural network with two hidden layers is created and trained on a subset of the data. Early stopping is used to prevent overfitting by monitoring the validation loss and stopping the training when it stops improving.
Hyperparameter tuning is performed using grid search and cross-validation to optimize the models' performance. This process helps identify the best combination of hyperparameters for each model.
Finally, the models are evaluated using mean squared error (MSE) and R^2 score. These metrics help measure the accuracy of the predictions and the performance of the models.
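Continuing the sketch above (and reusing its assumed `y_te` and `ensemble_pred` names, which are not the repository's actual variable names), the evaluation step could look like this:

```python
from sklearn.metrics import mean_squared_error, r2_score

print("MSE:", mean_squared_error(y_te, ensemble_pred))
print("R^2:", r2_score(y_te, ensemble_pred))
```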
Conclusion
The Bitcoin price prediction pipeline presented in this article combines various machine learning techniques and deep learning algorithms to forecast the closing price of Bitcoin. This ensemble approach aims to improve prediction accuracy by leveraging the strengths of different models. While predicting the price of cryptocurrencies remains a challenging task, this pipeline provides a solid foundation for further experimentation and improvements. To explore the code further, visit the GitHub repository at https://github.com/amar-muratovic/bitcoin-price-prediction-pipeline.
-
-
@ 3bf0c63f:aefa459d
2024-01-14 14:52:16bitcoind decentralization
It is better to have multiple curator teams, with different vetting processes and release schedules for `bitcoind`, than a single one.
All these points repeated again and again fell to Earth on the day it was discovered that Bitcoin Core developers merged a variable name change from "blacklist" to "blocklist" without even discussing or acknowledging the fact that that innocent pull request opened by a sybil account was a social attack.
After a big lot of people manifested their dissatisfaction with that event on Twitter and on GitHub, most Core developers simply ignored everybody's concerns or even personally attacked people who were complaining.
The event has shown that:
1) Bitcoin Core ultimately rests in the hands of a couple of maintainers, who decide what goes into the GitHub repository[^pr-merged-very-quickly] and the binary releases that will be downloaded by thousands;
2) Bitcoin Core is susceptible to social attacks;
3) "More eyes on code" doesn't matter, as these extra eyes can be ignored and dismissed.
Solution: `bitcoind` decentralization
If usage was spread across 10 different `bitcoind` flavors, the network would be much more resistant to social attacks against a single team.
This has nothing to do with the question of whether it is better to have multiple different Bitcoin node implementations or not, because here we're basically talking about the same software.
Multiple teams, each with their own release process, their own logo, some subtle changes, or perhaps no changes at all, just a different name for their `bitcoind` flavor, and that's it.
Every day or week or month or year, each flavor merges all changes from Bitcoin Core into their own fork. If there's anything suspicious or too leftist (or perhaps too rightist, in case there's a leftist `bitcoind` flavor), maybe they will spot it and not merge.
This way we keep the best of both worlds: all software development, bugfixes and improvements go on in Bitcoin Core, and the other flavors just copy. If there's some non-consensus change whose efficacy is debatable, one of the flavors will merge it on their fork and test it, and later others -- including Core -- can copy that too. Plus, we become resistant to attacks: in case there is an attack on Bitcoin Core, only 10% of the network would be compromised; the other flavors would be safe.
Run Bitcoin Knots
The first example of a `bitcoind` software that follows Bitcoin Core closely, adds some small changes, but has an independent vetting and release process is Bitcoin Knots, maintained by the incorruptible Luke DashJr.
Next time you decide to run `bitcoind`, run Bitcoin Knots instead and contribute to `bitcoind` decentralization!
See also:
[^pr-merged-very-quickly]: See PR 20624, for example, a very complicated change that could be introducing bugs or be a deliberate attack, merged in 3 days without time for discussion.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28On "zk-rollups" applied to Bitcoin
ZK rollups make no sense in bitcoin because there is no "cheap calldata". all data is already ~~cheap~~ expensive calldata.
There could be an onchain zk verification that allows succinct signatures maybe, but never a rollup.
What happens is: you can have one UTXO that contains multiple balances on it, and in each transaction you can recreate that UTXO but alter its state, using a zk proof to compress all the internal transactions that took place, as in the sketch below.
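A sketch of the construct being described, reduced to its bare shape -- the state-root hashing and all the names here are made up for illustration; no real protocol is being described:

```python
import hashlib
import json

def state_root(balances: dict) -> str:
    """Commit to all the internal balances with a single hash."""
    blob = json.dumps(balances, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

old_state = {"alice": 50, "bob": 30}
new_state = {"alice": 40, "bob": 40}   # alice paid bob 10 "inside" the UTXO

# the operator would spend the UTXO committing to state_root(old_state) and
# create a new one committing to state_root(new_state), attaching a zk proof
# that the transition was valid -- the chain only ever sees the two roots.
print(state_root(old_state), "->", state_root(new_state))
```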
The blockchain must be aware of all these new things, so it is in no way "L2".
And you must have an entity responsible for that UTXO and for conjuring the state changes and zk proofs.
But on bitcoin you also must keep the data necessary to rebuild the proofs somewhere else, and I'm not sure how the third party responsible for that UTXO can ensure that happens.
I think such a construct is similar to a credit card corporation: one central party upon which everybody depends, zero interoperability with external entities, every vendor must have an account with each credit card company to be able to charge customers. Therefore it is not clear that such a thing is more desirable than solutions that are truly open and interoperable, like Lightning, which may have its defects but at least fosters a much better environment, bringing together different conflicting parties, custodians, anyone.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The problem with ION
ION is a DID method based on a thing called "Sidetree".
I can't say for sure what is the problem with ION, because I don't understand the design, even though I have read all I could and asked everybody I knew. All available information only touches on the high-level aspects of it (and of course its amazing wonders) and no one has ever bothered to explain the details. I've also asked the main designer of the protocol, Daniel Buchner, but he may have thought I was trolling him on Twitter and refused to answer, instead pointing me to an incomplete spec on the Decentralized Identity Foundation website that I had already read before. I even tried to join the DIF as a member so I could join their closed community calls and hear what they say, maybe eventually ask a question, so I could understand it, but my entrance was ignored, then after many months and a nudge from another member I was told I had to do a KYC process to be admitted, which I refused.
One thing I know is:
- ION is supposed to provide a way to rotate keys seamlessly and automatically without losing the main identity (and the ION proponents also claim there are no "master" keys because these can also be rotated).
- ION is also not a blockchain, i.e. it doesn't have a deterministic consensus mechanism and it is decentralized, i.e. anyone can publish data to it, doesn't have to be a single central server, there may be holes in the available data and the protocol doesn't treat that as a problem.
- From all we know about years of attempts to scale Bitcoin and develop offchain protocols, it is clear that you can't solve the double-spend problem without a central authority or a kind of blockchain (i.e. a decentralized system with deterministic consensus).
- Key rotation also suffers from the double-spend problem: whenever you rotate a key it is as if it was "spent" -- you aren't supposed to be able to use it again.
The logical conclusion from the 4 assumptions above is that ION is flawed: it can't provide the key rotation it says it can if it is not a blockchain.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28rosetta.alhur.es
A service that grabs code samples from two chosen languages on RosettaCode and displays them side-by-side.
The code-fetching is done in real time and snippet-by-snippet (there is also a prefetch of which snippets are available in each language, so we only compare apples to apples).
This was my first Golang web application if I remember correctly.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Thoughts on Nostr key management
On Why I don't like NIP-26 as a solution for key management I talked about multiple techniques that could be used to tackle the problem of key management on Nostr.
Here are some ideas that work in tandem:
- NIP-41 (stateless key invalidation)
- NIP-46 (Nostr Connect)
- NIP-07 (signer browser extension)
- Connected hardware signing devices
- other things like musig or frostr keys used in conjunction with a semi-trusted server; or other kinds of trusted software, like a dedicated signer on a mobile device that can sign on behalf of other apps; or even a separate protocol that some people decide to use as the source of truth for their keys, and some clients might decide to use that automatically
- there are probably many other ideas
Some premises I have in mind (which may be flawed) and on which my thoughts on these matters are based (and which cause me not to worry too much) are that
- For the vast majority of people, Nostr keys aren't a target as valuable as Bitcoin keys, so they will probably be ok even without any solution;
- Even when you lose everything, identity can be recovered -- slowly and painfully, but still --, unlike money;
- Nostr is not trying to replace all other forms of online communication (even though when I think about this I can't imagine one thing that wouldn't be nice to replace with Nostr) or of offline communication, so there will always be ways.
- For the vast majority of people, losing keys and starting fresh isn't a big deal. It is a big deal when you have followers and an online persona and your life depends on that, but how many people are like that? In the real world I see people deleting social media accounts all the time and creating new ones, people losing their phone numbers or other accounts associated with their phone numbers, and not caring very much -- they just find a way to notify friends and family and move on.
We can probably come up with some specs to ease the "manual" recovery process, like social attestation and explicit signaling -- i.e., Alice, Bob and Carol are friends; Alice loses her key; Bob sends a new Nostr event kind to the network announcing Alice's new key; depending on how much Carol trusts Bob, she can automatically start following the new key and remove the old one -- or something like that.
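For illustration only, such an attestation event could look something like the sketch below -- the kind number and tag names are entirely made up; nothing like this is specified anywhere:

```python
import json
import time

attestation = {
    "kind": 30100,                         # hypothetical "key migration" kind
    "pubkey": "<bob-pubkey>",              # Bob, the attester
    "created_at": int(time.time()),
    "tags": [
        ["p", "<alice-old-pubkey>"],       # the key that was lost
        ["new-key", "<alice-new-pubkey>"], # made-up tag carrying the new key
    ],
    "content": "Alice lost her key; this is her new one.",
}
print(json.dumps(attestation, indent=2))   # would then be signed and published
```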
One nice thing about some of these proposals, like NIP-41, or the social-recovery method, or the external-source-of-truth-method, is that they don't have to be implemented in any client, they can live in standalone single-purpose microapps that users open or visit only every now and then, and these can then automatically update their follow lists with the latest news from keys that have changed according to multiple methods.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28lnurl-auth explained
You may have seen the lnurl-auth spec or heard about it, but might not know how it works or what its relationship is to other lnurl protocols. This document attempts to solve that.
Relationship between lnurl-auth and other lnurl protocols
First, what is the relationship of lnurl-auth with other lnurl protocols? The answer is none, except the fact that they all share the lnurl format for specifying `https` URLs.
In fact, lnurl-auth is very unique in the sense that it doesn't even need a Lightning wallet to work, it is a standalone authentication protocol that can work anywhere.
How does it work
Now, how does it work? The basic idea is that each wallet has a seed, which is a random value (you may think of the BIP39 seed words, for example). Usually from that seed different keys are derived, each of these yielding a Bitcoin address, and also from that same seed may come the keys used to generate and manage Lightning channels.
What lnurl-auth does is to generate a new key from that seed, and from that a new key for each service (identified by its domain) you try to authenticate with.
That way, you effectively have a new identity for each website. Two different services cannot associate your identities.
The flow goes like this: When you visit a website, the website presents you with a QR code containing a callback URL and a challenge. The challenge should be a random value.
When your wallet scans or opens that QR code it uses the domain in the callback URL plus the main lnurl-auth key to derive a key specific to that website, uses that key to sign the challenge, and then sends both the website-specific public key and the signed challenge to the specified URL.
When the service receives the public key it checks it against the challenge signature and starts a session for that user. The user is then identified only by their public key. If the service wants it can, of course, request more details from the user, associate it with an internal id or username; it is free to do anything. lnurl-auth's goals end here: no passwords, maximum possible privacy.
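Here is a sketch of the per-domain key derivation in the flow above. The HMAC-based derivation is a simplification invented for this example (the real spec derives keys from the wallet seed in its own way); only the general shape -- one independent key per domain, used to sign the challenge -- is the point:

```python
import hashlib
import hmac
from urllib.parse import urlparse

def linking_key(master_auth_key: bytes, callback_url: str) -> bytes:
    """Derive one independent key per service, from its domain only, so two
    different services can never correlate the same user."""
    domain = urlparse(callback_url).hostname.encode()
    return hmac.new(master_auth_key, domain, hashlib.sha256).digest()

# The wallet would then sign the challenge from the QR code with this key
# (as a secp256k1 signature) and send the signature plus the corresponding
# public key back to the callback URL.
```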
FAQ
- **What is the advantage of tying this to Bitcoin and Lightning?** One big advantage is that your wallet is already keeping track of one seed, it is already a precious thing. If you had to keep track of a separate auth seed it would be arguably worse, more difficult to bootstrap the protocol, and arguably one of the reasons similar protocols, past and present, weren't successful.
- **Just signing in to websites? What else is this good for?** No, it can be used for authenticating to installable apps and physical places, as long as there is a service running an HTTP server somewhere to read the signature sent from the wallet. But yes, signing in to websites is the main problem to solve here.
- **Phishing attack! Can a malicious website proxy the QR from a third website and show it to the user so it can steal the signature and be able to login on the third website?** No, because the wallet will only talk to the callback URL, and it will either be controlled by the third website, so the malicious website won't see anything; or it will have a different domain, so the wallet will derive a different key and frustrate the malicious website's plan.
- **I heard SQRL had that same idea and it went nowhere.** Indeed. SQRL in its first version was basically the same thing as lnurl-auth, with one big difference: it was vulnerable to phishing attacks (see above). That was basically the only criticism it got everywhere, so the protocol creators decided to solve that by introducing complexity to the protocol. While they were at it they decided to add more complexity for managing accounts and so much more crap that the spec, which initially was a single page, ended up becoming 136 pages of highly technical gibberish. Then all the initial network effect it had, libraries and apps, were trashed and nowadays no one can do anything with it (but, see, there are still people who love the protocol writing in a 90's forum with no clue of anything besides their own Java).
- **We don't need this, we need WebAuthn!** WebAuthn is essentially the same thing as lnurl-auth, but instead of being simple it is complex, instead of being open and decentralized it is centralized in big corporations, and instead of relying on a key generated by your own device it requires an expensive hardware HSM you must buy and whose manufacturer you must trust. If you like WebAuthn and you like Bitcoin you should like lnurl-auth much more.
- **What about BitID?** This is another one that is very similar to lnurl-auth, but without the anti-phishing prevention and extra privacy given by making one different key for each service.
- **What about LSAT?** It doesn't compete with lnurl-auth. LSAT, as far as I understand it, is for when you're buying individual resources from a server, not authenticating as a user. Of course, LSAT can be repurposed as a general authentication tool, but then it will lack features that lnurl-auth has, like the property of having keys generated independently by the user from a common seed and a standard way of passing authentication info from one medium to another (like signing in to a website at the desktop from the mobile phone, for example).
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The Cause
Menger's Principles of Economics is the only book that emphasizes the CAUSE the whole time. scientists all seem not to know, or to always forget, that things have causes, and that true knowledge is knowledge of the causes of things.
cause is a metaphysical category far superior to any correlation or hypothesis-test result; it cannot be discovered by any econometric artifice nor reduced to mere statistical temporal antecedence. the causes of phenomena cannot be proven scientifically, but they can be known.
Menger's book tells the reader the causes of various economic phenomena and links them together in such a way that the chaotic world of the economy seems to acquire an order the moment you read it. it is a magical, indescribable feeling.
when I recommended it to you, what I wanted was to imbue you with the spirit of the search for the causes of things. after reading it, you are able to perceive causal continuity in the most complex phenomena of today's economy, to see the causes running between every government action and its various consequences in human life. I do this every day, and it is the best feeling in the world when the chaos of the news in the newspaper's Economy section -- which makes no sense to the very journalist who wrote it (so much so that he gets everything wrong) -- falls into place within an ordered system of causes and consequences.
I probably always get some or even many points wrong, but even so it is marvelous. or rather, it is even more marvelous when I discover an error and reinsert the correction into that beautiful rationalization of the order of the economic world, which is the order of God.
from a scrap note to T.P.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The infinite library
I have now forgotten the name of the Jorge Luis Borges short story in which that library is described, as well as its specific details. I had read the story and had never noticed that it settles the question of whether randomness is capable of producing valuable things. I actually needed Wikipedia to tell me that.
Some years ago I raised this question with a group of friends without knowing it was such a well-worn, cheap question. In my example it was a dog walking over drawn letters, not a monkey at a typewriter. My conclusion from the discussion was that no matter what the dog wrote, without an intelligence capable of understanding it, none of it would be anything but random letters.
Borges settles the whole matter by imagining a library containing everything the dog had written throughout the entire infinity in which it carried out the experiment, and which therefore contains all knowledge about everything and every possible literary work -- but between each very good, or at least readable, page or sentence there are tons of completely random books, and a person can spend a lifetime inside that library, which contains so much important knowledge, and still learn nothing, because they will never find the right books.
Everything would be in its blind volumes. Everything: the detailed history of the future, Aeschylus' The Egyptians, the exact number of times that the waters of the Ganges have reflected the flight of a falcon, the secret and true nature of Rome, the encyclopedia Novalis would have constructed, my dreams and half-dreams at dawn on August 14, 1934, the proof of Pierre Fermat's theorem, the unwritten chapters of Edwin Drood, those same chapters translated into the language spoken by the Garamantes, the paradoxes Berkeley invented concerning Time but didn't publish, Urizen's books of iron, the premature epiphanies of Stephen Dedalus, which would be meaningless before a cycle of a thousand years, the Gnostic Gospel of Basilides, the song the sirens sang, the complete catalog of the Library, the proof of the inaccuracy of that catalog. Everything: but for every sensible line or accurate fact there would be millions of meaningless cacophonies, verbal farragoes, and babblings. Everything: but all the generations of mankind could pass before the dizzying shelves – shelves that obliterate the day and on which chaos lies – ever reward them with a tolerable page.
I have the impression that the gigantic volume of articles, posts, books and everything else being published is turning the world into that library. There is so much to read that it is hard to find what is worthwhile. People need to stop writing.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28IPFS problems: Shitcoinery
IPFS was advertised to the Ethereum community since the beginning as a way to "store" data for their "dApps". I don't think this is harmful in any way, but for some reason it may have led IPFS developers to focus too much on Ethereum stuff. Once I watched a talk showing libp2p developers -- despite being ignored by the Ethereum team (which ended up creating their own agnostic p2p library) -- dedicating an enormous amount of work to getting a libp2p app running in the browser talking to a normal Ethereum node.
The always somewhat-abandoned "Awesome IPFS" site is a big repository of "dApps", some of which don't even have their landing pages up anymore: useless Ethereum smart contracts that for some reason use IPFS to store whatever useless data their users produce.
Again, per se it isn't a problem that Ethereum people are using IPFS, but it is at least confusing, maybe misleading, that when you search for IPFS most of the use-cases are actually Ethereum useless-cases.
See also
- Bitcoin, the only non-shitcoin
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Why IPFS cannot work, again
Imagine someone comes up with a solution for P2P content-addressed data-sharing that involves storing all the files' contents in all computers of the network. That wouldn't work, right? Too much data, if you think this can work then you're a BSV enthusiast.
Then someone comes up with the idea of not storing everything in all computers, but only some things on some computers, based on some algorithm to determine what data a node would store given its pubkey or something like that. Still wouldn't work, right? Still too much data no matter how much you spread it, but mostly incentives not aligned, would implode in the first day.
Now imagine someone says they will do the same thing, but instead of storing the full contents each node would only store a pointer to where each data is actually available. Does that make it better? Hardly so. Still, you're just moving the problem.
This is IPFS.
Now you have less data on each computer, but on a global scale that is still a lot of data.
No incentives.
And now you have the problem of finding the data. First, if you have some data you want the world to access, you have to broadcast information about it, flooding the network -- and everybody has to keep doing this continuously for every single file (or shard of a file) that is available.
And then whenever someone wants some data they must find the people who know about that, which means they will flood the network with requests that get passed from peer to peer until they get to the correct peer.
The more you force each peer to store, the worse it becomes to run a node and to store data on behalf of others -- but the less you force each peer to store, the more flooding you'll have on the global network, and the slower it will be for anyone to actually get any file.
But if everybody just saves everything to Infura or Cloudflare then it works, magic decentralized technology.
Related
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Who will build the roads?
Who will build the roads? In Lagoa Santa, the newest and best streets -- which in fact end up forming enormous webs of interconnected neighborhoods -- are built by the land developers who want the streets so that their lots will be worth more -- and who want other people to use the streets too. It is also these same developers who put up the light poles and lay the water pipes, though not before having to submit to the customary extortions practiced by COPASA and CEMIG.
If, when opening a subdivision, a condominium or a building, an individual or a company can without much trouble put in a street, electricity, water and sewage, why couldn't there be free competition in these markets? Even that old story that it is inefficient to run duplicate power lines so that electric companies can compete now strikes me as nonsense.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28How IPFS is broken
I once fell for this talk about "content-addressing". It sounds very nice. You know a certain file exists, you know there are probably people who have it, but you don't know where or if it is hosted on a domain somewhere. With content-addressing you can just say "start" and the download will start. You don't have to care.
Other magic properties that address common frustrations: webpages don't go offline, links don't break, valuable content always finds its way, other people will distribute your website for you, any content can be transmitted easily to people near you without anyone having to rely on third-party centralized servers.
But you know what? Saying a thing is good doesn't automatically make it possible and working. For example: saying stuff is addressed by their content doesn't change the fact that the internet is "location-addressed" and you still have to know where peers that have the data you want are and connect to them.
And what is the solution for that? A DHT!
DHT?
Turns out DHTs have a terrible incentive structure (as you would expect: no one wants to hold and serve data they don't care about to others for free) and the IPFS experience proves it doesn't work even in a small network like the IPFS of today.
If you have run an IPFS client you'll notice how much it clogs your computer. Or maybe you don't, if you are very rich and have a really powerful computer, but still, it's not something suitable to be run on the entire world, and on web pages, and servers, and mobile devices. I imagine there may be a lot of unoptimized code and technical debt responsible for these and other problems, but the DHT is certainly the biggest part of it. IPFS can open up to 1000 connections by default and suck up all your bandwidth -- and that's just for exchanging keys with other DHT peers.
Even if you're in the "client" mode and limit your connections you'll still get overwhelmed by connections that do stuff I don't understand -- and it makes no sense to run an IPFS node as a client, that defeats the entire purpose of making every person host files they have and content-addressability in general, centralizes the network and brings back the dichotomy client/server that IPFS was created to replace.
Connections?
So, DHTs are a fatal flaw for a network that plans to be big and interplanetary. But that's not the only problem.
Finding content on IPFS is the slowest experience ever, and for some reason I don't understand, downloading is even slower. Even if you are on the same LAN as another machine that has the content you need, it will still take hours to download some small file that would take seconds with `scp` -- and that's assuming IPFS managed to find the other machine at all, otherwise your command will just be stuck for days.
Now even if you ignore that IPFS objects should be content-addressable and not location-addressable and, knowing which peer has the content you want, you go there and explicitly tell IPFS to connect to that peer directly, maybe you can get a few seconds of (slow) download, but then IPFS will drop the connection and the download will stop. Sometimes -- but not always -- it helps to add the peer address to your bootstrap nodes list (but note this isn't something you should be doing at all).
IPFS Apps?
Now consider the kind of marketing IPFS does: it tells people to build "apps" on IPFS. It sponsors "databases" on top of IPFS. It basically advertises itself as a place where developers can just connect their apps to and all users will automatically be connected to each other, data will be saved somewhere between them all and immediately available, everything will work in a peer-to-peer manner.
Except it doesn't work that way at all. "libp2p", the IPFS library for connecting people, is broken and is rewritten every 6 months, but they keep their beautiful landing pages that say everything works magically and you can just plug it in. I'm not saying they should have everything perfect, but at least they should be honest about what they truly have in place.
It's impossible to connect to other people, after years there's no js-ipfs and go-ipfs interoperability (and yet they advertise there will be python-ipfs, haskell-ipfs, whoknowswhat-ipfs), connections get dropped and many other problems.
So basically all IPFS "apps" out there are just apps that want to connect two peers but can't do it manually because browsers and the IPv4/NAT network don't provide easy ways to do it and WebRTC is hard and requires servers. They have nothing to do with "content-addressing" anything, they are not trying to build "a forest of merkle trees" nor to distribute or archive content so it can be accessed by all. I don't understand why IPFS has changed its core message to this "full-stack p2p network" thing instead of the basic content-addressable idea.
IPNS?
And what about the database stuff? How can you "content-address" a database with values that are supposed to change? Their approach is to just save all values, past and present, and then use new DHT entries to communicate what are the newest value. This is the IPNS thing.
Apparently just after coming up with the idea of content-addressability IPFS folks realized this would never be able to replace the normal internet as no one would even know what kinds of content existed or when some content was updated -- and they didn't want to coexist with the normal internet, they wanted to replace it all because this message is more bold and gets more funding, maybe?
So they invented IPNS, the name system that introduces location-addressability back into the system that was supposed to be only content-addressable.
And how do they manage to do it? Again, DHTs. And does it work? Not really. It's limited, slow, much slower than normal content-addressing fetches, most of the times it doesn't even work after hours. But still although developers will tell it is not working yet the IPFS marketing will talk about it as if it was a thing.
Archiving content?
The main use case I had for IPFS was to store content that I personally cared about and that other people might care too, like old articles from dead websites, and videos, sometimes entire websites before they're taken down.
So I did that. Over many months I've archived stuff on IPFS. The IPFS API and CLI don't make it easy to track where stuff is. The `pin` command doesn't help, as it just throws your pinned hash into a sea of hashes and subhashes and you're never able to find again what you have pinned.
The IPFS daemon has a fake filesystem that is half-baked in functionality but allows you to locally address things by names in a tree structure. Very hard to update or add new things to, but still doable. It allows you to give names to hashes, basically. I even began to write a wrapper for it, but suddenly, after many weeks of careful content curation and distribution, all my entries in the fake filesystem were gone.
Despite not having lost any of the files, I did lose everything, as I couldn't find them in the sea of hashes I had on my own computer. After some digging, and with help from IPFS developers, I managed to recover part of it, but it involved hacks. My things had vanished because of a bug in the fake filesystem. The bug was fixed, but soon after I experienced a similar (new) bug. After that I even tried to build a service for hash archival and discovery, but as all the problems listed above began to pile up, I eventually gave up. There were also problems of content canonicalization, problems with the code the IPFS daemon uses to serve default HTML content over HTTP, problems with the IPFS browser extension, and others.
Future-proof?
One of the core advertised features of IPFS was that it made content future-proof. I'm not sure they used this expression, but basically you have content, you hash that, you get an address that never expires for that content, now everybody can refer to the same thing by the same name. Actually, it's better: content is split and hashed in a merkle-tree, so there's fine-grained deduplication, people can store only chunks of files and when a file is to be downloaded lots of people can serve it at the same time, like torrents.
But then come the protocol upgrades. IPFS has used different kinds of hashing algorithms, different ways to format the hashes, and will change the default algorithm for building the merkle-trees, so basically the same content now has a gigantic number of possible names/addresses, which defeats the entire purpose, and yes, files hashed using different strategies aren't automagically compatible.
Actually, the merkle algorithm could have been changed by each person on a file-by-file basis since the beginning (you could, for example, split a book file by chapter or page instead of by chunks of bytes) -- although probably no one ever did that. I know it's not easy to come up with the perfect hashing strategy on the first go, but the way these matters are being approached makes me think IPFS promoters aren't really worried about future-proofing, or maybe we're just in Beta phase forever.
Ethereum?
This is also a big problem. IPFS is built by Ethereum enthusiasts. I can't read the mind of people behind IPFS, but I would imagine they have a poor understanding of incentives like the Ethereum people, and they tend towards scammer-like behavior like getting a ton of funds for investors in exchange for promises they don't know they can fulfill (like Filecoin and IPFS itself) based on half-truths, changing stuff in the middle of the road because some top-managers decided they wanted to change (move fast and break things) and squatting fancy names like "distributed web".
The way they market IPFS (which is not the main thing IPFS was initially designed to do) as a "peer-to-peer cloud" is very seductive for Ethereum developers just like Ethereum itself is: as a place somewhere that will run your code for you so you don't have to host a server or have any responsibility, and then Infura will serve the content to everybody. In the same vein, Infura is also hosting and serving IPFS content for Ethereum developers these days for free. Ironically, just like the Ethereum hoax peer-to-peer money, IPFS peer-to-peer network may begin to work better for end users as things get more and more centralized.
More about IPFS problems:
- IPFS problems: Too much immutability
- IPFS problems: General confusion
- IPFS problems: Shitcoinery
- IPFS problems: Community
- IPFS problems: Pinning
- IPFS problems: Conceit
- IPFS problems: Inefficiency
- IPFS problems: Dynamic links
See also
- A crappy course on torrents, on the protocol that has done most things right
- The Tragedy of IPFS in a series of links, an ongoing Twitter thread.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Lagoa Santa: how to get there -- starting from the Belo Horizonte bus station

When you get off your bus at the Belo Horizonte bus station at a little past 4 in the morning, you will find yourself facing a cowboy drinking beer in his typical outfit at a bar right in the arrivals area. Go up the staircase on the right, which leads to the station's parking lot. Turn left and walk for about 400 meters, crossing an area where suspicious people -- though probably asleep standing up -- watch you, and then a little square occupied by a clan of beggars. When you spot an enormous obelisk in the middle of an intersection of two avenues, turn left and walk another 400 meters. You will see an enormous, old and beautiful station with a square in front of it, with beautiful water fountains. Run away from there and head to a stretch of street to the right of that square. An old stage from carnivals past will be standing more or less in the middle of the charming little cobblestone street: that is where you will catch your next bus.

To enter the station you need a card with rechargeable credits. A prudent traveler always leaves some credits on his card in order to avoid queues and other availability problems when arriving tired from a trip, in a hurry or at odd hours. That kind of person will realize he has been thoroughly swindled upon noticing that the credits on his card, topped up on his last visit to Belo Horizonte three months earlier, have expired and been absorbed by the public coffers. He will therefore have to buy more credits. The booth where the cards are topped up opens at 5am, but don't be surprised if it still hasn't opened when the first bus arrives, at 5:10am.

With some luck, a young man in a hoodie, authorized by the two or three bus-system inspectors chatting cheerfully nearby, will be operating the turnstile. He lets the drunks, the hustlers and the street kids in without paying. Quite empathetic and perceptive of other people's despair, this good fellow will probably let you in without paying too.

Once inside the bus, don't be intimidated by the loudmouths and bullies who, deeply offended at the driver for stopping at the stations after the previous buses ignored these exalted passengers waiting there, come shouting to have it out with him.

The bus's final stop, 40 minutes later, is the Morro Alto terminal. There you will see, if you look carefully among various buses and people who arouse your most honest suspicion, a dark, switched-off vehicle, numbered 5882, sheltering inside a driver and a fare collector sleeping the sleep of the just.

Wait by the door for another twenty minutes or so until, suddenly awake, the driver starts the bus, opens the doors and, gently, already begins to pull away. Run in, but then wait a while longer, while the people with loaded cards go through and take the best seats, until the fare collector wakes up and decides to charge you the fare in that old means of payment, once the most liquid of all: cash.

This last bus should finally take you to Lagoa Santa.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Jofer
Jofer was a different kind of player. At first sight, no: he looked the same, a combative defensive midfielder who chased down opposing attackers relentlessly, a good player. But that was not what set Jofer apart. Jofer was, let's say, a shooter.

It started in the semifinal of a junior tournament. Jofer's team needed a draw and was under heavy pressure from the opponents, but the game was 1-1 and looked like it would stay that way, in that footballing way in which things look like it, they really do. Except that in the 46th minute of the second half they conceded a ghost goal: Ruizinho from the other team came running down the left and, despite being left-footed, kept cutting inside, the defenders sort of assuming it was all over anyway, there should be only that one play left, the referee had given two minutes, Ruizinho shot, scored, and the goalkeeper, who only jumped after he had already seen there was no way, stood there cursing.

The ball went back to the center and was passed to Jofer; nobody even came to mark him, the other team was already celebrating, and rightly so, the referee was taking the mickey by letting the game continue, it was all over anyway. But no, he was right, one more minute of stoppage time, fair. In one minute you can score a goal. But how? Jofer thought of those NBA games in which, with a few hundredths of a second left, the point guard throws it at the basket any which way and sometimes scores. From behind midfield, maybe? I won't even have the strength to reach the goal. I'll become a joke, better pass it to Fumaça next to me and we lose without that humiliation at the end. But, heck, so what? I'll try anyway; worst case I'll say it was meant as a long pass and in a few days everyone forgets. He looked at his own foot, turned it a little sideways, outward and then inward (well, if I strike it from here, just right, who knows?), rolled the ball to the side and hit it. The ball went up scandalously, really very high, it must have climbed some 200 meters. Jofer had no way of having the slightest idea. Then it started coming down, the big goalkeeper running back under the crossbar and watching the ball, arriving and jumping just to follow it, to watch, hanging from the crossbar, the ball come in still quite high; it hit the inside of the side netting before hitting the ground, bouncing violently and bulging the net high up on the right side from the viewer's perspective.

But all of this was Jofer's dream. He dreamed it awake, one night when he took long to fall asleep, lying in his bed. He kept wondering whether it wouldn't be easy, if he trained a lot, to hit the goal from very far away, like in the dream, and whether one couldn't score goals like that. The next day he asked Brunildinho, the goalkeepers' coach. It was hard to save those balls, even more so if they went up really high, the goalkeeper loses perspective, the wind changes the trajectory at every instant, there's spin, it falls fast; but of course it wasn't worth training for that, the chance of hitting the goal was minuscule. But Jofer would only try it after training a lot and confirming what in his imagination seemed like an excellent idea.

He started training every day. First in secret, out of embarrassment before his teammates: he would arrive a bit early and stay there, shooting from the center circle. At the slightest sign of people approaching, he would stop and go gather the balls. Later, when he started hitting the target, he lost the embarrassment. Everybody at the club found it funny when they saw Jofer training and then heard the explanation from someone's mouth; nobody took it very seriously, but nobody found it entirely ridiculous either. People laughed, but deep down they were really rooting for it to work.

It so happened that in a game that wasn't worth much, an ugly little draw, in the 40th minute of the second half, the opponents' marking no longer pressing, everybody happy with the draw and wanting to stop playing already, Henrique, the left midfielder, humble, but still a bit intimidating to Jofer (he played really well), passed him the ball. Go on, try your madness. He took on the responsibility of our introspective defensive midfielder. It would be more believable if Jofer had missed, first time he tried, there was still plenty of time for him to have the chance to be a hero, nobody gets it right on the first try; but he got it right. Almost like in the dream, Lucas, the goalkeeper, wasn't expecting it; after he saw the shot he laughed, stepped up to catch the ball he judged would bounce in the box, but it went further, further and further, then Lucas was already running, except he started thinking it was going out, and he would just hang from the crossbar and do his part of being on the ball. It ended up that because of that goal they finished second in the group of that little tournament instead of third, and it made no difference at all.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28My personal experience (as a complete ignorant) of the blocksize debate in 2017
In the beginning of 2017 I didn't know Bitcoin was having a "blocksize debate". I had stopped paying attention to Bitcoin in 2014 after reading Tim Swanson's book on shitcoinery, and was surprised people still cared about Bitcoin while Ethereum and other fancy things were around.
My introduction to the subject was this interview with Andrew Stone and Andrew Clifford from Bitcoin Unlimited (I still don't know who these guys are). I listened to it and kind of liked the conspiracy theory about "a group of developers trying, against miners and users, to control the whole ecosystem by not allowing blocks to grow" (actually, if you listen to this interview that announced the creation of Blockstream and the sidechains whitepaper, it does sound like a government agent bribing all the Core developers into forming a consortium that will turn Bitcoin into an Ethereum-like shitcoin under their control -- but this is just a useless digression).
Some time later I listened to this interview with Jimmy Song and was introduced to the two hard forks and the conspiracies and the New York Agreement, and got excited because I didn't care about Bitcoin (I'm ashamed to remember this feeling) and wanted to see things changing, people fighting, Bitcoin burning, for no reason. Oddly, what I grasped from the interview was that Jimmy Song was defending the agreement and expecting everybody to fulfill it.
When the day actually came and "Bitcoin Cash" forked, I looked at it with pity because it looked like a clear failure from the beginning, but I still cheered for it a bit, still not knowing anything about the debate besides the fact that blocks were bigger on BCH, which looked like a very reductionist explanation to me.
"Of course it's not just making blocks bigger, that would be too simple, they probably have a very complex plan I'm not apt to understand", I thought.
To my surprise the entire argument was actually just that: bigger blocks, bigger blocks. I came to that conclusion by listening to tomwoods.com/1064, a debate in which reasonable arguments faced childish claims. That debate gave me perspective and was a clear, undisputed win for Jameson Lopp against Roger Ver.
Actually, some time before that I had listened to another Tom Woods Show episode thinking it was going to be an episode about Bitcoin, but in fact it was just propaganda about a debate I had almost forgotten. And nothing about Bitcoin, everything about "Bitcoin Cash" and how there were two Bitcoins, one legitimate and the other illegitimate.
So, from the perspective of someone who came to the debate totally fresh and listened only to the big-blocker arguments for a long time: they still don't convince anyone with some common sense (as I would like to think of myself); they just sound like mad dogs, and everything they say goes against themselves.
Fast forward to the present: with much more understanding of the issues in place, I started digging up some material from 2016-2017 about the debate to try to get more context, and found this ridiculous interview with Mike Hearn. It isn't a waste of time to listen to it if you're not familiar with the debate from that time.
As I should probably have expected from my experience with Epicenter.tv, both interviewers agree with Mike Hearn about his ridiculous claims about how (not his words) we have to subsidize the few thousand current Bitcoin users by preventing fees from increasing, and how there are no trade-offs to doing that -- and even with everybody agreeing they all manage to sound stupid. There's not a single defensible phrase in the entire interview, no criticism makes any sense, and it makes me feel bad for the guy, as he sounds so self-assured and obviously right.
After learning about these and other adventures of stupid people with high influence in the Bitcoin world trying to impose their idiocy on others, it feels even more odd and unexpected to find Bitcoin on the right track. Generally in politics the dumbest wins, but apparently not in Bitcoin.
Bitcoin is a miracle.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The flaw of "just use paypal/coinbase" arguments
For the millionth time I read somewhere that "custodial bitcoin is not bitcoin" and that "if you're going to use custodial, better use Paypal". No, actually it was "better use Coinbase", but I had heard the "PayPal" version in the past.
There are many reasons why using PayPal is not the same as using a custodial Bitcoin service or wallet that are obvious and not relevant here, such as the fact that you can't have Bitcoin balances on PayPal (or maybe now you can? but you can't send them around); plus all the reasons that are also valid for Coinbase, such as having to hand over all your data and selfies of yourself and your government documents and so on -- but let's ignore these reasons for now.
The most important reason why it isn't the same thing is that when you're using Coinbase you are stuck in Coinbase. Your Coinbase coins cannot be used to pay anyone that isn't in Coinbase. So Coinbase-style custodianship doesn't help Bitcoin. If you want to move out of Coinbase you have to withdraw from Coinbase.
Custodianship on Lightning is of a very different nature. You can pay people from other custodial platforms and people that are hosting their own Lightning nodes and so on.
That kind of custodianship doesn't do any harm to anyone, doesn't fracture the network, doesn't reduce the network effect of Lightning, in fact it increases it.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28idea: Per-paragraph paywalls
Using the lnurl-allowance protocol, a website could, instead of putting a paywall over the entire site, charge a reader for only the paragraphs they read. Of course this requires trust from the reader in the website, but this is normal. The website could just hide the rest of the article until an invoice for the paragraph just read was paid.
This idea came from Colin from the Unhashed Podcast.
Could also work with podcasts and videos.
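A minimal sketch of what the server side of this could look like, assuming some lnurl-allowance machinery exists behind the scenes (all names and endpoints here are made up for illustration):

```go
package main

import (
	"fmt"
	"net/http"
)

var paragraphs = []string{
	"First paragraph, given away for free.",
	"Second paragraph, unlocked after the first invoice is paid.",
	"Third paragraph, and so on.",
}

// paid[reader] = how many paragraphs this reader has paid for. A real
// implementation would check lnurl-allowance payments instead of
// trusting a query parameter; this only shows the shape of the flow.
var paid = map[string]int{}

func article(w http.ResponseWriter, r *http.Request) {
	reader := r.URL.Query().Get("reader")
	if r.URL.Query().Get("just_paid") == "1" { // pretend an invoice settled
		paid[reader]++
	}
	unlocked := paid[reader] + 1 // the first paragraph is free
	if unlocked > len(paragraphs) {
		unlocked = len(paragraphs)
	}
	for _, p := range paragraphs[:unlocked] {
		fmt.Fprintln(w, p)
	}
	if unlocked < len(paragraphs) {
		fmt.Fprintf(w, "[pay an invoice to reveal paragraph %d]\n", unlocked+1)
	}
}

func main() {
	http.HandleFunc("/article", article)
	http.ListenAndServe(":8080", nil)
}
```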
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28SummaDB
This was a hierarchical database server similar to the original Firebase. Records were stored in a LevelDB under different paths, like:

```
/fruits/banana/color
:yellow
/fruits/banana/flavor
:sweet
```

And they could be queried by path too, using HTTP. A call to `http://hostname:port/fruits/banana`, for example, would return a JSON document like

```json
{ "color": "yellow", "flavor": "sweet" }
```

While a call to `/fruits` would return

```json
{ "banana": { "color": "yellow", "flavor": "sweet" } }
```
`POST`, `PUT` and `PATCH` requests also worked.

In some cases the values would be under a special `"_val"` property to disambiguate them from paths. (I may be missing some other details that I forgot.)
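Presumably -- and this is a reconstruction from memory, not actual SummaDB output -- a path that had both a value of its own and children underneath would be serialized as something like:

```json
{
  "fruits": {
    "_val": "all the fruits we know about",
    "banana": { "color": "yellow" }
  }
}
```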
GraphQL was also supported as a query language, so a query like

```graphql
query { fruits { banana { color } } }
```

would return `{"fruits": {"banana": {"color": "yellow"}}}`.

SummulaDB
SummulaDB was a browser/JavaScript build of SummaDB. It ran the same Go code compiled with GopherJS, using PouchDB as the storage backend, if I remember correctly.
It had replication between browser and server built-in, and one could replicate just subtrees of the main tree, so you could have stuff like this in the server:
json { "users": { "bob": {}, "alice": {} } }
And then only allow Bob to replicate `/users/bob` and Alice to replicate `/users/alice`. I am sure the required auth stuff was also built in.

There was also a PouchDB plugin to make this process smoother and data access more intuitive (it would hide the `_val` stuff and allow properties to be accessed directly; today I wouldn't waste time working on these hidden magic things).

The computed properties complexity
The next step, which I never managed to get fully working and caused me to give it up because of the complexity, was the ability to automatically and dynamically compute materialized properties based on data in the tree.
The idea was partly inspired by CouchDB's computed views and how limited they were. I wanted a thing that would be super powerful, like, given

```json
{
  "matches": {
    "1": { "team1": "A", "team2": "B", "score": "2x1", "date": "2020-01-02" },
    "2": { "team1": "D", "team2": "C", "score": "3x2", "date": "2020-01-07" }
  }
}
```
one should be able to add a computed property at `/matches/standings` that computed the scores of all teams after all matches, for example.

I tried to complete this in multiple ways, but they all added much more complexity than I could handle. Maybe it would have worked better in a more flexible, powerful, functional language, or if I had more time and patience, or more people.
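The standings computation itself was never the hard part: in isolation it is just a fold over the matches, as in the sketch below (using the example data above; the unsolved problem was recomputing this automatically and incrementally whenever anything under `/matches` changed):

```go
package main

import (
	"fmt"
	"strings"
)

// A match as stored under /matches in the example above.
type Match struct{ Team1, Team2, Score string } // Score like "2x1"

// standings folds all matches into points per team (3 for a win, 1 each
// for a draw), the materialized view that /matches/standings was
// supposed to keep up to date automatically.
func standings(matches []Match) map[string]int {
	pts := map[string]int{}
	for _, m := range matches {
		var a, b int
		fmt.Sscanf(strings.Replace(m.Score, "x", " ", 1), "%d %d", &a, &b)
		switch {
		case a > b:
			pts[m.Team1] += 3
		case b > a:
			pts[m.Team2] += 3
		default:
			pts[m.Team1]++
			pts[m.Team2]++
		}
	}
	return pts
}

func main() {
	fmt.Println(standings([]Match{
		{"A", "B", "2x1"},
		{"D", "C", "3x2"},
	})) // map[A:3 D:3] -- losers with zero points are simply absent
}
```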
Screenshots
This is just one very simple unfinished admin frontend client view of the hierarchical dataset.
- https://github.com/fiatjaf/summadb
- https://github.com/fiatjaf/summuladb
- https://github.com/fiatjaf/pouch-summa
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Parallel Chains
We want merged-mined blockchains. We want them because it is possible to do things in them that aren't doable in the normal Bitcoin blockchain, because there it is rightfully too expensive; and there are other things besides the world money that could benefit from a "distributed ledger" -- just like people believed in 2013 --, like issued assets and domain names (just the most obvious examples).
On the other hand we can't have -- like people believed in 2013 -- a copy of Bitcoin for every little idea with its own native token that is mined by proof-of-work and must get off the ground from being completely valueless into having some value by way of a miracle that operated only once with Bitcoin.
It's also not a good idea to have blockchains with custom merged-mining protocols (like Namecoin and Rootstock) that require Bitcoin miners to run their software and be an active participant and miner for that other network besides Bitcoin, because it's too cumbersome for everybody.
Luckily Ruben Somsen invented this protocol for blind merged-mining that solves the issue above. Although it doesn't solve the fact that each parallel chain still needs some form of "native" token to pay miners -- or it must use another method that doesn't use a native token, such as trusted payments outside the chain.
How does it work
With the `SIGHASH_NOINPUT`/`SIGHASH_ANYPREVOUT` soft-fork[^eltoo] it becomes possible to create presigned transactions that aren't related to any previous UTXO.

Then you create a long sequence of transactions (sufficient to last for many many years), each with an `nLockTime` of 1 and each spending the next (you create them from the last to the first). Since their `scriptSig` (the unlocking script) will use `SIGHASH_ANYPREVOUT`, you can obtain a transaction id/hash that doesn't include the previous TXO. You can, for example, in a sequence of transactions `A0-->B` (B spends output 0 from A), include the signature for "spending A0 on B" inside the `scriptPubKey` (the locking script) of "A0".

With the contraption described above it is possible to make that long string of transactions that everybody will know (and know how to generate), but where each transaction can only be spent by the next previously decided transaction, no matter what anyone does, and there always must be at least one block of difference between them.
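A deliberately toy sketch of the construction (no real serialization, no real signatures, no `nLockTime`; `sign` here is a stand-in for an ANYPREVOUT-style signature that commits to the spending transaction but not to the outpoint it spends, which is the property that makes presigning the whole string possible):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// A toy transaction with a single output; the locking script embeds the
// signature that authorizes the one transaction allowed to spend it.
type Tx struct {
	ScriptPubKey string
}

// sign stands in for a SIGHASH_ANYPREVOUT-style signature: it commits
// to the spending transaction's template but NOT to the outpoint being
// spent, which lets us sign spends of outputs that don't exist yet.
func sign(spendingTemplate string) string {
	h := sha256.Sum256([]byte("sig:" + spendingTemplate))
	return fmt.Sprintf("%x", h[:8])
}

func main() {
	const n = 5 // in reality: enough transactions to last many years

	// Build from the last transaction back to the first: each output
	// locks to the signature of the next, already-known transaction, so
	// the whole string is fixed in advance no matter what anyone does.
	chain := make([]Tx, n)
	next := "end-of-chain"
	for i := n - 1; i >= 0; i-- {
		chain[i] = Tx{ScriptPubKey: "anyprevout:" + sign(next)}
		next = chain[i].ScriptPubKey
	}

	for i, tx := range chain {
		fmt.Printf("tx %d locks to %s\n", i, tx.ScriptPubKey)
	}
}
```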
Then you combine it with `RBF`, `SIGHASH_SINGLE` and `SIGHASH_ANYONECANPAY`, so parallel chain miners can add inputs and outputs to be able to compete on fees by including their own outputs and getting change back, while at the same time writing a hash of the parallel block in the change output. And you get everything working perfectly: everybody tries to spend the same output from the long string, each with a different parallel block hash; only the highest bidder will get the transaction included on the Bitcoin chain, and thus only one parallel block will be mined.

See also
[^eltoo]: The same thing used in Eltoo.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28`OP_CHECKTEMPLATEVERIFY` and the "covenants" drama

There are many ideas for "covenants" (I don't think this concept helps in the specific case of examining proposals, but fine). Some people think "we" (it's not obvious who is included in this group) should somehow examine them and come up with the perfect synthesis.
It is not clear what form this magic gathering of ideas will take and who (or which ideas) will be allowed to speak, but suppose it happens and there is intense research and conversations and people (ideas) really enjoy themselves in the process.
What are we left with at the end? Someone has to actually commit the time, put in the effort and come up with a concrete proposal to be implemented on Bitcoin, and whatever the result is, it will have trade-offs. Some great features will not make it into this proposal, others will make it in a worsened form, and some will be contemplated very nicely; there will be some extra costs related to maintenance or code complexity that will have to be accepted. Someone, a concrete person, will decide upon these things using their own personal preferences and biases, and many people will not be pleased with their choices.
That has already happened. Jeremy Rubin has already conjured all the covenant ideas in a magic gathering that lasted more than 3 years and came up with a synthesis that has the best trade-offs he could find. CTV is the result of that operation.
The fate of CTV in the popular opinion illustrated by the thoughtless responses it has evoked such as "can we do better?" and "we need more review and research and more consideration of other ideas for covenants" is a preview of what would probably happen if these suggestions were followed again and someone spent the next 3 years again considering ideas, talking to other researchers and came up with a new synthesis. Again, that person would be faced with "can we do better?" responses from people that were not happy enough with the choices.
And unless some famous Bitcoin Core or retired Bitcoin Core developers were personally attracted to this synthesis, they would take a long time to review it and give their blessing to this new synthesis.
To summarize the argument of this article: the actual issue in the current CTV drama is that there exist hidden criteria for proposals to be accepted by the general community into Bitcoin, and no one has these criteria clear in their minds. It is not as simple nor as straightforward as "do research", nor is it as humanly impossible as "get consensus"; it has a much bigger social element to it, but I also do not know the exact form of these hidden criteria.
This is said not to blame anyone -- except the ignorant people who are not aware of the existence of these things, just keep repeating completely false and unhelpful advice to Jeremy Rubin, and are not self-aware enough to ever realize what they're doing.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Flowi.es
At the time I thought Workflowy had the ideal UI for everything. I wanted to implement my custom app maker on it, but ended up doing this: a platform for enhancing Workflowy with extra features:
- An email reminder based on dates input in items
- A website generator, similar to Websites For Trello, also based on Classless Templates
Also, I had forgotten that this was also based on CouchDB and had some couchapp functionalities.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Deodorant pill

In some episode or other of Aleixo FM, Bruno Aleixo says that drunk people always have the best ideas, and then tells an idea he had while drunk: a pill that works as deodorant. Instead of applying spray or roll-on deodorant, a person can just take the pill and that's it, it's much more practical, and in cold weather one can get dressed faster, without having to stand there applying something with a fully bare torso. When Busto asks him about the possibility of something like this being manufactured, he says he doesn't know, he's not a scientist, he just has the ideas.

This silly passage from a comedy show hides a truth about the scientistic doctrine that pervades society. The doctrine according to which it is from science that technological innovations, and innovations of all kinds, come, and therefore the State must take money from working people and give it to scientists. At this point nobody knows anymore what a scientist is; all concreteness is gone, only the name remains: "scientist". So they go looking for this scientist: it's some guy who graduated from a university and is doing a master's degree. There, just give money to this guy and everything will be fine.

Apart from the problem of the disconnect between reality and the thesis, there is also, of course, the problem of the thesis itself: it makes no sense for a scientist to go around looking for ways to realize an idea, which nobody knows is even possible or desirable, that he or someone else had -- quite the contrary (but I won't say here what the scientist was supposed to do instead, because that would be contradictory and I don't think scientists should even exist).

What I really wanted to say was: the entire scientific apparatus of our society, all the departments, universities, budgets and grants and journals, all of it boils down to a bunch of people trying to figure out how to make a deodorant pill.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Zettelkasten
https://writingcooperative.com/zettelkasten-how-one-german-scholar-was-so-freakishly-productive-997e4e0ca125 (a somewhat stupid article, but useful).

This incredible technique of saving notes without categories, without folders, without any predefined hierarchy -- just making references from one note to another and supposedly letting an order (or a heterarchy, as they said) emerge from the chaos -- seems to be what was missing for me to finally manage to write down my thoughts and ideas in a decent way. We shall see.

Oh, and I'm going to use that `neuron` thing that also generates websites from the notes? I think it will be good.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28On the state of programs and browsers
There are basically (not exhaustively) 2 kinds of programs one can run on a computer nowadays:
1.1. A program that is installed, permanent, has direct access to the Operating System, can draw whatever it wants, modify files, interact with other programs and so on;
1.2. A program that is transient, fetched from someone else's server at run time, interpreted, rendered and executed by another program that bridges the access of that transient program to the OS and other things.
Meanwhile, web browsers have basically (not exhaustively) two use cases:
2.1. Display text, pictures and videos hosted on someone else's computer;
2.2. Execute incredibly complex programs that are fetched at run time, executed and so on -- you get it, it's the same 1.2.
These two use cases for browsers are at big odds with one another. While stretching itself to become more and more a platform for programs that can do basically anything (in the 1.1 sense), the browser is still restricted to being a 1.2 platform. At the same time, websites that were supposed to be 2.1 sometimes get confused and start acting as if they were 2.2 -- and other confusing mixed-up stuff.
I could go on for hours with philosophical inquiries on the nature of browsers, on how rewriting everything in JavaScript is not healthy, or on where everything went wrong, but I think other people have done this already.
One thing that bothers me a lot, though, is this: computers can do a lot of things, and with the internet and the current state of the technology it's fairly easy to implement tools that would help in many aspects of human existence and provide high-quality, useful programs; with the help of a server to coordinate access, store data, authenticate users and so on, many things are possible. However, due to the nature of UI in the browser, it's very hard to get any useful tool to users.
Writing a UI, even the most basic UI imaginable (some text input boxes and some buttons, or a table), can take a long time -- always more than the time necessary to code the actual core features of whatever program is being developed -- and that is assuming the person capable of writing interesting programs that do the functionality in the backend is also capable of dealing with JavaScript and the giant amount of frameworks, transpilers, styling stuff, CSS, the fact that all this is built on top of HTML, and so on.
This is not good.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28A big Ethereum problem that is fixed by Drivechain
While reading the following paragraphs, assume Drivechain itself will be a "smart contract platform", like Ethereum. And that it won't be used to launch an Ethereum blockchain copy, but instead each different Ethereum contract could be turned into a different sidechain under BIP300 rules.
A big Ethereum problem
Anyone can publish any "contract" to Ethereum. Often people will come up with somewhat interesting ideas and publish them. Since they want money they will add an unnecessary token and use that to bring revenue to themselves, gamify the usage of their contract somehow, and keep some control over the supposedly open protocol they've created by keeping a majority of the tokens. They will use the profits on marketing and branding, have a visual identity, a central website and a forum with support personnel and so on: their somewhat interesting idea have become a full-fledged company.
If they have success then another company will appear in the space and copy the idea, launch it using exactly the same strategy with a tweak, then try to capture the customers of the first company and new people. And then another, and another, and another. Very often these contracts require some network effect to work, i.e., they require people to be using them so that others will use them. The fact that the market is now split into multiple companies offering roughly the same product hurts that, such that none of these protocols ever get enough usage to become really useful in the way they were first conceived. At this point it doesn't matter though: they get some usage, and they use that in their marketing material. It becomes a race to pump the value of the tokens, and the current usage is just another data point used for that purpose. The company will even start giving out money to attract new users and make other weird moves that have no relationship with the initial somewhat interesting idea.
Once in a lifetime it happens that the first implementer of these things is not a company seeking profits, but some altruistic developer or company that believes in Ethereum and wants to see it grow -- or, more likely, someone financed by the Ethereum Foundation, which allegedly doesn't like these token schemes and would prefer everybody to use the token they issued first, the ETH --, but that's a fruitless enterprise, because someone else will copy the idea anyway and turn it into a company as described above.
How Drivechain fixes it
In the Drivechain world, if someone had an idea, they would -- as it happens all the time with Bitcoin things -- publish it in a public forum. Other members of the community would evaluate that idea, add or remove things, all interested parties would contribute to make it the best possible incarnation of that idea. Once the design was settled, someone would volunteer to start writing the code to turn that idea into a sidechain. Maybe some company would fund those efforts and then more people would join. It's not a perfect process and one that often involves altruism, but Bitcoin inspires people to do these things.
Slowly, the thing would get built, tested, activated as a sidechain on testnet, tested more, and at this point luckily the entire community of interested Bitcoin users and miners would have grown to like that idea and see its benefits. It could then be proposed to be activated according to BIP300 rules.
Once it was activated, the entire pool of interested users would join it. And it would be impossible for someone else to create a copy of that because everybody would instantly notice it was a copy. There would be no token, no one profiting directly from the operations of that "smart contract". And everybody would be incentivized to join and tell others to join that same sidechain since the network effect was already the biggest there, they will know more network effect would only be good for everybody involved, and there would be no competing marketing and free token giveaways from competing entities.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28A response to Achim Warner's "Drivechain brings politics to miners" article
I mean this article: https://achimwarner.medium.com/thoughts-on-drivechain-i-miners-can-do-things-about-which-we-will-argue-whether-it-is-actually-a5c3c022dbd2
There are basically two claims here:
1. Some corporate interests might want to secure sidechains for themselves and thus they will bribe miners to have these activated
First, it's hard to imagine why they would want such a thing. Are they going to make a proprietary KYC chain only for their users? They could do that in a corporate way, or with a federation, like Facebook tried to do, and that would provide more value to their users than a cumbersome pseudo-decentralized system in which they don't even have the power to issue currency. Also, if Facebook couldn't get away with their federated shitcoin because the government was mad, what says the government won't be mad at a sidechain? And finally, why would Facebook want to give custody of their proprietary closed-garden Bitcoin-backed ecosystem coins to a random, open and ever-changing set of miners?
But even if they do succeed in making their sidechain, and it is very popular, such that it pays miners fees and people love it -- well, then why not? Let them have it. It's not going to hurt anyone more than a proprietary shitcoin would anyway. If Facebook really wants a closed ecosystem backed by Bitcoin, that probably means we are winning big.
2. Miners will be required to vote on the validity of debatable things
He cites the examples of a PoS sidechain, an assassination market, a sidechain full of Nazis, a sidechain deemed illegal by the US government and so on.
There is a simple solution to all of this: just kill these sidechains. Either miners can take the money from these to themselves, or they can just refuse to engage and freeze the coins there forever, or they can even give the coins to governments, if they want. It is an entirely good thing that evil sidechains or sidechains that use horrible technology that doesn't even let us know who owns each coin get annihilated. And it was the responsibility of people who put money in there to evaluate beforehand and know that PoS is not deterministic, for example.
About governments censoring and wanting to steal money, or criminals using sidechains: I think the argument is very weak, because these same things can happen today and may even be happening already, i.e., governments ordering mining pools not to mine such and such transactions from such and such people, or forcing them to reorg to steal money from criminals and whatnot. All this is expected to happen in normal Bitcoin. But both in normal Bitcoin and in Drivechain, decentralization fixes that problem by making it so governments cannot catch all the miners required to control the chain like that -- and in fact fixing that problem is the only reason we need decentralization.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28On HTLCs and arbiters
This is another attempt at conveying the same information that should be in Lightning and its fake HTLCs. It assumes you know everything about Lightning and will just highlight one point. This is also valid for PTLCs.
The protocol says HTLCs are trimmed (i.e., not actually added to the commitment transaction) when the cost of redeeming them in fees would be greater than their actual value.
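The rule is simple arithmetic. A rough sketch, using the non-anchor weight estimate from BOLT #3 (the feerates below are my guesses for producing the 5000sat and 18000sat cutoffs mentioned later in this article):

```go
package main

import "fmt"

// isTrimmed sketches the BOLT #3 rule: an offered HTLC gets no output
// in the commitment transaction when its amount doesn't cover the
// channel's dust limit plus the fee of the HTLC-timeout transaction
// that would redeem it (663 is the BOLT #3 weight estimate for that
// transaction, pre-anchors).
func isTrimmed(amountSat, feeratePerKw, dustLimitSat int64) bool {
	htlcTimeoutFee := 663 * feeratePerKw / 1000
	return amountSat < dustLimitSat+htlcTimeoutFee
}

func main() {
	// The dust limit of 546 is just a common default; the feerates are
	// illustrative, chosen to land near the cutoffs cited below.
	fmt.Println(isTrimmed(5000, 6800, 546))   // true: trimmed
	fmt.Println(isTrimmed(18000, 27000, 546)) // true at higher feerates
	fmt.Println(isTrimmed(18000, 6800, 546))  // false when fees are lower
}
```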
This is often dismissed as an unimportant fact (people will say "it's trusted for small payments, no big deal"), but I think it is indeed very important, for 3 reasons:
- Lightning absolutely relies on HTLCs actually existing because the payment proof requires them. The entire security of each payment comes from the fact that the payer has a preimage that comes from the payee. Without that, the state of the payment becomes an unsolvable mystery. The inexistence of an HTLC breaks the atomicity between the payment going through and the payer receiving a proof.
- Bitcoin fees are expected to grow with time (arguably the reason Lightning exists in the first place).
- MPP makes payment sizes shrink, therefore more and more Lightning payments are to be trimmed. As I write this, the mempool is clear and still payments smaller than about 5000sat are being trimmed. Two weeks ago the limit was at 18000sat, which is already below the minimum most MPP splitting algorithms will allow.
Therefore I think it is important that we come up with a different way of ensuring payment proofs are being passed around in the case HTLCs are trimmed.
Channel closures
Worse than not having HTLCs that can be redeemed is the fact that in the current Lightning implementations channels will be closed by the peer once an HTLC timeout is reached, either to fulfill an HTLC for which that peer has a preimage or to redeem back the expired HTLCs the other party hasn't fulfilled.
To everybody's surprise, nodes will do this even when the HTLCs in question were trimmed and therefore cannot be redeemed at all. It's very important that nodes stop doing that, because it makes no economic sense at all.
However, that is not so simple, because once you decide you're not going to close the channel, what is the next step? Do you wait until the other peer tries to fulfill an expired HTLC and tell them you won't agree and that they must cancel it instead? That could work sometimes, if they're honest (and they have no incentive not to be, in this case). What if they say they tried to fulfill it before but you were offline? Now you're confused: you don't know if you were offline or they were offline, or if they are trying to trick you. Then unsolvable issues start to emerge.
Arbiters
One simple idea is to use trusted arbiters for all trimmed HTLC issues.
This idea solves both the protocol issue of getting the preimage to the payer once it is released by the payee -- and what to do with the channels once a trimmed HTLC expires.
A simple design would be to have each node hardcode a set of trusted other nodes that can serve as arbiters. Once a channel is opened between two nodes they choose one node from both lists to serve as their mutual arbiter for that channel.
Then whenever one node tries to fulfill an HTLC but the other peer is unresponsive, they can send the preimage to the arbiter instead. The arbiter will then try to contact the unresponsive peer. If it succeeds, then done, the HTLC was fulfilled offchain. If it fails then it can keep trying until the HTLC timeout. And then if the other node comes back later they can eat the loss. The arbiter will ensure they know they are the ones who must eat the loss in this case. If they don't agree to eat the loss, the first peer may then close the channel and blacklist the other peer. If the other peer believes that both the first peer and the arbiter are dishonest they can remove that arbiter from their list of trusted arbiters.
The same happens in the opposite case: if a peer doesn't get a preimage they can notify the arbiter they haven't received anything. The arbiter may try to ask the other peer for the preimage and, if that fails, settle the dispute in favor of that first peer, who can proceed to fail the HTLC it has with someone else on that route.
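To make the shape of the thing explicit, here is the entire decision logic as code (hypothetical, of course, since no such protocol exists; it only restates the rules described above):

```go
package main

import "fmt"

// The possible outcomes of a trimmed-HTLC dispute handled by an arbiter.
type Outcome string

const (
	FulfilledOffchain Outcome = "fulfilled offchain"     // arbiter reached the peer in time
	PeerEatsLoss      Outcome = "offline peer eats loss" // preimage deposited, peer came back too late
	FailedBack        Outcome = "HTLC failed backwards"  // no preimage showed up before the timeout
)

// resolve: the fulfilling peer deposits the preimage with the arbiter;
// what happens next depends only on whether the other peer becomes
// reachable before the HTLC timeout.
func resolve(preimageDeposited, peerReachableBeforeTimeout bool) Outcome {
	switch {
	case preimageDeposited && peerReachableBeforeTimeout:
		return FulfilledOffchain
	case preimageDeposited:
		return PeerEatsLoss
	default:
		return FailedBack
	}
}

func main() {
	fmt.Println(resolve(true, false)) // offline peer eats loss
}
```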
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28neuron.vim
I started using this neuron thing to create and update this same zettelkasten, but the existing vim plugin had too many problems, so I forked it and ended up changing almost everything.
Since the upstream repository was somewhat abandoned, most users and people who were trying to contribute upstream migrated to my fork too.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Module Linker
A browser extension that reads source code on GitHub and tries to find links to imported dependencies so you can click on them and navigate through either GitHub or package repositories or base language documentation. Works for many languages at different levels of completeness.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Using Spacechains and Fedimint to solve scaling
What if, instead of trying to create complicated "layer 2" setups involving nouveau cryptographic techniques, we just did the following:
- we take the Fedimint source code and remove the "mint" stuff, and just use their federation stuff to secure coins with multisig;
- then we make a spacechain;
- and we make the federations issue multisig-btc tokens on it;
- and then we put some uniswap-like thing in there to allow these tokens to be exchanged freely (see the sketch below).
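By "uniswap-like thing" I mean nothing more than a bare constant-product market maker between tokens from two different issuers -- a toy version (real pool math would add fees, liquidity shares and more careful rounding; the token names are just illustrative):

```go
package main

import "fmt"

// Pool holds reserves of two federation-issued btc-tokens and keeps
// the product of the reserves constant across swaps (x*y = k).
type Pool struct{ X, Y int64 } // e.g. X = wos-btc, Y = zebedee-btc

// SwapXForY deposits dx of token X and withdraws dy of token Y such
// that (X+dx)*(Y-dy) == X*Y, rounding in the pool's favor.
func (p *Pool) SwapXForY(dx int64) (dy int64) {
	k := p.X * p.Y
	newY := k / (p.X + dx) // rounds down, so the pool keeps the dust
	dy = p.Y - newY
	p.X += dx
	p.Y = newY
	return dy
}

func main() {
	p := &Pool{X: 1_000_000, Y: 1_000_000} // sats of each token
	fmt.Println(p.SwapXForY(10_000))       // ~9900: price moves with size
}
```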
Why?
The recent spike in fees caused by Ordinals and BRC-20 shitcoinery has shown that Lightning isn't a silver bullet. Channels are too fragile; it costs a lot to open a channel in a high-fee environment, to run a routing node, and so on.
People who want to keep using Lightning are instead flocking to the big Lightning custodial providers: WalletofSatoshi, ZEBEDEE, OpenNode and so on. We could leverage the trust people have in these companies (and in the individuals operating shadow Lightning providers) and turn each of them into a btc-token issuer. Each issues their own token, transactions flow freely. Each person can hold only assets from the issuers they trust more.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Just malinvestment
Traditionally the Austrian Theory of Business Cycles has been explained and reworked in many ways, but the most widely accepted version (or the one closest to the Mises or Hayek views) is that banks (or the central bank) cause the general interest rate to decline by creating new money, and that prompts entrepreneurs to invest in projects of longer duration. This can be confusing, because sometimes entrepreneurs embark on very short-term projects during one of these bubbles and still contribute to the overall cycle.
The solution to the "longer term" confusion is to think of the entire economy going long-term, not of individual entrepreneurs. So if one entrepreneur makes an investment in a thing that looks simple, he may actually, knowingly or not, be inserting himself into a bigger machine that is involved in producing longer-term things. Incidentally, this thinking also solves the biggest criticism of the Austrian Business Cycle Theory, that of the rational-expectations people who say: "oh, but can't the entrepreneurs know that the interest rate is artificially low and decide not to make long-term investments?" ("and if they don't know, shouldn't they lose money and be replaced, like in a normal economy flow, blablabla?"). Well, the answer is that they are not really relying on the interest rate, they are only looking for profit opportunities, and this is the key to another confusion that has always followed my thinking about this topic.
If a guy opens a bar in an area of a town where many new buildings are being built during a "housing bubble" he may not know, but he is inserting himself right into the eye of that business cycle. He expects all these building projects to continue, and all the people involved in that to be getting paid more and be able to spend more at his bar and so on. That is a bet that may or may not end up paying.
Now what does that bar investment have to do with the interest rate? Nothing. It is just a guy who saw a business opportunity in a place where hungry people with money had no bar to buy things in, so he opened a bar. Additionally, the guy has made some calculations about all the ending, ongoing and future building projects in the area, and about the people who would live or work in that area afterwards (after all, the buildings were being built with the expectation of being used), and so on; there are no interest rate calculations involved. And yet that may be a malinvestment, because some building projects will end up being canceled and the expected usage of the finished ones will turn out to be smaller than predicted.
This bubble may have been caused by a decline in interest rates that prompted some people to start buying houses they wouldn't have bought otherwise, but this is just a small detail. The bubble can only be kept going by a constant influx of new money into the economy, and the focus on the interest rate is wrong. If new money is printed and used by the government to buy ships, then there will be a boom and a bubble in the ship market, and that involves all the parts of the production process of ships, and also the bars that will be opened near the areas of town where ships are built, and the new people being hired with higher salaries to do things that will eventually contribute to the production of ships that will then be sold to the government.
It's not interest rates or the length of the production process that matters, it's just printed money and malinvestment.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28hledger-web
A Haskell app that uses Miso and hledger's Haskell libraries plus ghcjs to be compiled to a web page, and then adds optional remoteStorage so you can store your ledger data somewhere else.
This was my introduction to Haskell. It was also built at a time when I thought remoteStorage was a good idea that solved many problems, and that it could use some help in the form of just yet another somewhat-useless-but-cool project using it that could be added to their wiki.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Reasons why Lightning is not that great
Some Bitcoiners, me included, were fooled by hyperbolic discourse that presented Lightning as some magical scaling solution with no flaws. This is an attempt to list some of the actual flaws uncovered after 5 years of experience. The point of this article is not to say Lightning is a complete worthless piece of crap, but only to highlight the fact that Bitcoin needs to put more focus on developing and thinking about other scaling solutions (such as Drivechain, less crappy and more decentralized trusted channels networks and statechains).
Unbearable experience
Maintaining a node is cumbersome: you have to deal with closed channels, allocate funds, pay fees unpredictably, choose new channels to open, store channel state backups -- or you'll have to delegate all these decisions to some weird AI or to third-party services. It's not feasible for normal people.
Channels fail for no good reason all the time
Every time nodes disagree on anything they close channels. There have been dozens, maybe hundreds, of bugs that led to channels being closed in the past, and implementors have been fixing these bugs; but since these node implementations continue to be worked on and new features continue to be added, we can be quite sure that new bugs continue to be introduced.
Trimmed (fake) HTLCs are not sound protocol design
What would you tell me if I presented a protocol that allowed for transfers of users' funds across a network of channels and that these channels would pledge to send the money to miners while the payment was in flight, and that these payments could never be recovered if a node in the middle of the hop had a bug or decided to stop responding? Or that the receiver could receive your payment, but still claim he didn't, and you couldn't prove that at all?
These are the properties of "trimmed HTLCs", HTLCs that are uneconomical to have their own UTXO in the channel presigned transaction bundles, therefore are just assumed to be there while they are not (and their amounts are instead added to the fees of the presigned transaction).
Trimmed HTLCs, like any other HTLC, have timelocks, preimages and hashes associated with them -- which are properties relevant to the redemption of actual HTLCs onchain --, but unlike actual HTLCs these things have no actual onchain meaning since there is no onchain UTXO associated with them. This is a game of make-believe that only "works" because (1) payment proofs aren't worth anything anyway, so it makes no sense to steal these; (2) channels are too expensive to setup; (3) all Lightning Network users are honest; (4) there are so many bugs and confusion in a Lightning Network node's life that events related to trimmed HTLCs do not get noticed by users.
Also, so far these trimmed HTLCs have only been used for very small payments (although very small payments probably account for 99% of the total payments), so it is supposedly "fine" to have them. But, as fees rise, more and more HTLCs tend to become fake, which may make people question the sanity of the design.
Tadge Dryja, one of the creators of the Lightning Network proposal, has been critical of the fact that these things were allowed to creep into the BOLT protocol.
Routing
Routing is already very bad today, even though most nodes have a basically 100% view of the public network. The reasons: some nodes are offline, others are on Tor and unreachable or too slow, channels have their balance shifted in the wrong direction, so payments fail a lot -- which leads to the (bad) solution invented by professional node runners and large businesses of probing the network constantly in order to discard bad paths, which creates unnecessary load and increases the risk of channels being dropped for no good reason.
As the network grows -- if it indeed grows and doesn't centralize in a few hubs -- routing tends to become harder and harder.
While each implementation team makes its own decisions with regard to the best way to route payments, and these decisions may change at any time, it's worth noting, for example, that CLN will use MPP to split up any payment into any number of chunks of 10k satoshis, supposedly to improve routing success rates. While this often backfires and causes payments to fail when they should have succeeded, it also contributes to making it so there are proportionally more fake HTLCs than there should be, as long as the threshold for fake HTLCs is above 10k.
Payment proofs are somewhat useless
Even though payment proofs were seen by many (including me) as one of the great things about Lightning, the sad fact is that they do not work as proofs if people are not aware that they are proofs. Wallets do all they can to hide these details from users because it is considered "bad UX", and low-level implementors do not care very much to talk about them at all. There have been attempts from Lightning Labs to get rid of payment proofs entirely (which at the time sounded to me like a terrible idea, but now I realize they were not wrong).
Here's an anecdote: I've personally witnessed multiple episodes in which Phoenix wallet released the preimage without having actually received the payment (they did receive a minor part of the payment, but the payment was split in many parts). That caused my service, @lntxbot, to mark the outgoing payment as complete, only then to have to endure complaints from the users, because the receiving side, Phoenix, had not received the full amount. In these cases, if the protocol and the idea of preimages as payment proofs were respected, should I have been the one in charge of manually fixing user balances?
Another important detail: when an HTLC is sent and then something goes wrong with the payment, the channel has to be closed in order to redeem that payment. When the redeemer is on the receiver side, the very act of redeeming causes the preimage to be revealed and a proof of payment to be made available to the sender, who can then send it back to the previous hop, and the payment is proven without any doubt. But when this happens with fake HTLCs (which is the vast majority of payments, as noted above) there is no place in the world for a preimage, and therefore there are no proofs available. A channel is just closed, the payer loses money but can't prove a payment. He also can't send a proof back to the previous hop, so he is forced to say the payment failed -- even if he wasn't the one who declared that hop a failure and closed the channel, which should be a prerequisite. I wonder if this isn't the source of multiple bugs in implementations that cause channels to be closed unnecessarily. The point is: preimages and payment proofs are mostly a fiction.
Another important fact is that the proofs do not really prove anything if the keypair that signs the invoice can't be provably attached to a real world entity.
LSP-centric design
The first Lightning wallets to show up in the market -- LND as a desktop daemon (later with some GUIs on top of it, like Zap and Joule) and Anton's BLW and Eclair wallets for mobile devices, then later LND-based mobile wallets like Blixt and RawTX -- were all standalone wallets that were self-sufficient and meant to be run directly by consumers. Eventually, though, came Breez and Phoenix, which introduced the "LSP" model, in which a server would be trusted in various ways -- not directly with users' funds, but with their privacy, fees and other details -- but most importantly that LSP would be the primary source of channels for all users of that given wallet software. This was all fine, but as time passed new features were designed and implemented that assumed users would be running software connected to LSPs. The very idea of a user having a standalone mobile wallet was put out of the question. The entire argument for the implementation of the bolt12 standard, for example, hinged on the assumption that mobile wallets would have LSPs capable of connecting to Google messaging services and being able to "wake up" mobile wallets in order for them to receive payments. Other ideas, like a complicated standard for allowing mobile wallets to receive payments without having to be online all the time, just assume LSPs always exist; likewise changes to the expected BOLT spec behavior with regard to, for example, probing of mobile wallets.
Ark is another example of a kind of LSP that got so enshrined that it became a new protocol that depends on it entirely.
Protocol complexity
Even though the general idea of how Lightning is supposed to work can be understood by many people (as long as these people know how Bitcoin works), the Lightning protocol is not really easy: it will take a long time of big dedication for anyone to understand the details of the BOLTs -- and this is a bad thing if we want a world of users that have at least an idea of what they are doing. Moreover, with each new cool idea someone has that gets adopted by the protocol leaders, the protocol increases in complexity and some of the implementors are kicked out of the circle, which makes it easier for the remaining ones to proceed with more and more complexity. It's the same process by which Chrome won the browser wars, kicked out all competitors and proceeded to make a supposedly open protocol, but one that no one can implement, as it gets new and more complex features every day, all envisioned by the Chrome team.
Liquidity issues?
I don't believe these are a real problem if all the other things worked, but still, the old criticism that Lightning requires parking liquidity and that this has a cost is not a complete non-issue, especially given the LSP-centric model.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Carl R. Rogers on science

I believe the primary aim of science is to provide a safer and more satisfying hypothesis, conviction and faith for the investigator himself. To the extent that the scientist seeks to prove something to someone else -- a mistake I have incurred more than once -- I believe he is using science to remedy a personal insecurity, diverting it from its true creative role in the service of the individual.

Tornar-se Pessoa (On Becoming a Person), random page
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28contratos.alhur.es
A website that allowed people to fill in a form and get a standard Contrato de Locação (rental agreement).
Better than all the other "templates" that float around the internet, which are badly formatted `.doc` files.

It was fully programmable, so other templates could be added later, but I never did. This website made maybe one dollar in Google Ads (and Google has probably stolen that, like so many other dollars, with their bizarre requirements).
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28comentário pertinente de Olavo de Carvalho sobre atribuições indevidas de acontecimentos à "ordem espontânea"
Here is one example among a thousand others, taken from my class notes, of how the relations between deliberate and accidental factors in historical action are analyzed. Mr. Beltrão is INFINITELY BENEATH the possibility of discussing these things, and for that very reason attributes to me a simple-mindedness that is his own, not mine:
I have quoted this paragraph by Georg Jellinek a thousand times and will quote it again: "The phenomena of social life divide into two classes: those that are determined essentially by a directing will and those that exist or can exist without an organization owed to acts of will. The former are necessarily subject to a plan, to an order emanating from a conscious will, in opposition to the latter, whose ordering rests on quite different forces."
This distinction is crucial for historians and strategic analysts not because it is clear in every case, but precisely because it is not. The most common error in this order of studies consists in attributing to a conscious intention what results from an uncontrolled and sometimes uncontrollable combination of forces, or, conversely, in failing to see, behind an apparently fortuitous constellation of circumstances, the intelligence that planned and subtly directed the course of events.
An example of the first error is the Protocols of the Elders of Zion, which see behind practically everything bad that happens in the world the malign premeditation of a small number of people, a Jewish elite secretly gathered in some unknown place.
What makes this fantasy especially convincing, some time after its publication, is that some of the events foreseen in it take place right before our eyes. The hasty reader sees in this a confirmation, jumping imprudently from the observation of the fact to the imputation of authorship. Yes, some of the ideas announced in the Protocols were carried out, but not by a distinctly Jewish elite, much less to the benefit of the Jews, whose role in most cases consisted eminently in taking the blame. Many rich and powerful groups harbor ambitions of global domination, and, once the book was published -- a book that in certain passages displays flashes of authentic strategic genius of the Machiavellian type -- it was practically impossible that they would learn nothing from it and not try to put some of its schemes into practice, with the additional advantage that these already came with a pre-fabricated scapegoat. It is also impossible that there should be no one of Jewish origin in the middle or at the top of these groups. A little deforming selectivity is therefore enough to swap the cause for the effect and the innocent for the guilty.
But the most common error today is not that one. It is the opposite: the obstinate refusal to see any premeditation, any authorship, even behind remarkably convergent events which, without it, would have to be explained by the magical force of coincidences, by the action of angels and demons, by the "invisible hand" of market forces, or by hypothetical "laws of History" or "sociological constants" never proven, which in the observer's imagination direct everything anonymously and without human intervention.
The causes that generate this error are, roughly:
First: Reducing human actions to the effects of impersonal and anonymous forces requires the use of abstract generic concepts that automatically give this kind of approach the appearance of something very scientific. Much more scientific, to the lay observer, than the patient and meticulous historical reconstruction of the chains of facts that, under a veil of confusion, sometimes go back to a discreet and almost imperceptible initial authorship. Since the study of historico-political phenomena is increasingly an academic occupation whose success depends on funding, sponsorships, backing in the popular media and good relations with the establishment, it is almost inevitable that, faced with a question of this order, few will resist the temptation to kill the problem at once with two or three elegant generalizations and shine as sages of the occasion, instead of taking on the kind of historical tracing that can demand decades of research.
Second: Any group or entity that ventures into long-term historico-political actions must possess not only the means to undertake them but also, necessarily, the means to control their public repercussion, accentuating what suits it and covering up whatever might abort the intended results. This implies vast, deep and lasting interventions in the mental environment. [Etc. etc. etc.]
(on Facebook, July 17, 2013)
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28idea: Custom multi-use database app
Since 2015 I have had this idea of making one app that could be repurposed into a full-fledged app for all kinds of uses, like powering small business accounting and so on. Hackable and open like an Excel file, but more efficient, without the hassle of making tables, and using ids and indexes under the hood so different kinds of things can be related together in various ways.
It is not a concrete thing, just a generic idea that has taken multiple forms along the years and may take others in the future. I've made quite a few attempts at implementing it, but never finished any.
I used to refer to it as a "multidimensional spreadsheet".
Can also be related to DabbleDB.
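A minimal sketch, assuming nothing about the real design, of what such an id-based, hackable store could look like:

```python
# A toy model of the idea: records get opaque ids, and any record can point
# at any other via its id, so different kinds of things can be related
# without predefining tables. All names here are illustrative.
import itertools

_ids = itertools.count(1)
records: dict[int, dict] = {}

def put(**fields) -> int:
    rid = next(_ids)
    records[rid] = fields
    return rid

customer = put(type="customer", name="Maria")
sale = put(type="sale", amount=150.0, customer=customer)  # a link is just an id

# "queries" are plain code, like formulas in a hackable Excel:
total = sum(r["amount"] for r in records.values()
            if r.get("type") == "sale" and r.get("customer") == customer)
```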
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Multi-service Graph Reputation protocol
The problem
- Users inside centralized services need to know reputations of other users they're interacting with;
- Building reputation with ratings imposes a big burden on the user and still accomplishes nothing: ratings can be faked, no one cares about them, and so on.
The ideal solution
Subjective reputation: reputation based on how you rated that person previously, and how other people you trust rated that person, and how other people trusted by people you trust rated that person and so on, in a web-of-trust that actually can give you some insight on the trustworthiness of someone you never met or interacted with.
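A hedged sketch of the core computation (my own toy model, not a spec): trust decays as it travels through the graph, so ratings from closer, more-trusted people weigh more.

```python
# Toy web-of-trust: walk the trust graph outward from "you", attenuating each
# person's rating of the target by the product of trust along the path.
# The names and the decay rule are illustrative choices.
trust = {  # how much each person trusts each other person (0..1)
    "you": {"alice": 0.9, "bob": 0.5},
    "alice": {"carol": 0.8},
    "bob": {"carol": 0.2},
}
ratings = {"alice": None, "carol": 1.0, "bob": -0.5}  # ratings of the target (-1..1)

def subjective_score(source: str, target_ratings: dict, max_hops: int = 3) -> float:
    score, weight_sum = 0.0, 0.0
    frontier = [(source, 1.0)]
    seen = {source}
    for _ in range(max_hops):
        next_frontier = []
        for person, weight in frontier:
            for friend, t in trust.get(person, {}).items():
                if friend in seen:
                    continue
                seen.add(friend)
                w = weight * t
                r = target_ratings.get(friend)
                if r is not None:
                    score += w * r
                    weight_sum += w
                next_frontier.append((friend, w))
        frontier = next_frontier
    return score / weight_sum if weight_sum else 0.0

print(subjective_score("you", ratings))  # carol's +1 (via alice) outweighs bob's -0.5
```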
The problem with the ideal solution
- Most of the time the service that wants to implement this is not as big as Facebook, so it won't have enough people in it for such reputation graphs to be constructed.
- It is not trivial to build.
My proposed solution
I've drafted a protocol for an open system based on services publishing their internal reputation records and indexers using these to build graphs, and then serving the graphs back to the services so they can show them to users when it is needed (as HTTP APIs that can be called directly from the user client app or browser).
Crucially, these indexers will gather data from multiple services and cross-link users from these services so the graph is better.
https://github.com/fiatjaf/multi-service-reputation-rfc
The first and only actionable piece of useful feedback I got, from @bootstrapbandit, was that services shouldn't share email addresses in plain text (email addresses and other external relationships users of a service may have are necessary to establish links between users across services), but I think it is ok if services publish hashes of these email addresses instead. At some point I will update the spec draft, and that may have happened before the time you're reading this.
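For illustration, the hashing could work like the sketch below (though note that plain, unsalted hashes of emails can be reversed by dictionary attacks, so this is a sketch of the idea, not a privacy guarantee):

```python
# A minimal sketch: services publish sha256 hashes of normalized email
# addresses instead of the addresses themselves, so indexers can still
# cross-link the same user across services.
import hashlib

def email_fingerprint(email: str) -> str:
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode()).hexdigest()

# two services publishing records for the same person produce the same key
assert email_fingerprint("Bob@Example.com ") == email_fingerprint("bob@example.com")
```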
Another issue is that services may lie about their reputation records and that will hurt other services and users in these other services that are relying on that data. Maybe indexers will have to do some investigative job here to assert service honesty. Or maybe this entire protocol is just failed and we will actually need a system in which users themselves will publish their own records.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Splitpages
The simplest possible service: it split PDF pages in half.
Created specifically to solve the problem of those scanned books that come with two pages side by side as if they were a single page, which makes them much harder to read on a Kindle.
It required me to learn about Heroku Buildpacks though, and fork or contribute to a Heroku Buildpack that embedded a mupdf binary.
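The core operation is simple enough to sketch. Here is a hypothetical reimplementation using PyMuPDF (the original service used a mupdf binary directly, as mentioned above):

```python
# A minimal sketch of the page-splitting idea using PyMuPDF.
import fitz  # pip install pymupdf

def split_pages_in_half(src_path: str, dst_path: str) -> None:
    src = fitz.open(src_path)
    out = fitz.open()  # new, empty PDF
    for page in src:
        r = page.rect
        halves = [
            fitz.Rect(r.x0, r.y0, r.x0 + r.width / 2, r.y1),  # left half
            fitz.Rect(r.x0 + r.width / 2, r.y0, r.x1, r.y1),  # right half
        ]
        for clip in halves:
            new_page = out.new_page(width=clip.width, height=clip.height)
            # render the clipped region of the source page onto the new page
            new_page.show_pdf_page(new_page.rect, src, page.number, clip=clip)
    out.save(dst_path)

split_pages_in_half("scanned-book.pdf", "split.pdf")
```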
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Idea for a centralized legal system, but with a bit of logic
a lawsuit is, essentially -- I imagine, in my lay naïveté -- an appeal made to a judge for him to recognize certain facts as evidence of a certain phenomenon typified by a certain law.
so I imagine the following:
a petition is no longer a huge document written in disgusting language, with references to laws and factual evidence scattered according to the lawyer's essayistic (in)ability, but just a logical scheme -- maybe even a drawn diagram (or, who knows, a series of instructions understandable by a computer?) -- showing the connection between the law, the facts and the requests, for example:
- such-and-such law says that no one may sell cigarettes
- John Doe sold cigarettes
- proof that John Doe sold cigarettes is the photo taken on such-and-such street on such-and-such day showing John Doe selling cigarettes
- the same law demands that John Doe pay a fine
this example is still very wordy, but it's just a simple example. more complicated things would need other forms of expression if we want to avoid the long legal dissertations currently in vogue.
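a sketch of the same petition as data a computer could check (all field names are invented here, just to illustrate the "instructions understandable by a computer" idea above):

```python
# A hypothetical machine-readable petition. A "proto-judge" can check the
# internal logic (claim 4 follows from claims 1 and 2) without ever seeing
# the photo, while other judges check each piece of evidence in isolation.
petition = {
    "claims": [
        {"id": 1, "type": "law", "ref": "law 1234/2020, art. 5",
         "statement": "no one may sell cigarettes"},
        {"id": 2, "type": "fact", "statement": "John Doe sold cigarettes"},
        {"id": 3, "type": "evidence", "supports": 2,
         "attachment": "photo-street-x.jpg"},
        {"id": 4, "type": "request", "based_on": [1, 2],
         "statement": "John Doe must pay the fine prescribed by the law"},
    ]
}
```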
the idea is that the scheme above stands on its own. a proto-judge can judge it valid or invalid by its internal logic alone.
the other part of the judgment would be the connection between this scheme and external reality: the evidence would come attached to the petition. in this case, attached to point 3 would come the photo of John Doe. the text of the law referred to also needs to be attached to point 1, but this can be done automatically from the law's number.
once we have a valid logical scheme, another proto-judge, or several others, can judge each piece of evidence individually: check whether the text of the law matches the interpretation made in point 1, and whether the photo attached to point 3 really is a photo of the defendant selling cigarettes and not of a bear eating oranges.
each of these judgments can be made without the proto-judge knowing anything else about the case: the first proto-judge doesn't need to see the photo or the law, the second doesn't need to see the logical scheme or the photo, the third doesn't need to see the law or the logical scheme, and still at the end we would have a judgment on whether the petition succeeds or not, as impersonal and probably as fair as possible.
the defense would consist in pointing out errors in the logical scheme or flaws in the nexus between reality and the scheme. for example:
- a photo like that is not proof that John Doe sold anything, he could have been just passing by.
- he was in fact just passing by, as proven by this document showing his attendance at a law class at UFMG at the same time.
forgive me if I'm talking nonsense, but it's 5am and I'm still half asleep. obviously there are several problematic points here, and I want to understand them, but the general shape seems quite reasonable to me.
what I described above is a proposal, let's say, for a legal system that differs in nothing from our current legal system, except in form (not in the scholastic sense). it is also an attempt to understand its essence.
the advantages of this format over the current one are many:
- less paper, less stuff to read, no endless repetition of legal citations and extremely long dissertations written by illiterate lawyers who destroy everyone's language and intelligence
- a drastic reduction in the time each judge spends on each case
- a reduction in each judge's power (if every act of human judgment needed in each case can be performed by any judge, without knowledge of the other aspects of the same case, everything is much faster, and each of these judgments can be made by several different judges, chosen at random)
- a reduction in each judge's pomposity: with less power and simpler duties, a judge no longer needs to be a special person earning millions; he can be an ordinary person, a proto-judge, earning less (which would even make it possible to have more of them and increase the reliability of each judgment)
- judges can work from home and at any time
- a digital case system starts to make sense (because it's ridiculous that the current digital system is just a way of passing Word documents back and forth)
- the end of conciliation hearings, a monstrosity created only by the need to reduce the number of pending cases, which ends up stripping justice of its meaning (the parties are gently pressured to ignore the validity or not of their positions and settle, under penalty of the judge getting angry with them later)
thousands of precautions would have to be taken if such a system were ever to be implemented (hahaha) -- perhaps keeping a traditional form of trial, in person and with a judge or jury that knows the whole situation, but only for cases that get past a certain point, and so on.
See also
- P2P reputation thing, for the foundation of an anarchic legal system.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Why I don't like NIP-26 as a solution for key management
NIP-26 was created out of the needs of the Nostr integration at https://minds.com/. They wanted Minds users to be able to associate their "custodial" Nostr key with an external self-owned key. NIP-26 looked like a nice fit for the job, because it would allow supporting clients to associate the two identities statelessly (i.e. by just seeing one event published by Minds but with a delegation tag on it the client would be able to associate that with the self-owned external key without anything else[^1]).
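To make this concrete, here is roughly the shape of such a delegated event (a hedged sketch with made-up values; the exact tag layout and the delegation-token format should be checked against NIP-26 itself):

```python
# Roughly the shape of a NIP-26 delegated event (illustrative values only).
# The "delegation" tag carries the root (self-owned) pubkey, a conditions
# query string, and a signature made by the root key over a token committing
# to the delegatee pubkey and the conditions.
event = {
    "pubkey": "<minds custodial pubkey>",
    "kind": 1,
    "content": "hello from my custodial Minds account",
    "tags": [
        [
            "delegation",
            "<self-owned root pubkey>",
            "kind=1&created_at>1674834236&created_at<1677426236",
            "<sig by the root key over the delegation token>",
        ]
    ],
}
```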
The big selling point of NIP-26 (to me) was that it was fully optional. Clients were free to not implement it and they would not suffer much. They would just see "bob@minds.com" published this, and "bob-self-owned" published that. They would probably know intuitively that these two were the same person, or not, but it wouldn't be an issue. Both would still be identified as Bob and have a picture, a history and so on. Moreover, this wasn't expected to happen a lot, it would be mostly for the small intersection of people that wanted to have their own keys and also happened to be using one of these "custodial Nostr" platforms like Minds.
At some point, though, NIP-26 started to be seen as the solution for key management on Nostr. The idea is that someone will generate a very safe key on a hardware device and guard it as their most precious treasure without it ever touching the internet, and use it just to sign delegation tags. Then use multiple of these delegation tags, one for each different Nostr app, and maybe rotate them every month or so, details are unclear.
This breaks my previous expectations for NIP-26 entirely, as now these keys become faceless entities that can't be associated with anything except their "master" key (the one that is in cold storage). So in a world in which most Nostr users are using NIP-26 for everything, clients that do not implement NIP-26 become completely useless, as all they will see is a constant stream of random keys. They won't be able to follow anyone or interact with anyone, as these keys will not identify any concrete person behind them; they will vanish all the time and new keys will show up, and the world will be chaotic. So now every client must implement NIP-26 to be usable at all; it is not optional anymore.
You may argue that making NIP-26 a de facto mandatory NIP isn't a bad thing and is worth the cost, but I think it breaks a lot of the simplicity of the protocol. It would probably be worth the cost if we knew NIP-26 was an actual complete solution, but it definitely is not, it is partial, and not the most elegant thing in the world. I think key management can be solved in multiple different ways that can all work together or not, but most importantly they can all remain optional.
More thoughts on these multiple ways can be found at Thoughts on Nostr key management.
If I am wrong about all this and we really come to the conclusion that we need a de facto mandatory key delegation method for Nostr, so be it -- but in that case, considering that we will break backwards-compatibility anyway, I think there might be a better design than NIP-26, more optimized and easier to implement, I don't know how exactly. But I really think we shouldn't rush that.
[^1]: as opposed to other suggestions that would also work, but that would require dealing with multiple events -- for example, the external user could publish a new replaceable event, or use `kind:0`, to say they wanted to grandfather the Minds key into their umbrella, while the Minds key would also need to signal its acceptance of that. This also had the problem of requiring changes every time a new replaceable event of such kind was found. Although I am unsure now, at the time William and I agreed this was worse than NIP-26 with the delegation tag.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28LessPass remoteStorage
LessPass is a nice idea: a password manager without any state. Just remember one master password and you can generate a different one for every site using the power of hashes.
But it has a very bad issue: some sites require only numbers, others have minimum or maximum character limits, some require non-letter characters or uppercase characters, others forbid these, and so on.
The solution: to allow you to specify parameters when generating the password so you can fit a generated password on every service.
The problem with the solution: it creates state. Now you must remember what parameters you used when generating a password for each site.
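A minimal sketch of the stateless idea, and of how the parameters become exactly the state you must remember (this is an illustration, not LessPass's actual algorithm, which uses its own PBKDF2-based scheme and character-distribution rules):

```python
# The site-specific password is a pure function of (master, site, parameters):
# nothing needs to be stored -- but the parameters themselves become state.
import hashlib

def derive_password(master: str, site: str, *, length: int = 16,
                    charset: str = "abcdefghijklmnopqrstuvwxyz"
                                   "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789") -> str:
    digest = hashlib.pbkdf2_hmac("sha256", master.encode(), site.encode(), 100_000)
    return "".join(charset[b % len(charset)] for b in digest[:length])

# same inputs, same parameters -> same password, no storage needed
print(derive_password("my-master-password", "example.com"))
# but if example.com requires digits only, you must remember that choice:
print(derive_password("my-master-password", "example.com", charset="0123456789"))
```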
The lesspass-remotestorage add-on was a way to store these parameter settings in a remoteStorage bucket. Since they aren't confidential information in any way, that wasn't a problem, and I thought it was a good fit for remoteStorage.
Some time later I realized it maybe would be better to have a centralized repository hosting all weird requirements for passwords each domain forced on its users, and let LessPass use data from that central place when generating a password. Still stateful, not ideal, not very far from a centralized password manager, but still requiring less trust and less cryptographic assumptions.
- https://github.com/fiatjaf/lesspass-remotestorage
- https://addons.mozilla.org/firefox/addon/lesspass-remotestorage/
- https://chrome.google.com/webstore/detail/lesspass-remotestorage/aogdpopejodechblppdkpiimchbmdcmc
- https://lesspass.alhur.es/
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Classless Templates
There are way too many hours being wasted on making themes for blogs. And then a new blog framework comes along and requires new themes. Old themes can't be used because they relied on different ways of rendering the website. Everything is a mess.
Classless was an attempt at solving it. It probably didn't work because I wasn't the best person to make themes and showcase the thing.
Basically everybody would agree on a simple HTML template that could fit blogs and simple websites very easily. Then other people would make pure-CSS themes expecting that template to be in place.
No classes were needed, only a fixed structure of `header`, `main`, `article` etc. With flexbox and grid, CSS was enough to make this happen.
The templates that were available were all ported by me from other templates I saw on the web, and there was a simple one I created for my old website.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28doulas.club
A full catalog of all Brazilian doulas, with data carefully scraped from many websites that contained partial catalogs, plus some data included manually. All this packaged as a CouchApp and served directly from Cloudant.
This was done because the idea of doulas was good, but I spotted an issue: pregnant women should know many doulas before choosing one that matches well, therefore a full catalog with a lot of information was necessary.
This was a huge amount of work, mostly wasted.
Many doulas who knew about this didn't like it and sent angry and offensive emails telling me to remove them -- which, by the way, is itself information one should have before choosing a doula.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28jiq
When someone created `jid` claiming it had "jq queries" I went to inspect it and realized it didn't: it just had a poor, simple JSON query language that implemented 1% of all `jq` features. So I forked it, plugged `jq` directly into it, and renamed it to `jiq`.
After some comments on issues in the original repository from people complaining about the lack of `jq` compatibility it got a ton of unexpected users, and was even packaged for ArchLinux.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Criteria for activating Drivechain on Bitcoin
Drivechain is, in essence, just a way to give Bitcoin users the option to deposit their coins in a hashrate escrow. If Bitcoin is about coin ownership, in theory there should be no objection from anyone on users having the option to do that: my keys, my coins etc. In other words: even if you think hashrate escrows are a terrible idea and miners will steal all coins from that, you shouldn't care about what other people do with their own money.
There are only two reasonable objections that could be raised by normal Bitcoin users against Drivechain:
- Drivechain adds code complexity to `bitcoind`
- Drivechain perverts miner incentives of the Bitcoin chain
If these two objections can be reasonably answered there remains no reason for not activating the Drivechain soft-fork.
1
To address 1 we can just take a look at the code once it's done (which I haven't) but from my understanding the extra validation steps needed for ensuring hashrate escrows work are very minimal and self-contained, they shouldn't affect anything else and the risks of introducing some catastrophic bug are roughly zero (or the same as the risks of any of the dozens of refactors that happen every week on Bitcoin Core).
For the BMM/BIP-301 part, again the surface is very small, but we arguably do not need that at all, since anyprevout (once it is merged) enables blind merge-mining in a way that is probably better than BIP-301. That soft-fork is also very simple, already loved and accepted by most of the Bitcoin community, implemented and reviewed on Bitcoin Inquisition, and live on the official Bitcoin Core signet.
2
To address 2 we need only point out that BMM ensures Bitcoin miners don't have to do any extra work to earn basically all the fees that would come from the sidechain, as competition for mining sidechain blocks would bid the fee paid to Bitcoin miners up to the maximum economical amount. It is irrelevant whether there is MEV on the sidechain or not: everything that reaches the Bitcoin chain does so in the form of fees paid in a single high-fee transaction paid to any Bitcoin miner, regardless of whether they know about the sidechain or not. Therefore, there is no centralization pressure or perverse mining incentive that can affect Bitcoin land.
Sometimes it's argued that Drivechain may facilitate the occurrence of a transaction paying a fee so high it would create incentives for reorging the Bitcoin chain. There is no reason to believe Drivechain would make this more likely than an actual attack that anyone can already perform today or, as has happened, some rich person typing numbers wrong in his wallet. In fact, if a drivechain is consistently paying high fees on its BMM transactions, that is an incentive for Bitcoin miners to keep mining those transactions one after the other and not harm the users of the sidechain by reorging Bitcoin.
Moreover, there are many factors that exist today that can be seen as centralization vectors for Bitcoin mining: arguably one of them is non-blind merge mining, of which we have a (very convoluted) example on the Stacks shitcoin, and introducing the possibility of blind merge-mining on Bitcoin would basically remove any reasonable argument for having such schemes, therefore reducing the centralizing factor of them.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28"Você só aprendeu mesmo uma coisa quando consegue explicar para os outros"
False. It's true there is a point where you think you know something but can't explain it, but that doesn't necessarily mean you don't know it. Being able to explain doesn't depend on knowing; it depends on verbalizing. We can know many things without being able to verbalize them. In fact, for most human experiences, verbalizing is the hard part. Finally, it's important to say that verbalization is an abstraction, and therefore when someone tries to explain something and forces themselves to make an abstraction, they risk replacing the concrete experience, or even the diffuse knowledge of it, with that abstraction, and thereby becoming dumber -- it seems to me this risk is greater the more premature the attempt at explanation is and the more successful the improvised abstraction turns out to be.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28IPFS problems: Dynamic links
Content-addressability is cool and we all like it, but we all also know we can't live in a world without readable and memorizable names. IPFS has proposed IPNS, the InterPlanetary Name System (the names are very cool indeed), since its beginning (or maybe it came some months after the first IPFS idea, which would indicate this problem arrived as an afterthought).
It has been said IPNS would work in a manner similar to Git heads and branches (this was probably part of Juan Benet's marketing pitches, immensely repeated in the first years, that IPFS was a mix between torrents, Git and some other cool technology). This remains a distant promise, however. IPNS has been, for years, a very slow way of addressing content, unrecommended by its own developers and basically unusable, and it is just a pointer from a public key to an object hash.
Recommendations fall back on using a domain and dnslink, the way to tell IPFS nodes you own a domain that can be used to identify an object hash. That works, but it is not the wonder of decentralization that was promised, and still, it's just a pointer. Any key-value store, database or filesystem can do pointers.
Here I'm not saying, like tons of stupid people on the internet, that IPFS should support dynamic links so we can build web apps on it. No, I would be pretty fine with just static links for static content, and continuing to use the other internet protocols for things that need to be dynamic.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28idea: Rumple
a payments network based on trust channels
This is the description of a Lightning-like network that will work only with credit, or trust-based, channels, and that will exist alongside the normal Lightning Network. I imagine some people will think this is undesirable and at the same time very easy to do (such that if it doesn't exist yet it must be because no one cares), but in fact it is a very desirable thing -- which I hope I can establish below -- and at the same time a very non-trivial problem to solve, as the history of Ryan Fugger's Ripple project and subsequent copies of it shows.
Read these first to get the full context:
- Ryan Fugger's Ripple
- Ripple and the problem of the decentralized commit
- The Lightning Network solves the problem of the decentralized commit
- Parallel Chains
Explanation about the name
Since we're copying the fundamental Ripple idea from Ryan Fugger and since the name "Ripple" is now associated with a scam coin called XRP, and since Ryan Fugger has changed the name of his old website "Ripplepay" to "Rumplepay", we will follow his lead here. If "Ripplepay" was the name of a centralized prototype to the open peer-to-peer network "Ripple", now that the centralized version is called "Rumplepay" the peer-to-peer version must be called "Rumple".
Now the idea
Basically we copy the Lightning Network, but without HTLCs or channels being opened and closed with funds committed to them on multisig Bitcoin transactions published to the blockchain. Instead we use pure trust relationships like the original Ripple concept.
And we use the blockchain commit method, but instead of spending an absurd amount of money to use the actual Bitcoin blockchain we use a parallel chain.
How exactly -- a protocol proposal attempt
It could work like this:
The parallel chain, or "Rumple Chain"
- We define a parallel chain with a genesis block;
- Following blocks must contain:
  a. the ID of the previous block;
  b. a list of up to 32768 entries of arbitrary 32-byte values;
  c. an ID constituted by sha256(the previous block ID + the merkle root of all the entries).
- To be mined, each parallel block must be included in the Bitcoin chain, as explained above.
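A minimal sketch of that block-ID rule (for brevity the "merkle root" here is a hypothetical stand-in, a plain hash of the concatenated entries, not a real merkle tree):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(entries: list[bytes]) -> bytes:
    return sha256(b"".join(entries))  # stand-in; a real merkle tree would go here

def block_id(previous_id: bytes, entries: list[bytes]) -> bytes:
    # each entry is an arbitrary 32-byte value, at most 32768 per block
    assert len(entries) <= 32768 and all(len(e) == 32 for e in entries)
    return sha256(previous_id + merkle_root(entries))

genesis = b"\x00" * 32
b1 = block_id(genesis, [b"\x11" * 32, b"\x22" * 32])
b2 = block_id(b1, [b"\x33" * 32])  # each block commits to the previous one
```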
Now that we have a structure for a simple "blockchain" that is completely useless, just blocks over blocks of meaningless values, we proceed to the next step of assigning meaning to these values.
The off-chain payments network, or "Rumple Network"
- We create a network of nodes that can talk to each other via TCP messages (all details are the same as the Lightning Network, except where mentioned otherwise);
- These nodes can create trust channels to each other. These channels are backed by nothing except the willingness of one peer to pay the other what is owed.
- When Alice creates a trust channel with Bob (`Alice trusts Bob`), contrary to what happens in the Lightning Network, it's Alice that can immediately receive payments through that channel, and everything Alice receives will be an IOU from Bob to Alice. So Alice should never open a channel to Bob unless Alice trusts Bob. But Alice can also choose the amount of trust she has in Bob: she can, for example, open a very small channel with Bob, which means she will only lose a few satoshis if Bob decides to exit-scam her. (In the original Ripple examples these channels were always depicted as friend relationships, and they can continue being that, but it's expected -- given the experience of the Lightning Network -- that the bulk of the channels will exist between users and wallet-provider nodes that will act as hubs.)
- As Alice receives a payment through her channel with Bob, she becomes a creditor and Bob a debtor, i.e., the balance of the channel moves a little to her side. Now she can use these funds to make payments over that channel (or make a payment that combines funds from multiple channels using MPP).
- If at any time Alice decides to close her channel with Bob, she can send all the funds she has standing there to somewhere else (for example, another channel she has with someone else, another wallet somewhere else, a shop that is selling some good or service, or a service that will aggregate all funds from all her channels and send a transaction to the Bitcoin chain on her behalf).
- If at any time Bob leaves the network Alice is entitled by Bob's cryptographic signatures to knock on his door and demand payment, or go to a judge and ask him to force Bob to pay, or share the signatures and commitments online and hurt Bob's reputation with the rest of the network (but yes, none of these things is good enough and if Bob is a very dishonest person none of these things is likely to save Alice's funds).
The payment flow
- Suppose there exists a route `Alice->Bob->Carol` and Alice wants to send a payment to Carol.
- First Alice reads an invoice she received from Carol. The invoice (which can be pretty similar to, or maybe even the same as, BOLT11) contains a payment hash `h` and information about how to reach Carol's node, optionally an amount. Let's say it's 100 satoshis.
- Using the routing information she gathered, Alice builds an onion and sends it to Bob, at the same time offering Bob a "conditional IOU": a signed commitment that Alice will owe Bob 100 satoshis if in the next 50 blocks of the Rumple Chain there appears a block containing the preimage `p` such that `sha256(p) == h`.
- Bob peels the onion and discovers that he must forward that payment to Carol, so he forwards the peeled onion and offers a conditional IOU to Carol with the same `h`. Bob doesn't know Carol is the final recipient of the payment; it could potentially go on and on.
- When Carol gets the conditional IOU from Bob, she makes a list of all the nodes that have announced themselves as miners (which is not something I have mentioned before, but nodes acting as miners must announce themselves somehow) and are online and bidding for the next Rumple block. Each of these miners will have previously published a random 32-byte value `v` they intend to include in their next block.
- Carol sends payments through routes to all (or a big number) of these miners, but this time the conditional IOU contains two conditions (values that must appear in a block for the IOU to be valid): `p` such that `sha256(p) == h` (the same that featured in the invoice) and `v` (which must be unique and constant for each miner, something that is easily verifiable by Carol beforehand). Also, instead of these conditions being valid for the next 50 blocks, they are valid only for the single next block.
- Now Carol broadcasts `p` to the mempool and hopes one of the miners to which she sent conditional payments sees it and, allured by the possibility of cashing in Carol's payment, includes `p` in the next block. If that does not happen, Carol can try again in the next block.
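As a hedged sketch (my own toy model, not part of the proposal itself), this is how checking whether one of these conditional IOUs became a real debt could look once a Rumple block is seen:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def iou_is_valid(h: bytes, v: bytes, block_entries: list[bytes]) -> bool:
    # condition 1: some entry in the block is a preimage p with sha256(p) == h
    has_preimage = any(sha256(e) == h for e in block_entries)
    # condition 2: the miner's pre-announced value v appears literally
    has_miner_value = v in block_entries
    return has_preimage and has_miner_value

p = b"\x11" * 32          # Carol's secret preimage
h = sha256(p)             # the payment hash from the invoice
v = b"\x22" * 32          # a miner's pre-announced random value

# the miner saw p in the mempool and included both values in its block
assert iou_is_valid(h, v, block_entries=[p, v])
```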
Why bother with this at all?
-
The biggest advantage of Lightning is its openness
It has been said multiple times that if trust is involved then we don't need Lightning, we can use Coinbase, or worse, PayPal. This is very wrong. Lightning is good especially because it serves as a bridge between Coinbase, PayPal, other custodial providers and someone running their own node. All of these can transact freely across the network and pay each other without worrying about who is in which provider or setup.
Rumple inherits that openness. In a Rumple Network anyone is free to open new trust channels and immediately route payments to anyone else.
Also, since Rumple payments are also based on the reveal of a preimage it can do swaps with Lightning inside a payment route from day one (by which I mean one can pay from Rumple to Lightning and vice-versa).
-
Rumple fixes Lightning's fragility
Lightning is too fragile.
It's known that Lightning is vulnerable to multiple attacks -- like the flood-and-loot attack, for example, which is not easy to execute but is dangerous even when it fails. Given the existence of these attacks, it's important never to open channels with random anonymous people. Some degree of trust must exist between peers.
But one does not even have to consider attacks. Creating HTLCs is a liability every node incurs multiple times during its life. Every initiated, received or forwarded payment requires adding one HTLC to the commitment transaction and then removing it.
Another issue that makes trust between peers necessary is the fact that channels can be closed unilaterally. Although this is a feature, it is also a bug in high-fee environments. Imagine you pay $2 in fees to open a channel; your peer may close it unilaterally the next second, and then you have to pay another $15 to close the channel -- the opener pays (this is another feature that can double as a bug). Even if it's not you opening the channel, a peer can open a channel with you, make a payment, then close the channel, and now you're left with, say, an output of 800 satoshis, which is equal to zero if network fees are high.
So you should only open channels with people you know who aren't going to actively try to hack you, and who are not going to close channels and impose unnecessary costs on you. But even considering a fully trusted Lightning Network -- even if, to be extreme, you only opened channels with yourself -- these channels would still be fragile. If some HTLC gets stuck for any reason (a peer offline, or some weird small incompatibility between node software) and you're forced to close the channel because of that, there are the extra costs of sweeping these UTXOs plus the total costs of closing and reopening a channel that shouldn't have been closed in the first place. Even if HTLCs don't get stuck, a fee-renegotiation event during a mempool spike may cause channels to force-close, become valueless, or settle with a very high closing fee.
Some of these issues are mitigated by Eltoo, others by only having channels with people you trust. The others referenced above, plus the griefing attack and, in general, the ability of anyone to spam the network for free with payments that can stay pending forever or fail repeatedly, make it very fragile.
Rumple solves most of these problems by not having to touch the blockchain at all. Fee negotiation makes no sense. Opening and closing channels is free. Flood-and-loot is a non-issue. The griefing attack can be still attempted as funds in trust channels must be reserved like on Lightning, but since there should be no theoretical limit to the number of prepared payments a channel can have, the griefing must rely on actual amounts being committed, which prevents large attacks from being performed easily.
-
Rumple fixes Lightning's unsolvable reputation issues
In the Lightning Conference 2019, Rusty Russell promised there would be pre-payments on Lightning someday, since everybody was aware of potential spam issues and pre-payments would be the way to solve that. Fast-forward to November 2020 and these pre-payments have become an apparently unsolvable problem[^thread-402]: no one knows how to implement them reliably without destroying privacy completely or introducing worse problems.
Replacing these payments with tables of reputation between peers is also an unsolved problem[^reputation-lightning], for the same reasons explained in the thread above.
-
Rumple solves the hot wallet problem
Since you don't have to use Bitcoin keys or sign transactions with a Rumple node, only your channel trust is at risk at any time.
-
Rumple ends custodianship
Since no one is storing other people's funds, a big hub or wallet provider can be used in multiple payment routes, but it cannot be immediately classified as a "custodian". At best, it will be a big debtor.
-
Rumple is fun
Opening channels with strangers is boring. Opening channels with friends and people you trust even a little makes that relationship grow stronger and the trust be reinforced. (But of course, like it happens in the Lightning Network today, if Rumple is successful the bulk of trust will be from isolated users to big reliable hubs.)
Questions or potential issues
-
So many advantages, yes, but trusted? Custodial? That's easy and stupid!
Well, an enormous part of the current Lightning Network (and also onchain Bitcoin wallets) already rests on trust, mainly trust between users and custodial wallet providers like ZEBEDEE, Alby, Wallet-of-Satoshi and others. Worse: on the current Lightning Network users not only trust, they also expose their entire transaction history to these providers[^hosted-channels].
Besides that, as detailed in point 3 of the previous section, there are many unsolvable issues on the Lightning protocol that make each sovereign node dependent on some level of trust in its peers (and the network in general dependent on trusting that no one else will spam it to death).
So, given the current state of the Lightning Network, trusting peers the way Rumple requires is not a giant change -- but it is still a significant change: in Rumple you shouldn't open a large trust channel with someone just because they look trustworthy; you must personally know that person and only put in what you're willing to lose. In known brands that have a reputation to lose you can probably deposit more trust, same for long-term friends, and that's all. Still, that is probably good enough, given the existence of MPP payments and the fact that the purpose of Rumple is to be a payments network for day-to-day purchases, not a way to buy real estate.
-
Why would anyone run a node in this parallel chain?
I don't know. Ideally every server running a Rumple Network node will be running a Bitcoin node and a Rumple chain node. Besides using it to confirm and publish your own Rumple Network transactions it can be set to do BMM mining automatically and maybe earn some small fees comparable to running a Lightning routing node or a JoinMarket yield generator.
Also it will probably be very lightweight, as pruning is completely free and no verification-since-the-genesis-block will take place.
-
What is the maturity of the debt that exists in the Rumple Network or its legal status?
By default it is to be understood as being payable on demand for payments occurring inside the network (as credit can be used to forward or initiate payments by the creditor using that channel). But details of settlement outside the network or what happens if one of the peers disappears cannot be enforced or specified by the network.
Perhaps some standard optional settlement methods (like a Bitcoin address) can be announced and negotiated upon channel creation inside the protocol, but nothing more than that.
[^thread-402]: Read at least the first 10 messages of the thread to see how naïve proposals like the ones you and I could have thought of are brought up and then dismantled very carefully by the group of people most committed to getting Lightning to work properly.
[^reputation-lightning]: See also the footnote at Ripple and the problem of the decentralized commit.
[^hosted-channels]: Although that second part can be solved by hosted channels.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Motte-and-bailey
there is an article here, written by a guy who is probably a leftist, an atheist and so on, that touches on an important point about the discourse of these minority-defending leftists.
he briefly introduces the concept of the motte-and-bailey doctrine, which is a cool name given to the strategy that those leftist acquaintances and friends of yours use to say the most absurd things on the internet and in closed leftist groups while, when confronted by you, seeming meek, intelligent and decent, just as you always hoped they were.
the guy is a bit confused, but the very fact that he is a leftist validates the argument even more.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Token-Curated Registries
So you want to build a TCR?
TCRs (Token Curated Registries) are a construct for maintaining registries on Ethereum. Imagine you have lots of scissor brands and you want a list of only the good scissors. You want to make sure only the good scissors make it into that list and the bad ones stay out. For that, people will tell you, you can just create a TCR of the best scissors!
It works like this: some people hold the token, let's call it Scissor Token. Some other person, let's say a scissor manufacturer, wants to put his scissors on the list, so he must acquire some Scissor Tokens and "stake" them. Holders of Scissor Tokens are allowed to vote "yes" or "no". If "no", the manufacturer loses his tokens to the holders; if "yes", his tokens are kept in deposit and his scissor brand gets accepted into the registry.
Such a simple process, they say, has strong incentives for being the best possible way of curating a registry of scissors: consumers have an incentive to consult the list because of its high quality; manufacturers have an incentive to buy tokens and apply to join the list because the list is so well curated and consumers always consult it; token holders want the registry to accept good and reject bad scissors because good decisions will make the list valuable to consumers and thus their tokens more valuable, while bad decisions will do the contrary. It doesn't make sense to reject everybody just to grab their tokens, because that would create an incentive against people trying to enter the list.
Amazing! How come such a simple system of voting has such enormous features? Now we can have well-curated lists of everything, and for that we just need Ethereum tokens!
Now let's imagine a different proposal, of my own creation: SPCR, Single-person curated registries.
Single-person Curated Registries are just like TCRs, except they don't use Ethereum tokens: it's just a list in a text file kept by a single person. People can apply to join, and they will have to give the single person some amount of money; the single person can reject or accept the proposal, and so on.
Now let's look at the incentives of SPCR: people will want to consult the registry because it is so well curated; vendors will want to enter the registry because people are consulting it; the single person will want to accept the good and reject the bad applicants because these good decisions are what will make the list valuable.
Amazing! How come such a simple proposal has such enormous features! SPCRs are going to take over the internet!
What TCR enthusiasts get wrong?
TCR people think they can just list a set of incentives for something to work and assume that something will work. Mix that with Ethereum hype and they think they've found something unique and revolutionary, while in fact they're just making a poor implementation of "democracy" systems that fail almost everywhere.
Life is not about listing a set of "incentives" and then considering the problems solved. Almost everybody on Earth has an incentive to get rich: being rich has a lot of advantages over being poor; however, not everyone gets rich! Why are the incentives failing?
Curating lists is a hard problem. It involves a lot of knowledge about the problem that just holding a token won't give you; it involves personal preferences and politics; it involves knowing where the real limit between "good" and "bad" lies. The single-person list may have a good result if the person doing the curation is knowledgeable and honest (yes, you can game the system to accept your uncle's scissors and not a much better competitor, for example, without losing the entire list's reputation); the same goes for TCRs. But both can also fail miserably, or appear to be good while in fact not being so good. In all cases, the list entries will reflect the preferences of the people choosing and other things that aren't taken into account in the incentive equations of TCR enthusiasts.
We don't need lists
The most important point to be made, although unrelated to the incentives story, is that we don't need lists. Imagine you're looking for a scissor. You don't want someone to tell you whether scissor A or B is "good" or "bad", or whether A is "better" than B. You want to know if, for your specific situation, or for a class of situations, A will serve you well, taking into account A's price and whether A is being sold near you and all that.
Scissors are the worst example ever to make this point, but I hope you get it. If you don't, try imagining the same example with schools, doctors, plumbers, food, whatever.
Recommendation systems are badly needed in our world, and TCRs don't solve these at all.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28idea: "numbeo" with satoshis
This site has a crowdsourced database of the cost of living in many countries and cities: https://www.numbeo.com/cost-of-living/ -- and it sells the data that people contribute freely. That's wrong!
It could be a fruitful idea to pay satoshis for people to provide data.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Lightning and its fake HTLCs
Lightning is terrible but can be very good with two tweaks.
How Lightning would work without HTLCs
In a world in which HTLCs didn't exist, Lightning channels would consist only of balances. Each commitment transaction would have two outputs: one for peer `A`, the other for peer `B`, according to the current state of the channel.
When a payment was being attempted to go through the channel, peers would just trust each other to update the state when necessary. For example:
- Channel `AB`'s balances are `A[10:10]B` (in sats);
- `A` sends a 3sat payment through `B` to `C`;
- `A` asks `B` to route the payment. Channel `AB` doesn't change at all;
- `B` sends the payment to `C`, `C` accepts it;
- Channel `BC` changes from `B[20:5]C` to `B[17:8]C`;
- `B` notifies `A` the payment was successful, `A` acknowledges that;
- Channel `AB` changes from `A[10:10]B` to `A[7:13]B`.
This is the case of success: everything is fine, no glitches, no dishonesty.
But notice that `A` could have refused to acknowledge that the payment went through, either because of a bug, or because it went offline forever, or because it is malicious. Then the channel `AB` would stay as `A[10:10]B` and `B` would have lost 3 satoshis.
How Lightning would work with HTLCs
HTLCs are introduced to remedy that situation. Instead of commitment transactions always having only two outputs, one for each peer, now they can have HTLC outputs too. These HTLC outputs can go to either side depending on the circumstance.
Specifically, the peer that is sending the payment can redeem the HTLC after a number of blocks have passed. The peer that is receiving the payment can redeem the HTLC if they are able to provide the preimage to the hash specified in the HTLC.
Now the flow is something like this:
- Channel `AB`'s balances are `A[10:10]B`;
- `A` sends a 3sat payment through `B` to `C`:
- `A` asks `B` to route the payment. Their channel changes to `A[7:3:10]B` (the middle number is the HTLC);
- `B` offers a payment to `C`. Their channel changes from `B[20:5]C` to `B[17:3:5]C`;
- `C` tells `B` the preimage for that HTLC. Their channel changes from `B[17:3:5]C` to `B[17:8]C`;
- `B` tells `A` the preimage for that HTLC. Their channel changes from `A[7:3:10]B` to `A[7:13]B`.
Now if `A` wants to trick `B` and stops responding, `B` doesn't lose money, because `B` knows the preimage: `B` just needs to publish the commitment transaction `A[7:3:10]B`, which gives him 10sat, and then redeem the HTLC using the preimage he got from `C`, which gives him 3 sats more. `B` is fine now.
In the same way, if `B` stops responding for any reason, `A` won't lose the money it put in that HTLC: it can publish the commitment transaction, get 7 back, then redeem the HTLC after a certain number of blocks have passed and get the other 3 sats back.
How Lightning doesn't really work
The example above of how HTLCs work is very elegant, but it has a fatal flaw: transaction fees. Each new HTLC added increases the size of the commitment transaction, and each requires yet another transaction to be redeemed. If we consider fees of 10000 satoshis, that means any HTLC below that amount is as if it didn't exist, because we can't ever redeem it anyway. In fact the Lightning protocol explicitly dictates that if HTLC output amounts are below the fee necessary to redeem them they shouldn't be created.
What happens in these cases then? Nothing, the amounts that should be in HTLCs are moved to the commitment transaction miner fee instead.
So, considering a transaction fee of 10000sat for these HTLCs, anyone sending Lightning payments below 10000sat is operating under the unsafe protocol described in the first section above.
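A minimal sketch of that rule (an illustration of the behavior described above, not the BOLT algorithm itself):

```python
# An HTLC below the fee needed to redeem it is not created at all; its
# amount just goes to the commitment transaction's miner fee instead.
REDEEM_FEE_SAT = 10_000  # assumed on-chain fee to redeem one HTLC

def add_payment(htlcs: list[int], miner_fee: int, amount_sat: int):
    if amount_sat < REDEEM_FEE_SAT:
        # "fake" HTLC: the payment exists only as trust between the peers
        return htlcs, miner_fee + amount_sat
    # real HTLC: enforceable on-chain
    return htlcs + [amount_sat], miner_fee

htlcs, fee = add_payment([], miner_fee=0, amount_sat=3)       # -> ([], 3)
htlcs, fee = add_payment(htlcs, fee, amount_sat=50_000)       # -> ([50000], 3)
```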
It is actually worse, because consider what happens when a channel in the middle of a route has a glitch or one of the peers is unresponsive. The other node, thinking they are operating in the trustless protocol, will proceed to publish the commitment transaction, i.e. close the channel, so they can redeem the HTLC -- only then do they find out they are actually in the unsafe-protocol realm: there is no HTLC to be redeemed at all, and they lose not only the money but also the channel (which cost a lot of money to open and close, in overall transaction fees).
One of the biggest features of the trustless protocol is the payment proof. Every payment is identified by a hash, and whenever the payee releases the preimage relative to that hash it means the payment is complete. The incentives are in place for all nodes in the path to pass the preimage back until it reaches the payer, who can then use it as proof that he has sent the payment and the payee has received it. This feature is also lost in the unsafe protocol: if a glitch happens or someone goes offline on the preimage's way back, there is no way the preimage will reach the payer, because no HTLCs are published and redeemed on the chain. The payee may have received the money, but the payer will not know -- yet the payer will have lost the money sent anyway.
The end of HTLCs
So considering the points above you may be sad because in some cases Lightning doesn't use these magic HTLCs that give meaning to it all. But the fact is that no matter what anyone thinks, HTLCs are destined to be used less and less as time passes.
Since over time Bitcoin transaction fees tend to rise, and since multipart payments (MPP) are increasingly being used on Lightning for good reasons, we can expect that soon no HTLC will ever be big enough to be actually worth redeeming, and we will be at a point in which not a single HTLC is real: they will all be fake.
Another thing to note is that the current unsafe protocol kicks in whenever the HTLC amount is below what the Bitcoin transaction fee to redeem it would be, but this is not a reasonable threshold. It is not reasonable to lose a channel and then pay 10000sat in fees to redeem a 10001sat HTLC. At which point does it become reasonable? Probably at an amount many times above that, so it would be reasonable to raise even further the threshold above which real HTLCs are made -- thus making their existence rarer and rarer.
These are good things, because we don't actually need HTLCs to make a functional Lightning Network.
We must embrace the unsafe protocol and make it better
So the unsafe protocol is not necessarily very bad, but the way it is being done now is, because it suffers from two big problems:
- Channels are lost all the time for no reason;
- No guarantees of the proof-of-payment ever reaching the payer exist.
The first problem we fix by just stopping the current practice of closing channels when there are no real HTLCs in them.
That, however, creates a new problem -- or actually exacerbates the second one: now that we're not closing channels, what do we do with the expired payments in them? These payments should have been either canceled or fulfilled before some block x; now we're at block x+1, our peer has returned from its offline period, and one of us will have to eat the loss from that payment.
That's fine because it's only 3sat and it's better to just lose 3sat than to lose both the 3sat and the channel anyway, so either one would be happy to eat the loss. Maybe we'll even split it 50/50! No, that doesn't work, because it creates an attack vector with peers becoming unresponsive on purpose on one side of the route and actually failing/fulfilling the payment on the other side and making a profit with that.
So we actually need to know who is to blame for these payments, even if we are not going to act on that immediately: we need some kind of arbiter that both peers can trust, such that if one peer is trying to send the preimage or the cancellation to the other and the other is unresponsive, when the unresponsive peer comes back the arbiter can tell them they are to blame, so they can willfully eat the loss and the channel can continue. Both peers are happy this way.
If the unresponsive peer doesn't accept what the arbiter says then the peer that was operating correctly can assume the unresponsive peer is malicious and close the channel, and then blacklist it and never again open a channel with a peer they know is malicious.
Again, the differences between this scheme and the current Lightning Network are that:
a. In the current Lightning we always close channels; in this scheme we only close channels in case someone is malicious or in other worst-case scenarios (the arbiter is unresponsive, for example).
b. In the current Lightning we close channels without having any clue about who is to blame, and then we just proceed to reopen a channel with that same peer, even in the case they were actively trying to harm us before.
What is missing? An arbiter.
The Bitcoin blockchain is the ideal arbiter, it works in the best possible way if we follow the trustless protocol, but as we've seen we can't use the Bitcoin blockchain because it is expensive.
Therefore we need a new arbiter. That is the hard part, but not unsolvable. Notice that we don't need an absolutely perfect arbiter, anything is better than nothing, really, even an unreliable arbiter that is offline half of the day is better than what we have today, or an arbiter that lies, an arbiter that charges some satoshis for each resolution, anything.
Here are some suggestions:
- random nodes from the network selected by an algorithm that both peers agree to, so they can't cheat by selecting themselves. The only thing these nodes have to do is to store data from one peer, try to retransmit it to the other peer and record the results for some time.
- a set of nodes preselected by the two peers when the channel is being opened -- same as above, but with more handpicked-trust involved.
- some third-party cloud storage or notification provider with guarantees of having open data in it and some public log-keeping, like Twitter, GitHub or a Nostr relay;
- peers that get paid to do the job, selected by the fact that they own some token (I know this is stepping too close to shitcoin territory, but it could be an idea) issued in a Spacechain;
- a Spacechain itself, serving only as the storage for a bunch of `OP_RETURN`s that are published and tracked by these Lightning peers whenever there is an issue (this looks wrong, but could work).
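One way to do the random selection from the first suggestion without letting either peer cheat is to derive the arbiter set deterministically from data neither peer controls alone -- for example, the channel ID hashed together with a future block hash. A rough sketch in Go (all parameters invented):

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// pickArbiters deterministically draws n arbiters from a candidate list,
// seeded by the channel ID plus a block hash neither peer can predict at
// channel-open time, so neither side can bias the selection.
func pickArbiters(channelID, blockHash []byte, candidates []string, n int) []string {
	seed := sha256.Sum256(append(channelID, blockHash...))
	chosen := make([]string, 0, n)
	for i := 0; len(chosen) < n && len(candidates) > 0; i++ {
		h := sha256.Sum256(append(seed[:], byte(i)))
		idx := int(binary.BigEndian.Uint64(h[:8]) % uint64(len(candidates)))
		chosen = append(chosen, candidates[idx])
		candidates = append(candidates[:idx], candidates[idx+1:]...) // no repeats
	}
	return chosen
}

func main() {
	nodes := []string{"node-a", "node-b", "node-c", "node-d", "node-e"}
	fmt.Println(pickArbiters([]byte("channel-id"), []byte("future-block-hash"), nodes, 3))
}
```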
Key points
- Lightning with HTLC-based routing was a cool idea, but it wasn't ever really feasible.
- HTLCs are going to be abandoned and that's the natural course of things.
- It is actually good that HTLCs are being abandoned, but
- We must change the protocol to account for the existence of fake HTLCs and thus make the bulk of the Lightning Network usage viable again.
See also
- Channel
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28IPFS problems: Conceit
IPFS is trying to do many things. The IPFS leaders are revolutionaries who think they're smarter than the rest of the entire industry.
The fact that they've first proposed a protocol for peer-to-peer distribution of immutable, content-addressed objects, then later tried to fix that same problem using their own half-baked solution (IPNS) is one example.
Other examples are their odd appeal to decentralization in a very non-specific way, their excessive flirtation with Ethereum and their never-to-be-finished can-never-work-as-advertised Filecoin project.
They could have focused on just making the infrastructure for distribution of objects through hashes (not saying this would actually be a good idea, but it had some potential) over a peer-to-peer network, but in trying to reinvent the entire internet they screwed everything up.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Veterano não é dono de bixete
"VETERANO NÃO É DONO DE BIXETE". A frase em letras garrafais chama a atenção dos transeuntes neófitos. Paira sobre um cartaz amarelo que lista várias reclamações contra os "trotes machistas", que, na opinião do responsável pelo cartaz, "não é brincadeira, é opressão".
Eis aí um bizarro exemplo de como são as coisas: primeiro todos os universitários aprovam a idéia do trote, apoiam sua realização e até mesmo desejam sofrer o trote -- com a condição de o poderem aplicar eles mesmos depois --, louvam as maravilhas do mundo universitário, onde a suprema sabedoria se esconde atrás de rituais iniciáticos fora do alcance da imaginação do homem comum e rude, do pobre e do filhinho-de-papai das faculdades privadas; em suma: fomentam os mais baixos, os mais animalescos instintos, a crueldade primordial, destroem em si mesmos e nos colegas quaisquer valores civilizatórios que tivessem sobrado ali, ficando todos indistingüíveis de macacos agressivos e tarados.
Depois vêm aí com um cartaz protestar contra os assédios -- que sem dúvida acontecem em larguíssima escala -- sofridos pelas calouras de 17 anos e que, sendo também novatas no mundo universitário, ainda conservam um pouco de discernimento e pudor.
A incompreensão do fenômeno, porém, é tão grande, que os trotes não são identificados como um problema mental, uma doença que deve ser tratada e eliminada, mas como um sintoma da opressão machista dos homens às mulheres, um produto desta civilização paternalista que, desde que Deus é chamado "o Pai" e não "a Mãe", corrompe a benéfica, pura e angélica natureza do homem primitivo e o torna esta tão torpe criatura.
Na opinião dos autores desse cartaz é preciso, pois, continuar a destruir o que resta da cultura ocidental, e então esperar que haja trotes menos opressores.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28There's a problem with using Git concepts for everything
We've been seeing a surge in applications that use Git to store things other than code, or that are based on Git concepts and so enable "forking, merging and distributed collaboration" for things like blogs, recipes, literature, music composition, normal files in a filesystem, databases.
The problem with all this is they will either:
- assume the user will commit manually and expect that commit to be composed of a set of meaningful changes, with the committer also adding a message to the commit describing that set of meaningful, related changes; or
- try to make the committing process automatic and hide it from the user, and so will produce meaningless commits based on random changes across many different files (it's not "files" if we are talking about a recipe or rows in a table, but let's say "files" for the sake of clarity) that will probably not be related and not reducible to a meaningful commit message; or maybe the commit will contain only the changes to a single file, and its commit message will be equivalent to "updated `<name of the file>`".
Programmers, when using Git, think in Git, i.e., they work with version control in their minds. They try hard to commit together only sets of meaningful and related changes, even when they happen to make unrelated changes in the meantime, and that's why commands like `git add -p` and many others exist.

Normal people, for whom many of these Git-based tools are intended (and even programmers when out of their code world), are much less prone to think in Git, and that's why another kind of abstraction for fork-merge-collaborate in non-code environments must be used.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Custom spreadsheets
The idea was to use it to make an app that would serve as custom database for everything and interact with the spreadsheet so people could play and calculate with their values after they were created by the custom app, something like an MS Access integrated with Excel?
My first attempt that worked (I believe there was an attempt before, but I have probably deleted it from everywhere) was this `react-microspreadsheet` thing (at the time called `react-spreadsheet`, before I donated the npm name to someone who asked):

This was a very good spreadsheet component that did many things current "react spreadsheet" components out there don't do. It had formulas; support for that handle thing you pull with the mouse so it autofills cells with a pattern; keyboard navigation with Ctrl, Shift and Ctrl+Shift; and that thing through which you copy-paste formulas and they change their parameters depending on where you paste them (implemented in a very poor manner because I was using and thinking about Excel in baby mode at the time).
Then I tried to make it into "a small sheet you can share" kind of app through assemblymade.com, and eventually as I tried to add more things bugs began to appear.
Then there was `cycle6-spreadsheet`:

If I remember well this was very similar to the other one, although made almost 2 years later. Despite having the same initial goal as the other (the multi-app custom database thing), it only yielded:
- Sidesheet, a Chrome extension that opened a spreadsheet on the side of the screen that you could use to make calculations and so on. It worked, but had too many bugs that probably caused me to give up entirely.
I'm not sure which of the two spreadsheets above powers http://sheets.alhur.es.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28notes on "Economic Action Beyond the Extent of the Market", Per Bylund
Source: https://www.youtube.com/watch?v=7St6pCipCB0
Markets work by dividing labour, but that's not as easy as it seems in Adam Smith's example of a pin factory, because
- a pin factory is not a market, so there is some guidance and orientation, some sort of central planning, inside there that a market doesn't have;
- it is not clear how exactly the production process will be divided, it is not obvious as in "you cut the thread, I plug the head".
Dividing the labour may produce efficiency, but it also makes each independent worker in the process more fragile, as they become dependent on the others.
This is partially solved by having a lot of different workers, so you do not depend on only one.
If you have many, however, they must agree on where one part of the production process starts and where it ends, otherwise one's outputs will not necessarily coincide with another's inputs, and everything is more-or-less broken.
That means some level of standardization is needed. And indeed the market has constant incentives to standardization.
The statist economist discourse about standardization is that economic development can only flourish when the government comes in with a law that creates some sort of standardization, but in fact the market creates standardization all the time. Some examples of standardization include:
- programming languages, operating systems, internet protocols, CPU architectures;
- plates, forks, knives, glasses, tables, chairs, beds, mattresses, bathrooms;
- building with concrete, brick and mortar;
- money;
- musical instruments;
- light bulbs;
- CD, DVD, VHS formats and others alike;
- services that go into every production process, like lunch services, restaurants, bakeries, cleaning services, security services, secretaries, attendants, porters;
- multipurpose steel bars;
- practically any tool that normal people use and that requires a little experience to get going, like a drilling machine or a sanding machine; etc.
Of course you will not find standardization everywhere. Especially when the market is smaller or newer, standardization may not have arrived yet.
There remains the truth, however, that division of labour has the potential of doing good.
More than that: every time there is more than one worker doing the same job at the same place in a division-of-labour chain, there's an incentive to create a new subdivision of labour.
From the fact that in our society there is more than one person doing the same job as another, we must conclude that the insight about an efficient way to divide the labour between these workers (and probably its actual implementation) hasn't yet happened for all kinds of jobs.
But to come up with division of labour outside of a factory, some market actors must devise a way of dividing the labour, actually determining where one job will stop and another start (and that almost always needs some adjustments, and in fact extra labour, to tie up the loose ends), and these actors must also bear the uncertainty and fragility that division of labour brings when there aren't yet a lot of different workers, standardization and all that.
In fact, when an entrepreneur comes to the market with a radical new service, a service that does not fit the current standard of division of labour, he must explain to his potential buyers what the service is, how they can benefit from it, and what they will have to do to adapt their current production process to work with that new service. That has happened not long ago with
- services that take food orders from the internet and relay these to the restaurants;
- hostels for cheap accommodation for young travellers;
- Uber, Airbnb, services that take orders and bring homemade food from homes to consumers, and similar services;
- all kinds of software-as-a-service;
- electronic monitoring service for power generators;
- mining planning and mining planning software; and many other industry-specific services.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28P2P reputation thing
Each node shares a blob of the reputations they have, which includes a confidence number. The number comes from the fact that reputations are inherited from other nodes they trust and averaged by their confidence in these. Everything is mixed for plausible deniability. By default a node only shares their stuff with people they manually add, to prevent governments from crawling everybody's database. Also, to each added friend, nodes share a different identity/pubkey (like giving a new Bitcoin address for every transaction), derived from BIP32 -- and since each identity can only be contacted by one other entity, the node filters incoming connections to download their database: "has this identity been used already? no; yes, with which peer?".
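A sketch of the inheritance and averaging in Go (the score and confidence scales are invented for illustration):

```go
package main

import "fmt"

// Opinion is a reputation score reported by a node we trust, together
// with that node's own confidence in it.
type Opinion struct {
	Score      float64 // -1 (bad actor) to 1 (good actor)
	Confidence float64 // 0 to 1
}

// inherit mixes the opinions of our trusted peers into a local score,
// weighting each by how much we trust the peer times the peer's own
// confidence, and derives our confidence from the total weight.
func inherit(opinions map[string]Opinion, trust map[string]float64) (score, confidence float64) {
	var total float64
	for peer, op := range opinions {
		w := trust[peer] * op.Confidence
		score += w * op.Score
		total += w
	}
	if total == 0 {
		return 0, 0
	}
	return score / total, total / float64(len(opinions))
}

func main() {
	opinions := map[string]Opinion{
		"friend1": {Score: 0.9, Confidence: 0.8},
		"friend2": {Score: -0.2, Confidence: 0.5},
	}
	trust := map[string]float64{"friend1": 1.0, "friend2": 0.6}
	fmt.Println(inherit(opinions, trust))
}
```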
Network protocol
Maybe the data uploader/offerer initiates connection to the receiver over Tor so there's only a Tor address for incoming data, never an address for a data source, i.e. everybody has an address, but only for requesting data.
How to request? Post an encrypted message in an IRC room or something similar (better if messages are stored for a while) targeted to the node/identity you want to download from, along with your Tor address. Once the node sees that it checks if you can download and contacts you.
The encrypted messages could carry a prefix of the target identity's pubkey, such that the receiving node only tries to decrypt some of them, with some probability of success.
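A sketch of that prefix trick in Go -- the prefix length is an arbitrary choice here; keeping it short means many identities share each prefix, which preserves some deniability at the cost of extra decryption attempts:

```go
package main

import (
	"bytes"
	"fmt"
)

const prefixLen = 2 // short on purpose: collisions are a feature, not a bug

// worthDecrypting reports whether a posted message is plausibly addressed
// to one of our identities, judging only by its attached pubkey prefix.
func worthDecrypting(msgPrefix []byte, ourPubkeys [][]byte) bool {
	for _, pk := range ourPubkeys {
		if bytes.Equal(msgPrefix, pk[:prefixLen]) {
			return true
		}
	}
	return false
}

func main() {
	ours := [][]byte{{0xab, 0xcd, 0x01, 0x02}}
	fmt.Println(worthDecrypting([]byte{0xab, 0xcd}, ours)) // true: try to decrypt
	fmt.Println(worthDecrypting([]byte{0xff, 0x00}, ours)) // false: skip
}
```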
Nodes can choose to share with anyone, share only with pre-approved people, share only with people who know one of their addresses/entities (works like a PIN, you give the address to someone in the street, that person can reach you, to the next person you give another address etc., you can even have a public address and share limited data with that).
Data model
Each entry in a database should be in the following format:
`internal_id : real_world_identifier [, real_world_identifier...] : tag`
Which means you can associate one or multiple real-world identifiers with an internal id, and associate the real person designated by these identifiers with a tag. The tag should be part of the standard, or maybe negotiated between peers. It can be things like `scammer`, `thief`, `tax collector` etc., or `honest`, `good dentist` etc. Defining good enough labels may be tricky. The `internal_id` should be created by the user who made the record about the person.
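For example, a database could contain entries like these (all identifiers and tags invented):

```
f3a9c1 : +55-11-99999-0000, fulano@example.com : scammer
77b2e0 : @some_dentist_handle : good dentist
```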
At first this is not necessary, but additional bloat can be added to the protocol later: if the federated automated message-posting boards are working, each user can ask for more information about a given id, and the author of that record can contact the person asking and deliver free text with the given information. For this to work the internal id must be a public key and the information delivered must be signed with the corresponding private key, so the receiver of the information will know it's not just some spammer inventing stuff, but actually the person who originated that record.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28gravity
IPFS is nice as a personal archiving tool (edit: it's not). You store a bunch of data and make it available to the public.
The problem is that no one will ever know you have that data, therefore you need a place to publish it somewhere. Gravity was an attempt at being the tool for this job.
It was a website that showcased the collections from users, and it was also a command-line client that used your IPFS keys for authentication and allowed you to paste IPFS URIs and names and descriptions.
The site was intended to be easy to run so you could have multiple stellar bodies aggregating content and interact with them all in a standardized manner.
It also had an ActivityPub/"fediverse" integration so people could follow Gravity server users from Mastodon and friends and see new data they published as "tweets".
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28TiddlyWiki remoteStorage
TiddlyWiki is very good and useful, but since at this time I used multiple computers during the week, it wouldn't work for me to use it as a single file on my computer, so I had to hack its internal tiddler saving mechanism to instead save the raw data of each tiddler to remoteStorage and load them from that place also (ok, there was in theory a plugin system, but I had to read and understand the entire unformatted core source-code anyway).
There was also a server that fetched tiddlywikis from anyone's remoteStorage buckets (after authorization) and served these to the world, a quick and nice way to publish a TiddlyWiki -- which is a problem everyone in the TiddlyWiki world struggles with.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Democracia na América
Alexis de Tocqueville wrote a book doing nothing but praising the political system of the United States. And even so, and even having written his book almost 100 years before the earliest sign of the decadence of democracy in America, he noticed things that almost nobody notices to this day: the mandate of the Supreme Court is an enormous power, a centralizing force, immune to the popular vote, with highly undefined and therefore unlimited powers.
I don't know whether he concluded, though, that there is no such thing as a perfect balance of powers, nor can there be. There will always be holes.
In any case, the man was a genius just for having noticed this and other things, such as the fact that the figure of the president, also obviously a centralizing element, is not as powerful as the figure of a king of France, for example. But at the same time, through the veil of (always very sober) praise, he let slip that he probably also thought the weakness of the presidential office could not last forever.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28busca múltipla na estante virtual
A single-page app made in Elm with a Go backend that scraped estantevirtual.com.br in real-time for search results of multiple different search terms and aggregated the results per book store, so when you want to buy many books you can find the stores that have the biggest part of what you want and buy everything together, paying less for the delivery fee.
It had a very weird unicode issue I never managed to solve, something with the encoding estantevirtual.com.br used.
I also planned to build the entire checkout flow directly in this UI, but then decided it wasn't worth it. The search flow only was already good enough.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28A crappy zk-rollups explanation attempt
(Considering the example of zksync.io) (Also, don't believe me on any of this.)
- They are sidechains.
- You move tokens to the sidechain by depositing it on an Ethereum contract. Then your account is credited in the sidechain balance.
- Then you can make payments inside the sidechain by signing transactions and sending them to a central operator.
- The central operator takes transactions from a bunch of people, computes the new sidechain balances state and publishes a hash of that state to the Ethereum contract.
- The idea is that a single transaction in the blockchain contains a bunch of sidechain transactions.
- The operator also sends to the contract an abbreviated list of the sidechain transactions. The trick is making all signatures condensed in a single zero-knowledge proof which is enough for the contract to verify that the transition from the previous state to the new is good.
- Apparently they can fit 500 sidechain transactions in one mainchain transaction (each is 12 bytes). So I believe it's fair to say all this zk-rollup fanciness could be translated into "a system for aggregating transactions".
- I don't understand how the zero-knowledge proof works, but in this case it is a SNARK and requires a trusted setup, which I imagine is similar to this one.
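Ignoring the zero-knowledge machinery entirely, the "system for aggregating transactions" part can be sketched in Go like this (a toy model, not zksync's actual design):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// Tx is a toy sidechain transfer.
type Tx struct {
	From, To string
	Amount   uint64
}

// applyBatch applies a batch of sidechain transactions to the balances
// and returns a hash of the new state -- the thing the operator would
// publish to the mainchain contract, alongside the proof (elided here)
// that the state transition is valid.
func applyBatch(balances map[string]uint64, batch []Tx) [32]byte {
	for _, tx := range batch {
		if balances[tx.From] >= tx.Amount {
			balances[tx.From] -= tx.Amount
			balances[tx.To] += tx.Amount
		}
	}
	// hash accounts in sorted order so the state hash is deterministic
	// (the real thing uses a Merkle root, not this)
	accounts := make([]string, 0, len(balances))
	for a := range balances {
		accounts = append(accounts, a)
	}
	sort.Strings(accounts)
	h := sha256.New()
	for _, a := range accounts {
		fmt.Fprintf(h, "%s=%d;", a, balances[a])
	}
	var state [32]byte
	copy(state[:], h.Sum(nil))
	return state
}

func main() {
	balances := map[string]uint64{"alice": 100, "bob": 0}
	state := applyBatch(balances, []Tx{{From: "alice", To: "bob", Amount: 40}})
	fmt.Printf("publish to contract: %x\n", state[:8])
}
```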
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28requesthub.xyz
An app that was supposed to be some kind of declarative connector between two services, one that sent webhooks and the other that accepted HTTP requests of any kind. You would proxy and transform the webhooks using RequestHub and create a new request to the other service using that data.
The transformations were declared in the almighty `jq` language.

It worked and had other functions planned for the future, but I guess it was too arcane; even I was confused by it sometimes.
Also it was very prone to (involuntary) spam attacks, like some that did happen. Maybe it would work better in a world of anonymous satoshi payments.
Later I tried to revive it as a Trello Power-Up that would create comments on cards automatically according to some transformation rules and webhooks received.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Rede Relâmpago
When referring to the Lightning Network from O que é Bitcoin?, we, Brazilians and Portuguese, should use the term "Relâmpago" or "Rede Relâmpago". "Relâmpago" is a beautiful and appropriate word, easy for all our compatriots to pronounce. Enough with unnecessary anglicisms.
Example of a hypothetical conversation in Brazil using this nomenclature:
– Can I pay with Relâmpago? – Oh, sure! I'll generate a boleto for you right here.
Note how much more natural and easy that is than the alternative:
– Can I pay with láitenim? – Leite ninho? ("láitenim" being the Brazilian pronunciation of "lightning", misheard here as "Leite Ninho", a powdered-milk brand.)
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Eltoo
Read the paper, it's actually nice and small. You can read only up to section 4.2 and it will be enough. Done.
Ok, you don't want to. Or you tried but still want to read here.
Eltoo is a way of keeping payment channel state that works better than the original scheme used in Lightning. Since Lightning is a bunch of different protocols glued together, it can replace just the part that previously dealt with keeping the payment channel state.
Eltoo works like this: A and B want a payment channel, so they create a multisig transaction with deposits from both -- or from just one, doesn't matter. That transaction is only spendable if both cooperate. So if one of them is unresponsive or non-cooperative the other must have a way to get his funds back, so they also create an update transaction but don't publish it to the blockchain. That update transaction spends to a settlement transaction that then distributes the money back to A and B as their balances say.
If they are cooperative they can change the balances of the channel by just creating new update transactions and settlement transactions and number them like 1, 2, 3, 4 etc.
Solid arrows mean a transaction is presigned to spend only that specific previous transaction; dotted arrows mean it's a floating transaction that can spend any of the previous ones.
Why do they need an update and a settlement transaction?
Because if B publishes update2 (in which his balance was greater) A needs some time to publish update4 (the latest, which holds the correct state of balances).
Each update transaction can be spent by any newer update transaction immediately or by its own specific settlement transaction only after some time -- or some blocks.
Hopefully you got that.
How do they close the channel?
If they're cooperative they can just agree to spend the funding transaction, that first multisig transaction I mentioned, to whatever destinations they want. If one party isn't cooperating the other can just publish the latest update transaction, wait a while, then publish its settlement transaction.
How is this better than the previous way of keeping channel states?
Eltoo is better because nodes only have to keep the last set of update and settlement transactions. Before they had to keep all intermediate state updates.
If it is so better why didn't they do it first?
Because they didn't have the idea. And also because they needed an update to the Bitcoin protocol that allowed the presigned update transactions to spend any of the previous update transactions. This protocol update is called `SIGHASH_NOINPUT`[^anyprevout], you've seen this name out there. By marking a transaction with `SIGHASH_NOINPUT` it enters a mystical state and becomes a floating transaction that can be bound to any other transaction as long as its unlocking script matches the locking script.

Why can't update2 bind itself to update4 and spend that?
Good question. It can. But then it can't anymore, because Eltoo uses `OP_CHECKLOCKTIMEVERIFY` to prevent it -- in a mode where it doesn't actually check a locktime, but a sequence number. It's all arcane stuff.

And then Eltoo update transactions are numbered, and their lock/unlock scripts will only match if a transaction is being spent by another one with a greater number.
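The effect of all that trickery can be modeled in a few lines (a toy illustration in Go, not actual Script):

```go
package main

import "fmt"

// Update is a toy model of an Eltoo update transaction, identified only
// by its state number.
type Update struct{ StateNum uint32 }

// canBind reports whether a floating update transaction may rebind to and
// spend a published one: only a strictly newer state can do it.
func canBind(spender, target Update) bool {
	return spender.StateNum > target.StateNum
}

func main() {
	update2, update4 := Update{2}, Update{4}
	fmt.Println(canBind(update4, update2)) // true: the newest state wins
	fmt.Println(canBind(update2, update4)) // false: an old state can't roll back a new one
}
```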
Do Eltoo channels expire?
No.
What is that "on-chain protocol" they talk about in the paper?
That's just an example to guide you through how the off-chain protocol works. Read it carefully or don't read it at all. The off-chain mechanics are different from the on-chain mechanics. Repeating: the on-chain protocol is useless in the real world, it's just a didactic tool.
[^anyprevout]: Later `SIGHASH_NOINPUT` was modified to fit better with Taproot and Schnorr signatures and renamed to `SIGHASH_ANYPREVOUT`.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Truthcoin as a spacechain
To be clear, the term "spacechain" here refers only to the general concept of blindly merge-mined (BMM) chains without a native money-token, not including the "spacecoins".
The basic idea is that for Truthcoin/Hivemind to work we need
- Balances of Votecoin tokens, i.e. a way to keep track of who owns how much of the oracle corporation;
- Bitcoin tokens to be used for buying and selling prediction market shares, i.e. money to gamble;
- A blockchain, i.e. some timestamping service that emits blocks ordered with transactions and can keep track of internal state and change the state -- including the balances of the Votecoin tokens and of the Bitcoin tokens that are assigned to individual prediction markets according to predefined rules;
A spacechain, i.e. a blindly merge-mined chain, gives us 1 and 3. We can just write any logic for that and that should be very easy. It doesn't give us 2, and it also has the problem of how the spacechain users can pay the spacechain miners (which is why the spacecoins were envisioned in the first place, but we don't have spacecoins here).
But remember we have votecoins already. Votecoins (VTC) should represent a share in the oracle corporation, which means they entitle their holders to some revenue -- even though they also burden their holders with the duty to vote in event outcomes (at the risk of losing part of their own votecoin balance) --, and they can be exchanged, so we can assume they will have some value.
So we could in theory use these valuable tokens to pay the spacechain miners. That wouldn't be great because it would pervert their original purpose and wouldn't solve problem 2 above -- unless we also used the votecoins to bet, in which case they would be just another shitcoin on the planet with no network effect competing against Bitcoin, and would just cause harm to humanity.
What we can do instead is to create a native mechanism for issuing virtual Bitcoin tokens (vBTC) in this chain, collateralized by votecoins; then we can use these vBTC to both gamble (solving problem 2) and pay miners (fixing the hole in the spacechain BMM design).
For example, considering the VTC to be worth 0.001 BTC, any VTC holder could lock 5 VTC (worth 0.005 BTC) and get 0.001 vBTC, then use it to gamble or sell it to others who want to gamble. The VTC holder still technically owns the VTC and can and must still participate in the oracle decisions. They just have to pay the 0.001 vBTC back before they can claim their VTC and send it elsewhere.
They stand to gain by selling vBTC if there is a premium for vBTC over BTC (i.e. people want to gamble) and then rebuying the vBTC once that premium goes away or reverts itself.
For this scheme to work the chain must know the exchange rate between VTC and BTC, which can be provided by the oracle corporation itself.
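A toy version of the issuance rule in Go, with an invented 5x collateral requirement matching the numbers above:

```go
package main

import "fmt"

// collateralRatio is invented for this sketch: lock 5x the value you mint.
const collateralRatio = 5.0

// vbtcIssuable returns how much vBTC can be minted against an amount of
// locked VTC, at the VTC/BTC exchange rate provided by the oracle.
func vbtcIssuable(lockedVTC, vtcPriceBTC float64) float64 {
	return lockedVTC * vtcPriceBTC / collateralRatio
}

func main() {
	// 5 VTC at 0.001 BTC each = 0.005 BTC of collateral -> 0.001 vBTC
	fmt.Println(vbtcIssuable(5, 0.001))
}
```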
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28idea: Link sharing incentivized by satoshis
See https://2key.io/ and https://www.youtube.com/watch?v=CEwRv7qw4fY&t=192s.
I think the general idea is to make a self-serving automatic referral program for individual links, but I wasn't patient enough to deeply understand either of the above ideas.
Solving fraud is an issue. People can fake clicks.
One possible solution is to track conversions instead of clicks, but then it's too complex as the receiving side must do stuff and be trusted to do it correctly.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Haskell Monoids
You've seen that `<>` syntax and noticed it is imported from `Data.Monoid`?

I've always thought `<>` was a pretty complex mathematical function and it was very odd that people were using it for `Text` values, like `"whatever " <> textValue <> " end."`.

It turns out `Text` is a Monoid. That means it implements the Monoid class (or typeclass), which means it has a particular way of being concatenated. Any list could be a Monoid, any abstraction you can think of for which it makes sense to concatenate could be a Monoid, and it would use the same `<>` syntax. What exactly `<>` does with the values when concatenating depends on the typeclass implementation of Monoid.

We can assume, for example, that `Text` implements Monoid by just joining the text bytes, and now we can use `<>` without getting puzzled about it.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28O mito do objetivo
This guy's insight, according to which pursuing fixed objectives not only kills creativity but also fails to reach the very objective in question -- something I had always believed, though without much confirmation and (maybe because of that) without saying it openly --, fits the general idea that all social structures worth anything arise from play and games.
Seriousness, which is the opposite of play, is represented here by the objective. Very serious people with a plan and a final objective, everything laid out.
Actually this insight is quite well-worn by now. Even I had already mentioned it, citing Taleb, in Processos Antifrágeis.
And finally there is this comic strip I found at random and which represents it well:
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Soft-forks on Bitcoin
A traditional soft-fork activation plays out like this:
- someone makes a proposal
- if a half-dozen respected Core developers like it, they implement it and talk about it
- everybody loves the idea
- they ship it in Bitcoin Core
- miners turn it on
A traditional soft-fork failure plays out like this:
- someone makes a proposal
- if a half-dozen respected Core developers do not care much about the idea, they don't do anything
- people fight on Twitter about the merits of the idea forever
A sidechain activation within BIP-300 plays out like this:
- someone writes the sidechain software
- if a bunch of people are interested in that, they start playing with it in test mode
- if it is really good people launch a proposal to miners
- miners vote yes or no
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28O Bitcoin como um sistema social humano
After all, what is Bitcoin? I will not answer this question by explaining what a "blockchain" is or anything of the sort, as everyone does so very badly. The best explanation in Portuguese I have ever seen is here, but even so no explanation will ever be definitive.
The explanation of the protocol alone -- of what a `bitcoind` program does while running on a computer, how it communicates with others on other computers, and the incentives at play to guarantee with reasonable probability that a consensus will be reached about who owns which part of which transaction --, although not overly complicated, will require the beginner to go over it many times before feeling comfortable saying they understand a little of it.

And this technical part, despite having been the fundamental insight that generated the miraculous event called Bitcoin, is not the most important part today. If it were, all those other coins would be competitors of Bitcoin, but they are not, and never can be, because they are nowhere near having the other elements that make up Bitcoin. These are:
- The structure
Bitcoin is a system composed of independent parts.
There are programmers who work on the protocol and on applications; day after day new programmers arrive and others leave, and they work sometimes together, sometimes without being aware of one another, sometimes on their own, sometimes paid by interested companies.
There are the users who perform full validation, that is, who run some Bitcoin program and contribute to the diffusion of blocks and transactions, rejecting malicious users and preventing attacks by ill-intentioned miners.
There are the savers, accumulators or holders of bitcoins, who know the possibilities the world holds for Bitcoin, await the day when the Bitcoin standard will be a worldwide reality and, for that very reason, attribute to their bitcoins values much higher than current market prices, holding on to them.
"Cryptocurrency" speculators are not part of this system, nor are companies that accept payment in bitcoins only to immediately sell everything for state money, and much less people who use bitcoins, and the Bitcoin brand itself, to run their scams and suchlike.
- The culture
I mentioned that there are companies that pay programmers to work on the open-source code of BitcoinCore or of other programs related to the Bitcoin network -- or even on applications not necessarily tied to the fundamental layer of the protocol. None of these interested companies, however, controls Bitcoin, and that is the main element of Bitcoin's culture.
Bitcoin's purpose has always been to be an open network, with no bosses, no politics involved, no need to ask permission to participate. The fact that Satoshi Nakamoto himself voluntarily disappeared from the discussions was fundamental for Bitcoin not to be seen as a system dependent on him, or for him not to be understood as its boss. In other "cryptocurrencies" none of that happened. The supreme boss of Ethereum is still around, ordering things this way and that and inventing new elements for the protocol which are automatically accepted by the entire community; the same goes for Zcash, EOS, Ripple, Litecoin and even Bitcoin Cash. Worse still: Satoshi Nakamoto left without any money; he never touched the thousands of bitcoins he generated in the first blocks -- while the leaders of the aforementioned garbage charged their first users a fortune for the right to use them, or are there to this day collecting dividends.
All this, and other things besides -- the anti-state mentality of the community's most prominent members and their enthusiasm for open p2p systems, for example --, creates an air of freedom in which attempts to centralize the currency are suspected and execrated.
- The history
The notion that Bitcoin cannot be controlled by anyone went through two tests in 2017 and came out of them greatly strengthened: the first was the split between Bitcoin (BTC) and Bitcoin Cash (BCH), a work of social engineering that had middling success in stealing part of the brand and the users of the true Bitcoin; then came the attempted full takeover of Bitcoin promoted by more or less the same interested parties, called SegWit2x, which failed completely, though not without first getting in the way and spreading lies in every direction. These two failures proved that Bitcoin, even being a disorganized community with no clear leaders, is immune to capture by interested groups, which is one more miracle -- or, as they say, a Schelling point.
That crucial period in Bitcoin's history made it clear that hard-forks are essentially incompatible with the nature of the protocol, so that in the future there is no chance of a suggestion like printing more bitcoins than scheduled ever being taken seriously (but, of course, there is always the possibility of the whole culture being lost, of people forgetting this history and of Bitcoin being co-opted -- hence the importance of self-education and of spreading these principles).
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Liberalismo oitocentista
When I started reading about "liberalism" on the internet there were always those lists of recommended books: some Ludwig von Mises, Milton Friedman and Alexis de Tocqueville. "Democracy in America". To me all that talk of democracy seemed strange, since what I was interested in was how a free market would work, without regulations and so on.
It seems Tocqueville was an inheritance from the same people who adored the expression "classical liberalism". Classical liberalism was a political thing, against monarchy and in favor of democracy, and there Tocqueville fit very well.
A few years went by and everything changed. Now I think someone reading on the internet will find no mention at all of Tocqueville or classical liberalism, that bore of democracy and its legalistic tedium. "Libertarianism", also an unfortunate name, took over everything, and grew far beyond what the internet liberal movement ever imagined possible.
Brazilian libertarians are anarchists: they detest democracy, recognizing in it a vector for socialist attacks on any last bit of free market that exists -- and on the individual liberties of citizens (this one still a point in common with the nineteenth-century liberals). They are in fact much more inclined to defend monarchy than democracy.
And that is a good thing. At last a person can defend reasonable principles of free markets and individualism without having to associate with the eighteenth- and nineteenth-century movement that did good things, but was also responsible for horrible things like the French Revolution and all its absurdities, and out of which the entire socialist movement came.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28A chatura Kelsen
I have witnessed this same phenomenon several times: there is a group of friends or proto-friends happily chatting about conservatism, traditionalism, anti-communism, economic liberalism, the free market, the philosophy of Olavo de Carvalho. It is an incredible moment, because for everyone there it is always so hard to find someone to talk to about these subjects.
It turns out one of them went to law school. Having gone to law school believing it would bring him some knowledge (since all the philosophers of old went to law school!), this fellow, unlike the others, does not realize that his school is a nullity, a disgrace, a period of his life thrown away -- and believes that the content passed on to him by professors who are there to help the students prepare for the bar exam (the OAB exam) is valuable.
He starts talking about Kelsen. The pure theory of law, hermeneutics, philosophy of law. The conversation falls apart. Nobody knows what to say. The pure theory of law is not wrong, because it is mere pure logic and as such cannot be refuted; and since it bears no relation to the world, there is no way to pick up another subject from it and leave that territory. The young philosophers waste the next two hours talking about Kelsen, Kelsen. A presence that offends them, that seems wrong, that has everything to be wrong, yet is right. Right and useless, it devours their ideas, which are digested by the pure theory of law.
It is imperative to establish this rule: talking about Kelsen is only allowed if his ideas are not addressed or taken into account. Only compliments or insults will be tolerated: Kelsen was a good man; Kelsen was a fool. That's it.
Here is a recorded example of the phenomenon described above: https://www.youtube.com/watch?v=CKb8Ij5ThvA: Flavio Morgenstern, all friendly, complimenting the other guy, saying interesting things about the world; and the other, who must have been his friend before going to law school, starts talking about Kelsen, quite confident that it is relevant, and off he goes with Kelsen, philosophy of law, all that tremendous tedium.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28tempreites
My first library to get stars on GitHub was a very stupid templating library that used just HTML and HTML attributes ("DSL-free"). I was inspired by http://microjs.com/ at the time and ended up not using the library. Probably no one ever did.