-
@ fcc4252f:e9ec0d5d
2024-04-17 22:52:57
Check out the latest auctions and products below 👀
Plebeian Market sees growth every week, with new merchants setting up stalls and showcasing their products and services. We are grateful to each and every one of you for your ongoing support! Thank you!
Latest Auctions on the Marketplace
MaxisClub - The Halving
Celebrate the next bitcoin epoch with another classic MaxisClub meme!
Isabel Sydow Greeting Cards
Newest Merchants
Depbit - QR Code Kits
Depbit offers an alternative to metal-plated QR seed backups. The problem with metal plates is that they are very expensive and very difficult to make. People generally don't buy more than one metal backup plate, and keeping only a single copy isn't best practice for backing up your seed. With Depbit you can buy 15 plastic QR kits to use and discard as needed.
Watch Video Here
In principle, everyone prefers the metal option, but that changes after fighting with the hammer and seeing how easy the plastic one is. Everyone who has tried it has loved it, so try it for yourself!
Order Here
Join Us!
Plebeian Market is a commerce platform that supports open trade and communications while helping individuals and merchants transition onto a bitcoin standard.
Let's Build Together
Bekka
-
@ 38f9a05c:6999fc04
2024-04-07 12:08:30
In today's world, where self-promotion appears to be the standard, there exists a subtle charm in modesty. As a child, I frequently encountered the expression, "Only a donkey praises their tail." Its significance has remained with me throughout my life. It serves as a reminder that authentic excellence does not require shouting from the rooftops; instead, it manifests itself in actions, not words.
Allow me to now introduce Alfred Adler, a pioneer in the field of psychology. Born in Vienna in 1870, Adler's theories challenged the prevailing views of his time, particularly Sigmund Freud's emphasis on the unconscious mind. Adler proposed individual psychology, focusing on the unique experiences and perceptions that shape each person's worldview. Central to his theories was the concept of the "inferiority complex" and its counterpart, the "superiority complex," shedding light on how individuals grapple with feelings of inadequacy and superiority.
Inferiority Complex
The "inferiority complex" describes persistent feelings of inadequacy and self-doubt, stemming from early experiences. Individuals afflicted with this complex often seek validation and may engage in compensatory behaviors. Therapy and self-reflection are key to addressing and overcoming these feelings, fostering healthier self-esteem and confidence.
An example of an inferiority complex might be a person who, from a young age, consistently felt overshadowed by their siblings' achievements and talents. Despite their own unique abilities and successes, they internalize a belief that they are inherently inferior to others. This belief could manifest in various ways throughout their life, such as constantly seeking approval from others, feeling anxious or inadequate in social situations, or striving excessively for success in an attempt to prove their worth.
Superiority Complex
Conversely, the "superiority complex" manifests as an exaggerated sense of self-importance and entitlement. Individuals with this complex may exhibit arrogance and lack empathy towards others, struggling with meaningful relationships and criticism.
An example of a superiority complex could be seen in a person who consistently belittles others and insists on being the center of attention in social settings. They might boast about their achievements, talents, or possessions in an attempt to assert their superiority over those around them.
Returning to the childhood adage "Only a donkey praises their tail": modesty and humility are often misconstrued as weakness or a lack of self-confidence. Nevertheless, they are far from that. Humility entails possessing a realistic view of oneself and comprehending that one's value should not rely solely on external validation or praise. Instead, it's found in the genuine connections we make with others and the positive impact we have on the world around us.
By abstaining from boasting about ourselves, we provide room for others to shine. It is not about denigrating our accomplishments or pretending to be less than we are. On the contrary, it involves acknowledging our abilities without feeling compelled to advertise them to the public continually. Therein lies the elegance of allowing our actions to speak louder than our words.
Moreover, humility allows for personal growth and learning. When we're humble, we're open to feedback and constructive criticism. Instead of becoming defensive or dismissive, we approach each opportunity for improvement with an open mind and a willingness to learn. This mindset not only helps us develop professionally but also fosters gratitude for the knowledge and experiences that others bring to the table.
Humility also turns our attention outward. When we're not constantly focused on ourselves, we become more attuned to the needs and experiences of those around us. We listen more intently, offer support more readily, and celebrate the successes of others with genuine enthusiasm. In doing so, we cultivate deeper connections and create a more inclusive and supportive community.
In a society that often glorifies self-promotion and individualism, it can be challenging to embrace humility fully. However, it's a quality worth cultivating, both personally and professionally. By focusing on what we can contribute rather than what we can gain, we create a more harmonious and compassionate world. Hence, should you ever find the urge to trumpet your achievements or magnify your ego, pause to reflect on the timeless wisdom encapsulated in the age-old adage: "Only a donkey praises their tail," alongside the profound insights of the Austrian psychiatrist's psychological framework. Instead of trying to prove yourself with words, show who you are through your actions. Embrace humility, which means being modest and not bragging. True greatness isn't about loudly boasting about your good qualities. It's about having inner strength and making a positive impact on the people around you.
Lastly, as the great Roman emperor Marcus Aurelius said, "Waste no more time arguing about what a good man should be. Be one."
-
@ 3bf0c63f:aefa459d
2024-03-23 08:57:08
Nostr is not decentralized nor censorship-resistant
Peter Todd has been saying this for a long time, and all along I've been thinking he misunderstands everything, but I guess a more charitable interpretation is that he is right.
Nostr today is indeed centralized.
Yesterday I published two harmless notes with the exact same content at the same time. In two minutes the notes had a noticeable difference in responses:
The top one was published to `wss://nostr.wine`, `wss://nos.lol`, and `wss://pyramid.fiatjaf.com`. The second was published only to the relay where I generally publish all my notes, `wss://pyramid.fiatjaf.com`, which is announced in my NIP-05 file and in my NIP-65 relay list.

A few minutes later I published that screenshot again in two identical notes to the same sets of relays, asking if people understood the implications. The difference in the quantity of responses can still be seen today:
These results are skewed now by the fact that the two notes got rebroadcasted to multiple relays after some time, but the fundamental point remains.
What happened was that far more people saw the first note than the second, and if Nostr were really censorship-resistant that shouldn't have happened at all.
Some people implied in the comments, with an air of obviousness, that publishing the note to "more relays" should have predictably resulted in more replies, which, again, shouldn't be the case if Nostr is really censorship-resistant.
What happens is that most people who engaged with the note are following me, in the sense that they have instructed their clients to fetch my notes on their behalf and present them in the UI, and clients are failing to do that despite me making it clear in multiple ways that my notes are to be found on `wss://pyramid.fiatjaf.com`.

If we were talking not about me, but about some public figure being censored by the State and banned (or shadowbanned) by the 3 biggest public relays, the sad reality is that this person would immediately see their reach reduced to ~10% of what it was before. This is not at all unlike what happened to dozens of personalities that were banned from the corporate social media platforms and then moved to other platforms -- how many of their original followers switched to these other platforms? Probably some small percentage close to 10%. In that sense, Nostr today is similar to what we had before.
Peter Todd is right that if the way Nostr works is that you just subscribe to a small set of relays and expect to get everything from them then it tends to get very centralized very fast, and this is the reality today.
Peter Todd is wrong that Nostr is inherently centralized or that it needs a protocol change to become what it has always purported to be. He is in fact wrong today, because what is written above is not valid for all clients of today, and if we drive in the right direction we can successfully make Peter Todd be more and more wrong as time passes, instead of the contrary.
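To make this concrete, here is a minimal sketch of a client doing the right thing: fetching an author's notes from the relay that author has announced, using a plain NIP-01 REQ over a WebSocket. The pubkey is a hypothetical placeholder, and error handling is omitted.

```typescript
// Minimal sketch: read an author's notes from the relay they announced,
// instead of only from the client's own default relay set.
const relay = new WebSocket("wss://pyramid.fiatjaf.com");
const author = "<hex pubkey of the author>"; // hypothetical placeholder

relay.onopen = () => {
  // NIP-01 REQ: a subscription id followed by a filter for the author's kind-1 notes
  relay.send(
    JSON.stringify(["REQ", "author-feed", { authors: [author], kinds: [1], limit: 50 }])
  );
};

relay.onmessage = (msg) => {
  const [type, subId, event] = JSON.parse(msg.data as string);
  if (type === "EVENT" && subId === "author-feed") {
    console.log(event.created_at, event.content);
  }
};
```

A gossip-aware client would do this for every followed pubkey, using each author's announced relays rather than a fixed list.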
See also:
-
@ 5df413d4:2add4f5b
2024-04-07 04:11:25
Like a Bitcoiner Litany Against Fear, "Bitcoin is trustless" is a mantra soulfully and sincerely recited by acolytes of the technology. And it is true – Bitcoin is trustless. No single entity can control or force changes onto the Bitcoin network. With enough intention, so long as you can access the necessary tools, no intermediary can prevent you from buying, selling, spending, holding, or otherwise using bitcoin, even under the most repressive of circumstances.
Bitcoin is trustless, but bitcoining...bitcoining is all about trust! Across self, family, friends, and community, bitcoining forces trust dynamics to the surface and forms a lived experience of reimagined and rebuilt interpersonal and social trust structures that extend far beyond the timechain.
Developing Trust in Self
https://image.nostr.build/788e81071ac4b904d75b2f22e23f90e9fda61ab42482bcba4840baf04d3e7a34.jpg
Bitcoining is, first and foremost, a practice of radical self-trust. As a bearer money, any and all mistakes resulting in loss of bitcoin rest squarely at the feet of the individual holder and, after the fact, can only be worn like a heavy iron dunce cap by the same. As such, bitcoiners must first conquer their inner doubts and develop an unshakeable internal trust in self. This is the foundation of personal responsibility required to take full control of one’s financial life and to shoulder, without fear or doubt, the sobering awareness that the wealth (and perhaps freedom) of one’s self, family, and future progeny may well hinge entirely on the “rightness” of one’s decisions around bitcoin today.
Rebuilding Trust in Family
https://image.nostr.build/ab2d3d270c842d65d3f434382498d34baea1e30afe8f66e20e7dd682a7054633.jpg
A few months ago, I found myself explaining the nature of bitcoin self-custody for inheritance to a senior private wealth management executive. As the implications became clear to her, in a moment of unfiltered horror, she exclaimed “you mean as I get older, I would have to trust my children with this?” My response, of course, “Who else but your own children should you trust to secure your bitcoin wealth as you age?” Confronted with my rebuttal, she did not have an answer, but I could tell that it was being digested as food for thought.
The sad truth is that the fiat world orchestrates a pervasive and never-ending psyop to estrange us from family – and to thus divorce us from the powerful benefits of intergenerational family economics. Bitcoin fixes this. Bitcoining with a focus on long-term and generational wealth is a strong catalyst for us to reject the unbalanced, scorched-earth consumerist "individualism" that now increasingly pits young against old. With our new bearings as bitcoiners, we realize that it is time to heal generational rifts in our families and that many priceless things are regained from rebuilding lost bi-directional familial ties of economic support and care. Bitcoin wealth being “stacked” today will mean very little unless there are associated immediate and extended family units with strong bonds of shared trust and trust-distributed risk in place to shepherd keys far into the future.
Deepening Trust in Friends
https://image.nostr.build/ff8641bbc8a2b118fe772d8c6f0c25e41e04b17c68519371a1c4658c58d88b5b.jpg
While bitcoin's monetary network might make us "free" (at least from time-theft), it is parallel human networks that must ultimately make us happy and give our lives meaning. It is best that we understand that happiness, more than anything else, is the opposite of loneliness – and that this is one of the few things that money truly cannot buy. The double-edged sword of bitcoining is that without strong supporting bitcoiner friendships, bitcoin's promise of extreme future wealth threatens to bring an even more extreme isolation along with it for many. And humans die in isolation... It's dangerous to go alone, as they say. Who will you be able to turn to when the world sees you as little more than a walking sat symbol?
The preemptive remedy here is seeking out and cultivating meaningful friendships with other bitcoiners and, like Noah before the flood, working to get your most important nocoiner friends on the boat before it’s too late. Bitcoiner friendships form the social layer of bitcoin wealth protection and can provide a broad range of “social insurance” against catastrophe – economic or otherwise. As such, bitcoiner friendships are integral to one’s wider “real-world” bitcoin security model and must not be overlooked. Ask yourself – what’s the point of having nice things if you have no one to share them with?
Leveraging Trust in Pseudonymous Community
https://image.nostr.build/4e44e8c7743e92fcff9696857ac467bf78fc11066a85143608c5c7525e0d4f8e.jpg
Scenario: You arrive in a foreign country. An anon that you "know from the internet" suggests you reach out to another anon who supposedly lives in the city you're visiting. Upon making contact, you receive GPS coordinates – I repeat, coordinates, not an address – via an encrypted chat set to "burn after reading" mode as an invite to come hang out with a group of local bitcoiners.
Totally normal stuff, right? 😅 Well, this is essentially the situation I stumbled into not long ago whilst traveling in Asia. Ultimately, I felt comfortable joining this meetup because of the nature of the larger bitcoiner community. All of us in the room might have been nyms to each other but we shared mutual friends who could cross-verify us without divulging unnecessary private information, of course. In bitcoin, and a few other very strange communities that I count myself a member of, meetings like these serve a critical function for broader, distributed reputation and trust building within what is otherwise a semi-transient, geographically dispersed, pseudonymous community.
In such situations, good behavior confers all parties with important trust-based social capital. The mutual friend gains reputation as someone who “does not associate with or recommend shitcoiners / bad actors.” The meetup attendees, if they behave, are more likely to be recommended by both the mutual friend and by each other the next time around. Thus, a positive social feedback loop emerges, with bitcoiners going around saying nice things about each other and having those things largely proven to be true in subsequent real world interactions. The implications here are far-reaching – as these positive vibrations flow through bitcoin’s living human terrain, they amplify and accelerate the chance meetings and serendipitous exchange of ideas from which the future household names of bitcoin tools, enterprises, and communities will certainly be born.
Conclusion
For most, certainly myself included, engaging with bitcoin forces a dramatic and comprehensive reshaping of both one’s understanding of and relationship with trust. Ironically, bitcoining, which starts out as an individual endeavor to harness the transformative power of trustless money in one’s life, all but requires the establishment of both internal and social trust models that are more robust and more meaningful, than anything our fiat-minded precoiner selves could have ever imagined. Bitcoin is trustless. But bitcoining is nothing without trust.
-
@ 1689f2c8:5f809f76
2024-04-05 18:37:56
test
-
@ 3c984938:2ec11289
2024-04-01 09:36:34
A long time ago, a girl resided on a tropical island. The girl's name was Sirena. She lived with her mother in close proximity to the Hagåtña River. Sirena's mother was a bit strict and tried to teach her to follow in her footsteps and become a lady, but Sirena only dreamed of swimming all day.
Sirena's only outlet was when her Godmother would come to visit. She always brought surprises, such as new trinkets and stories, and secretly gave her coconut candy.
Sirena's mother was preparing for a special event and needed her to acquire special ingredients from the nearby village. She had a significant amount of preparation to complete, so she requested that Sirena procure the necessary ingredients and return promptly. Sirena promised her mother that she would hurry back.
She set off on the village path, keeping her eyes on the trail, trying her best to be a good daughter and hurry back. But she took one brief glance at the river.
She was mesmerized by the water and jumped in before she realized it. She swam down the river to the ocean, completely forgetting her mother's errand and her promise. Even after the sun had set, Sirena had not returned home and was still swimming. Her mother, frustrated and angry, shouted, "She's swimming again! Look at how late it is! If you have such a profound love for the ocean, then become a fish!", unleashing a powerful curse upon the water. The godmother begged her daughter to calm down and, knowing the extent of the curse, tried to counteract it. She pleaded to the ocean, "Please, let me keep my goddaughter's heart; please let that much remain."
In the eerie glow of the moonlight, remembering her mother's errand, she swam back to where the ocean meets the river. But she felt a strange sensation in her lower half as the water swirled around her. She looked down to see that she now had fins instead of feet. With this new transformation, she regretted not finishing her mother's errand.
Sirena was happy, because now she could be in the water all day long. Still, she wished she had done what her mother asked, and that her mother had found another way to punish her. A part of her would forever be saddened by the loss of her mother and godmother.
It is said that sailors have spotted mermaids on their voyages across the sea, but they're too crafty and swift to be caught.
Historical Notes/context
The story originates from the indigenous island of Guam and has been shared for generations. There are multiple versions of the story. The term Sirena is not present in the Chamorro language; however, in Spanish it refers to the mythological creature known as the mermaid. The capital of the island is Hagåtña. The Hagåtña River flows beneath the Spanish Bridge, where her monument can be seen. Many believe Sirena resided here. There is speculation that this story was crafted to frighten children into listening to their parents and not playing in the river's water, as it was a vital water source. This became more relevant when the Spanish established Guam as a port for whaling, pirates, and trade during the Spanish Galleon trade era (16th century). It should be noted that the women's roles in the Chamorro version reflect a matrilineal society, which can be seen with the Grandma/Godmother.
👉I like to point out that Thomas Edison patented the light bulb in 1879, so I visualized the lights outside and inside the huts as flames. AI (text-to-image) does not account for these kinds of details.
👉This also goes back to "be careful what you wish for," because you may actually get your wish.
👉Chamorro people are Pacific Islanders, similar to Hawaiians, and have a brown/tan skin complexion.
👉My mermaid looks strikingly similar to Disney's recent version of Ariel. I thought that was interesting because I put just "mermaid" into the text prompt. It's worth pointing out AI's limitations: it's not as advanced as I originally thought, likely due to its limited data; in this case it seems to have only Disney as a reference for mermaids.
based on ordinary prompts
Prompt used:
That's all! Thank you until next time, (N)osyters!
If you like it, send me some ❤❤hearts❤, and if you didn't like it, ⚡⚡🍑🍑zap⚡⚡🍑🍑 me!
For email updates, you can subscribe at paragraph.xyz/@onigirl, or below if you're using the Yakihonne app.
-
@ 1bc70a01:24f6a411
2024-03-30 01:27:45
We're all daily users of Nostr, so it can be easy to see things through an advanced user lens while forgetting what it felt like to be a newbie. I thought I would take some time to go over the major clients from the start, in hopes of evaluating what the experience might feel like for a new user.
The other reason for running this review is to hopefully improve the overall Nostr retention rate across clients. As it stands, according to nostr.band, retention of trusted users 30 days after signup trends to 0 for recent cohorts. This seems to be supported by the lack of growth in daily active users, with the average remaining in the 10,000-12,000 range for "trusted" pubkeys.
The following report consists of several criteria which I felt were essential to a basic first-time social media experience:
- Ease of signup
- Ease of logging in
- Ability to understand what you are looking at (sufficient explanations)
- Seeing a good initial feed
- Ability to follow something of interest
- Minimizing technical /dev lingo
- A fast scrolling experience
- Ability to easily upload media
- A good search experience overall
- Good keyword searching
- Hashtag searching
- Ability to follow hashtags
- Easily accessing followed hashtags
- Good experience reacting to notes
In total there are 140 points, 10 for each category. This is by no means the most comprehensive score card, but I felt it did a decent job covering most things you'd want to do in a social client.
Some notes of caution:
- This report and score card are meant to be a general quick glance at where your client may stand in overall UX. It does not differentiate between the intended target audiences.
- The criteria that I deem important may not be important to you as the founder / developer, so take it for what it’s worth. Adding your desired criteria may increase your score significantly. For example, I did not evaluate the zap experience, or thoroughly test nested replies.
- This report is not a substitute for proper user testing. It’s just one person’s observations. While we have done some user testing in the past, I highly recommend doing your own. You can do so by approaching and interviewing new users (if you are able to distinguish if they came from your client), or via other user testing software. Talk to me (@karnage) if you need some help getting set up.
- People’s reported experience regarding usability may vary greatly depending on their familiarity with cryptographic concepts, their background, and technical experience. What I may deem as a great score of 10, may not be a 10 for others. I have seen user tests where “obvious” things were not obvious to testers.
- This report only looks at the English language version of the client. The actual user experience for someone on a different language version of the app could be totally different from what is graded here. It’s worth considering geographies of where users are coming from and how they experience your client.
- I did not test re-activation of new users. Meaning, once they close the app, I did not test if they are pulled back by some notification or other means. This is a crucial aspect of any new app usage that should be considered carefully.
Tested Clients: Damus, Amethyst, Primal iOS, Snort (web), Iris (sort of), Coracle, Nostur.
I also tested Instagram and X/Twitter for comparison.
Results, highest points to lowest:
- Primal iOS: 136
- Twitter: 125
- Instagram: 109
- Nostur: 108
- Coracle: 99
- Amethyst: 93
- Snort: 90
- Damus: 87
- Iris: N/A
- Facebook: could not test
My main takeaway was that among all apps (including Twitter and Instagram), the traditional apps win simply by having much better content selection. You get to see a variety of interesting things that Nostr simply can't match. Going forward, this is an area I would probably recommend focusing on: how to encourage people to post more interesting content, onboard creators, etc. Nostr is lacking in content, and I believe this could be the primary reason people are not sticking around after trying it.
Other Nostr Notes:
There seemed to be few interesting topics to follow or stick around for. The experience of joining Nostr doesn't feel special or different in any way compared to X, for example. Twitter has interesting accounts, TikTok has interesting videos; what does Nostr have? The lack of "popular" content due to the generally low number of users is probably to blame. In a way, we suffer from a chicken-and-egg problem: new users are needed to generate more content, and more content is needed to retain new users. Going forward, I think clients should think about ways to encourage users to share content (whether their own, or cross-posted from other platforms). Nostr also does not seem to have any external growth loops. For example, there is no way to invite people to the platform by email with a single click (by accessing the address book). Even if a friend does manage to join and you can find them, they are in no way notified when tagged (as far as I know). People have to be in the habit of opening the app to know if something is happening. Habit formation is important in the early usage phase of any new app, and Nostr has a weak spot here.
You can find all of the detailed scoring, notes for each client and other thoughts in this spreadsheet: https://docs.google.com/spreadsheets/d/14w8-aQ1sHfGBSuNpqvOA9i7PHNSfhn6lUOV6H293caw/edit?usp=sharing
-
@ 3bf0c63f:aefa459d
2024-03-19 14:32:01
Censorship-resistant relay discovery in Nostr
In "Nostr is not decentralized nor censorship-resistant" I said Nostr is centralized. Peter Todd thinks it is centralized by design, but I disagree.
Nostr wasn't designed to be centralized. The idea was always that clients would follow people in the relays they decided to publish to, even if it was a single-user relay hosted in an island in the middle of the Pacific ocean.
But the Nostr explanations never had any guidance about how to do this, and the protocol itself never had any enforcement mechanisms for any of this (because it would be impossible).
My original idea was that clients would use some undefined combination of relay hints in reply tags and the (now defunct) `kind:2` relay-recommendation events, plus some form of manual action ("it looks like Bob is publishing on relay X, do you want to follow him there?") to accomplish this, with the expectation that we would get a better idea of how to properly implement it all with more experience. Branle, my first working client, didn't have any of that implemented; instead it used a stupid static list of relays with a read/write toggle. Although it did publish relay hints, kept track of those internally, and supported `kind:2` events, these things were not really useful.

Gossip was the first client to implement a truly censorship-resistant relay discovery mechanism, using NIP-05 hints (originally proposed by Mike Dilger), relay hints, and `kind:3` relay lists; then, with the simple insight of NIP-65, it got much better. After seeing it in more concrete terms, it became simpler to reason about, and the approach got popularized as the "gossip model", then implemented in clients like Coracle and Snort.

Today, when people mention the "gossip model" (or "outbox model"), they simply think about NIP-65. Which I think is ok, but too restrictive. I still think there is a place for NIP-05 hints, `nprofile` and `nevent` relay hints, and especially relay hints in event tags. All these mechanisms are used together in ZBD Social, for example, and I believe also in the clients listed above.

I don't think we should stop here, though. I think there are other ways, perhaps drastically different ways, to approach content propagation and relay discovery. I think manual action by users is underrated and could go a long way if presented in a nice UX (not conceived by people who think users are dumb animals), and who knows what else. Reliance on third parties, hardcoded values, the social graph, and especially a mix of multiple approaches is what Nostr needs to be censorship-resistant, and what I hope to see in the future.
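As a rough sketch of what such a mix could look like in code, a client might merge relay candidates from all of these discovery sources into a single scored ranking. The source names and weights below are illustrative assumptions, not part of any NIP:

```typescript
// Sketch: merge relay candidates from several discovery sources into a
// scored ranking. Sources that agree reinforce each other.
type Source = "nip65" | "nip05" | "nprofile-hint" | "tag-hint" | "manual";

const WEIGHTS: Record<Source, number> = {
  manual: 10,          // the user explicitly confirmed the association
  nip65: 8,            // the author's own kind 10002 relay list
  nip05: 6,            // relays listed in the author's nostr.json
  "nprofile-hint": 3,  // hints embedded in nprofile/nevent codes
  "tag-hint": 2,       // third-party hints in event tags are weakest
};

function rankRelays(candidates: Array<{ url: string; source: Source }>): string[] {
  const scores = new Map<string, number>();
  for (const { url, source } of candidates) {
    scores.set(url, (scores.get(url) ?? 0) + WEIGHTS[source]);
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([url]) => url);
}

// Two weaker sources agreeing on the same relay can outrank a single strong one.
console.log(rankRelays([
  { url: "wss://pyramid.fiatjaf.com", source: "nip65" },
  { url: "wss://nos.lol", source: "nprofile-hint" },
  { url: "wss://nos.lol", source: "nip05" },
]));
```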
-
@ 3bf0c63f:aefa459d
2024-03-06 13:04:06
início

"Do you see? Do you see the story? Do you see anything? It seems to me I am trying to tell you a dream -- making a vain attempt, because no relation of a dream can convey the dream-sensation, that commingling of absurdity, surprise, and bewilderment in a tremor of struggling revolt, that notion of being captured by the incredible which is of the very essence of dreams..."

He was silent for a few moments.

"... No, it is impossible; it is impossible to convey the life-sensation of any given epoch of one's existence -- that which makes its truth, its meaning, its subtle and piercing essence. It is impossible. We live, as we dream -- alone..."

- Books mentioned by Olavo de Carvalho
- Olavo de Carvalho's old homepage
- Bitcoin explained in a correct and intelligible way
- Complaints
-
@ 3bf0c63f:aefa459d
2024-01-29 02:19:25
Nostr: a quick introduction, attempt #1
Nostr doesn't have a material existence; it is not a website or an app. Nostr is just a description of what kind of messages each computer can send to the others and vice versa. It's a very simple thing, but the fact that such a description exists allows different apps to connect to different servers automatically, without people having to talk behind the scenes or sign contracts or anything like that.
When you use a Nostr client, that is what happens: your client will connect to a bunch of servers, called relays, and all these relays will speak the same "language", so your client will be able to publish notes to them all and also download notes from other people.
That's basically what Nostr is: this communication layer between the client you run on your phone or desktop computer and the relay that someone else is running on some server somewhere. There is no central authority dictating who can connect to whom or even anyone who knows for sure where each note is stored.
If you think about it, Nostr is very much like the internet itself: there are millions of websites out there, and basically anyone can run a new one, and there are websites that allow you to store and publish your stuff on them.
The added benefit of Nostr is that this unified "language" that all Nostr clients speak allows them to switch very easily and cleanly between relays. So if one relay decides to ban someone, that person can switch to publishing to other relays and their audience will quickly follow them there. Likewise, it becomes much easier for relays to impose any restrictions they want on their users: no relay has to uphold a moral ground of "absolute free speech". Each relay can decide to delete notes or ban users for no reason, or even store only notes from a preselected set of people, and no one will be entitled to complain about that.
There are some bad things about this design: on Nostr there are no guarantees that relays will have the notes you want to read or that they will store the notes you're sending to them. We can't just assume all relays will have everything — much to the contrary, as Nostr grows more relays will exist and people will tend to publish to a small subset of all the relays, so depending on the decisions each client takes when publishing and when fetching notes, users may see a different set of replies to a note, for example, and be confused.
Another problem with the idea of publishing to multiple servers is that they may be run by all sorts of malicious people who may edit your notes. Since no one wants to see garbage published under their name, Nostr fixes that by requiring notes to have a cryptographic signature. This signature is attached to the note and verified by everybody at all times, which ensures the notes weren't tampered with (if any part of the note is changed, even by a single character, the signature becomes invalid and the note is dropped). The fix is perfect, except for the fact that it introduces the requirement that each user must now hold a 63-character code that starts with "nsec1", which they must not reveal to anyone. Although annoying, this requirement brings another benefit: users can automatically have the same identity in many different contexts and can even use their Nostr identity to log in to non-Nostr websites easily, without having to rely on any third party.
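For the curious, here is a minimal sketch of that verification, assuming the @noble/curves and @noble/hashes packages: per NIP-01, the note's id is the sha256 of a canonical JSON serialization of its fields, and the signature is a BIP-340 Schnorr signature over that id.

```typescript
import { schnorr } from "@noble/curves/secp256k1";
import { sha256 } from "@noble/hashes/sha256";
import { bytesToHex, utf8ToBytes } from "@noble/hashes/utils";

interface NostrEvent {
  id: string; pubkey: string; created_at: number;
  kind: number; tags: string[][]; content: string; sig: string;
}

function verifyNote(event: NostrEvent): boolean {
  // The id must be the sha256 of the canonical NIP-01 serialization...
  const serialized = JSON.stringify([
    0, event.pubkey, event.created_at, event.kind, event.tags, event.content,
  ]);
  const id = bytesToHex(sha256(utf8ToBytes(serialized)));
  if (id !== event.id) return false; // the note was tampered with

  // ...and the sig must be a valid BIP-340 Schnorr signature over that id.
  return schnorr.verify(event.sig, event.id, event.pubkey);
}
```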
To conclude: Nostr is like the internet (or the internet of some decades ago): a little chaotic, but very open. It is better than the internet because it is structured and actions can be automated, but, like on the internet itself, nothing is guaranteed to work at all times, and users may have to do some manual work from time to time to fix things. Plus, there is the cryptographic key stuff, which is painful, but cool.
-
@ 3bf0c63f:aefa459d
2024-01-15 11:15:06
Small problems the State creates for society that are not always remembered
- **transit vouchers (vale-transporte)**: transferring the cost of the employee's commute to a third party encourages the employee to live far from work, since living nearby is usually more expensive and there are no transportation savings to be gained.
- **medical certificates**: the right to miss work with a doctor's note creates a demand for such notes in every situation, replacing free agreement between employer and employee and overloading doctors and health clinics with unnecessary visits from salaried workers with colds.
- **prisons**: with poorly managed money, bureaucracy, and terrible resource allocation -- problems that competing private companies (or even ones facing no competition) would know how to solve much better -- the State runs short of prisons, with the few existing ones crammed far beyond maximum capacity; and so, following the bizarre chain of responsibility that blames the judge who sentenced a criminal for his death in jail, judges stop sentencing criminals to prison and release them onto the streets.
- **courts**: filing lawsuits is free, which makes the work of lawyers proliferate -- lawyers dedicated to creating judicial problems where none are necessary and to clogging the courts, preventing them from doing what they most ought to do.
- **courts**: since the courts obey only the law and ignore personal agreements, written or not, people don't make agreements; they always resort to state justice and clog it with matters that would be much better resolved between neighbors.
- **civil laws**: the laws created by legislators ignore society's customs and are an incentive for people not to respect or create social norms -- which would be faster, cheaper, and more satisfying ways of solving problems.
- **traffic laws**: the more traffic laws there are, the more enforcement duties are delegated to police officers, who consequently stop fighting crime (after all, they don't really want to risk their lives fighting crime, and enforcement is an excellent excuse to dodge that responsibility).
- **education financing**: a kind of subsidy to private colleges that causes course after course to be created, each containing less and less useful knowledge or technique and becoming more and more useless.
- **heritage-listing laws**: an incentive for the owner of any "historic" area or building to destroy every vestige of history in it before the authorities find out -- which might not happen if he could, for example, use, display, and benefit from the history of that place without running the risk of actually losing his property.
- **urban zoning**: makes cities more spread out, creating a gigantic need for cars, buses, and other means of transportation to move people between residential zones and work zones.
- **urban zoning**: makes people lose hours in traffic every day, which is not only a waste but an assault on their health, which would be much better served by a daily walk between home and work.
- **urban zoning**: makes streets and homes less safe by creating enormous zones, both residential and industrial, with no foot traffic at all.
- **mandatory schooling + national school curriculum**: dumbs down all children.
- **child labor laws**: take away from children the opportunity to learn useful trades and earn some money to help their families.
- **public procurement**: since market criteria for deciding who the best service provider is don't exist, committees of people are created to decide things. This gives the service providers competing in a bid an incentive to try to buy the members of those committees. Corruption aside, this creates real problems: (i) the chosen services end up being the worst possible, since the winning company is clearly more dedicated to buying committees than to doing a good job (this problem affects so many areas, from road construction to the quality of school lunches, that it is impossible to list them here); (ii) in the long run, the corrupting process ends up eliminating the companies that actually delivered, leaving only the corrupt ones to compete, and quality tends to deteriorate progressively.
- **cartels**: the State generally creates, and then becomes hostage to, various interest groups. The case of taxi drivers against Uber is the fashionable one today (and one that shows how States behave the same way the world over).
- **fines**: when an individual or company commits financial fraud or causes unintentional material damage, the victims are the people who suffered the damage or lost money, but the State always has laws providing fines for those responsible. State justice is always very strict and quick in applying these fines, but lax and vague when it comes to compensating the victims. What generally happens is that the State levies an enormous fine on the party responsible for the harm, stripping them of the resources they had available to compensate the victims, and then withdraws from the case, leaving the victims helpless.
- **expropriation**: the State can take any property from any person in exchange for compensation that is necessarily lower than the property's value to its current owner (otherwise the owner would have sold it voluntarily).
- **unemployment insurance**: if there is, for example, a minimum period of 1 year of employment before someone is entitled to unemployment benefits, this encourages workers to plan on staying only 1 year in each job (a year to be followed by a period of paid unemployment), killing every possibility of learning, of gaining experience at that specific company, or of climbing the hierarchy.
- **social security**: state pensions have every calculation defect in the world, and it hardly matters that they are a horrible way to save money, because they come with bizarre longevity guarantees provided by the State, besides being compulsory. This serves to plant in the public imagination the idea of retirement, a magical time when every day will be a weekend. The idea of retirement leads people to worry not about having a job that makes sense, but about having any job at all, one that allows them to retire.
- **impossible regulation**: thousands of things are prohibited; there are regulations covering the most minute aspects of every enterprise, construction, or space. If all these regulations were enforced, production would be impossible and everyone would die; therefore, they are not enforced. However, the State, or an individual agent wielding state power, can, if it wishes, enforce all of them against a citizen it deems an enemy. Anyone can live their whole life without complying with even 10% of state regulations, but they will also live all that time in fear of becoming a target of enforcement, in a state of psychological terror.
- **perversion of criteria**: for many things about which society would spontaneously arrive at a "reasonable" value or behavior, the State dictates rules. These rules are often not mandatory; they are more like "suggestions" or limits, such as the minimum wage or the 44-hour work week. Society, however, starts to treat these values as the norm. Job offers that depart from the 44-hour-week rule, for example, are rare.
- **inflation**: raising prices is difficult and embarrassing for companies; asking for a raise is difficult and embarrassing for employees. Inflation forces people to do both, but the adjustment is not automatic, as some economists may think (while certain others are quite pleased that the process is slow and difficult).
- **inflation**: inflation destroys people's ability to compare competitors' prices using their own memory.
- **inflation**: inflation destroys companies' profit/loss calculations and enormously harms the business decisions that would be based on them.
- **inflation**: inflation redistributes wealth from the poorest and those furthest from the financial system to the richest, the banks, and the mega-corporations.
- **inflation**: inflation stimulates indebtedness and consumerism.
- **garbage**: by providing garbage collection and storage "free for everyone", the State incentivizes the creation of garbage. If they had to pay to have their garbage collected, people (and consequently companies) would try harder to produce things using less plastic, less packaging, and fewer bags.
- **laws against financial crimes**: by creating legislation to hinder criminals' access to the financial system, the difficulty and cost of accessing that same system for honest people grow absurdly, leaving an enormous share of people unable to use it, to everyone's detriment -- and in the end, the big criminals still manage to circumvent it all.
-
@ 3bf0c63f:aefa459d
2024-01-14 14:52:16
bitcoind decentralization

It is better to have multiple curator teams, with different vetting processes and release schedules for `bitcoind`, than a single one.

"More eyes on code", "Contribute to Core", "Everybody should audit the code".
All these points repeated again and again fell to Earth on the day it was discovered that Bitcoin Core developers merged a variable name change from "blacklist" to "blocklist" without even discussing or acknowledging the fact that that innocent pull request opened by a sybil account was a social attack.
After a great many people expressed their dissatisfaction with that event on Twitter and on GitHub, most Core developers simply ignored everybody's concerns or even personally attacked the people who were complaining.
The event has shown that:
1) Bitcoin Core ultimately rests in the hands of a couple of maintainers, and they decide what goes into the GitHub repository[^pr-merged-very-quickly] and the binary releases that will be downloaded by thousands;
2) Bitcoin Core is susceptible to social attacks;
3) "more eyes on code" doesn't matter, as these extra eyes can be ignored and dismissed.
Solution: `bitcoind` decentralization

If usage were spread across 10 different `bitcoind` flavors, the network would be much more resistant to social attacks against a single team.
Multiple teams, each with their own release process, their own logo, some subtle changes, or perhaps no changes at all, just a different name for their `bitcoind` flavor, and that's it.

Every day, week, month, or year, each flavor merges all changes from Bitcoin Core into their own fork. If there's anything suspicious or too leftist (or perhaps too rightist, in case there's a leftist `bitcoind` flavor), maybe they will spot it and decline to merge it.
Run Bitcoin Knots
The first example of `bitcoind` software that follows Bitcoin Core closely, adds some small changes, but has an independent vetting and release process is Bitcoin Knots, maintained by the incorruptible Luke DashJr.

Next time you decide to run `bitcoind`, run Bitcoin Knots instead and contribute to `bitcoind` decentralization!
See also:
[^pr-merged-very-quickly]: See PR 20624, for example, a very complicated change that could be introducing bugs or be a deliberate attack, merged in 3 days without time for discussion.
-
@ 97c70a44:ad98e322
2024-03-23 04:34:58
The last few days on developer nostr have involved quite a kerfuffle over the gossip model, blastr, banning jack, and many related misunderstandings. This post is an attempt to lay out my thoughts on the matter in an organized and hopefully helpful way.
What's wrong with gossip?
It all started with a post from jack asking why more devs haven't implemented the gossip model. There are many answers to this question, not least having to do with there being two standards for user relay selections, and ongoing changes to NIP 65. But I don't want to talk about compatibility here.
nevent1qydhwumn8ghj7argv4nx7un9wd6zumn0wd68yvfwvdhk6tcprfmhxue69uhhq7tjv9kkjepwve5kzar2v9nzucm0d5hszymhwden5te0wfjkccte9enrw73wd9hj7qpq2uf488j3uy084kpsn594xcef9g9x3lplx4xnglf0xwghyw2n3tfqqnrm02
Mazin responded with some numbers which estimate how many connections the gossip model requires. Too many connections can become expensive for low-power clients like mobile phones, not to mention some privacy issues stemming from nosy relays.
nevent1qyd8wumn8ghj7urewfsk66ty9enxjct5dfskvtnrdakj7qgewaehxw309amk2mrrdakk2tnwdaehgu3wwa5kuef0qyghwumn8ghj7mn0wd68ytnhd9hx2tcqyp2xzsjktypudzmygplljkupmuyadzzr6rkgnvx9e0fx3zwhdm0vkz4ceg7
I have some minor disagreements with Mazin's numbers, but I basically agree with his point — a purist gossip model, where a large proportion of nostr users run their own relays, results in a high number of connections to different relays. I brought this question up late last year in my interview with Mike Dilger and in a conversation with fiatjaf, who convinced me that in practice this doesn't matter — enough people will use a handful of larger hubs that there will be a good amount of overlap in relay selections between most pubkeys.
To articulate this more clearly: the goal is not "personal web nodes", which is a pipe dream the Farcasters and BlueSkys (BlueSkies?) of the world aim at, but a more pragmatic mix between large hubs and smaller purpose-built relays. These small relays might be outlets for large publishers, small groups, or nerds who also run their own SMTP servers and lightning nodes.
The point of the gossip model is that these small nodes remain possible to run and discoverable from the rest of the network, so that we can preserve the censorship-resistant qualities of nostr that brought us here in the first place.
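To see how the hub overlap keeps connection counts manageable, consider this sketch of a greedy relay-selection pass a gossip client could run (an illustrative algorithm, not lifted from any particular client): given each followed pubkey's advertised outbox relays, repeatedly pick the relay that covers the most still-unserved follows.

```typescript
// Sketch: choose a small set of relays covering all followed pubkeys.
// `outboxes` maps each followed pubkey to its advertised write relays
// (e.g. parsed from kind 10002 events).
function selectRelays(
  outboxes: Map<string, string[]>,
  maxRelays = 10
): Map<string, string[]> {
  const uncovered = new Set(outboxes.keys());
  const plan = new Map<string, string[]>(); // relay url -> pubkeys to request there

  while (uncovered.size > 0 && plan.size < maxRelays) {
    // Count how many still-uncovered follows each relay would serve
    const coverage = new Map<string, string[]>();
    for (const pubkey of uncovered) {
      for (const relay of outboxes.get(pubkey) ?? []) {
        coverage.set(relay, [...(coverage.get(relay) ?? []), pubkey]);
      }
    }
    if (coverage.size === 0) break; // remaining follows advertise no relays

    // Greedily take the relay with the best coverage
    const [best, covered] = [...coverage.entries()]
      .sort((a, b) => b[1].length - a[1].length)[0];
    plan.set(best, covered);
    covered.forEach((pk) => uncovered.delete(pk));
  }
  return plan;
}
```

Because most follows share a few hubs, the first few picks usually cover the bulk of them, and the long tail of single-user relays only adds a handful of extra connections.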
Blast It!
It's no secret that I've long been a critic of Mutiny's blastr relay implementation. My main objection is that the blastr approach doesn't account for the hard limits involved in scaling smaller relays. If the goal is to cross-pollinate notes across all relays in the network, all relays will require the same size database, and contain all notes in the network. This works right now (sort of), but as the network grows, the relays running on a $5 VPS are going to have their disks fill up and will inevitably fall over.
nevent1qyvhwumn8ghj76r0v3kxymmy9ehx7um5wgcjucm0d5hszxnhwden5te0wpuhyctdd9jzuenfv96x5ctx9e3k7mf0qythwumn8ghj7un9d3shjtnwdaehgu3wvfskuep0qqs07jr9qx49h53nhw76u7c3up2s72k7le2zj94h5fugmcgtyde4j9qfrnwxj
Not only that, but the content breakdown on any given relay by default becomes an undifferentiated soup of "GM", Chinese notes, bots, bitcoin memes, and porn. Blastr makes it impossible to run an interesting relay without implementing write policies.
Which is actually fine! Because that's always been true — servers that allow anonymous uploads always get abused. Tony is just helpfully pointing out to us that this is no less true of nostr relays. I only wish he could have waited a little longer before mounting his attack on the network, because lots of hobbyists are interested in running interesting relays, but the tools don't yet exist to protect those servers from unsolicited notes.
One other note on blastr — Tony at one point described blastr as a relay proxy. This is an interesting perspective, which puts things in a different light. More on proxies later.
Ban Jack?
Here's a thought experiment: how might we actually "ban blastr"? @Pablof7z suggested to me in a conversation that you could configure your relay to check every note that gets published to your relay against the big nostr hubs, and if it exists on any of them to simply delete it. Of course, that would result in your relay being basically empty, and the hubs having all of your content. That's game theory for you I guess.
Another approach that was floated was to encourage users to only publish to small relays. In theory, this would force clients to implement gossip so users could still see the content they were subscribed to. Fiatjaf even posted two identical notes, one to his personal relay, and one to a hub to see which would get more engagement. The note posted to the mainstream relay got 10x more replies and likes than the more obscure note.
nostr:nevent1qyd8wumn8ghj7urewfsk66ty9enxjct5dfskvtnrdakj7qgmwaehxw309aex2mrp0yh8wetnw3jhymnzw33jucm0d5hszymhwden5te0wp6hyurvv4cxzeewv4ej7qpqdc2drrmdmlkcyna5kkcv8yls4f8zaj82jjl00xrh2tmmhw3ejsmsmp945r
Of course, this is thwarted by blastr, since blastr not only replicates notes posted to it, it also actively crawls the network as well. So the next logical step in this train of thought would be for hubs to encourage people to use small relays by actively blocking high-profile accounts.
nostr:nevent1qydhwumn8ghj7argv4nx7un9wd6zumn0wd68yvfwvdhk6tcpzdmhxue69uhhyetvv9ujue3h0ghxjme0qyd8wumn8ghj7urewfsk66ty9enxjct5dfskvtnrdakj7qpqpjhnn69lej55kde9l64jgmdkx2ngy2yk87trgjuzdte2skkwwnhqv5esfq
This would of course never happen (Damus is one client that hasn't implemented NIP 65, and they also run the biggest relay), but it was a fun thought experiment. At any rate, the silliness of the suggestion didn't stop certain people from getting offended that we would "disrupt the free market" by "forcing" our opinions on everyone else. Oh well.
Death to Blastr
In reality, even though blastr makes it a little harder to adopt gossip in the short term, its days are numbered. Eventually, relay operators will start to feel the pain of unsolicited notes, and will either shut their relays down or look for tools that will help them curate the content they host.
From my perspective, these tools take two forms — read protection and write protection. This is something I alluded to in my talk at Nostrasia last November.
Write protection is straightforward — already many relays have access control lists based on active subscriptions, invite codes, or just static whitelists that determine who is allowed to post to a given relay, or what event authors are represented there. This approach effectively prevents blastr from using relays as free storage, which is a huge improvement.
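A minimal allowlist-based write policy could look like the sketch below; the handler shape is hypothetical, since real relay software wires such policies in via plugins or configuration files, but the check itself is this simple.

```typescript
// Sketch: reject EVENTs from authors who are not on this relay's allowlist.
const allowedAuthors = new Set<string>([
  // hex pubkeys of subscribers, invitees, or whitelisted event authors
]);

// Returns a NIP-01 OK message: ["OK", <event id>, <accepted>, <reason>]
function onIncomingEvent(event: { id: string; pubkey: string }) {
  if (!allowedAuthors.has(event.pubkey)) {
    return ["OK", event.id, false, "blocked: author is not on this relay's allowlist"];
  }
  return ["OK", event.id, true, ""];
}
```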
Read protection is more tricky, because anything publicly readable will be scraped by blastr and replicated to unauthenticated-write relays across the network. In most cases, this is ok, but there are use cases for relays to exist that host a unique collection of notes oriented around some organizing principle. Unfortunately, with blastr in action (or any scraper that might exist), the only way to do this is to actively protect proprietary content. There are a few approaches that can work to make this happen:
- IP-based access control lists
- AUTH-based access control lists
- Stripping signatures when serving events
- Storing and serving encrypted content
Each of these approaches has its own set of trade-offs. But depending on use case, any of them or a combination of them could work to allow relay operators to carve out their own piece of the nostr-verse. In fact, this is a big part of what Coracle is about — the white-labeled version of the product confines certain notes to proprietary relays, with optional encrypted group support.
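As one concrete illustration, the signature-stripping approach could look like the following sketch (the function shape is hypothetical): anonymous readers still get the content, but without the `sig` field a scraped event fails verification everywhere else and cannot be replicated as a valid signed note.

```typescript
// Sketch: serve full events only to authenticated members (e.g. via NIP-42
// AUTH); strip the signature for everyone else so scraped copies are useless.
function serveEvent(event: Record<string, unknown>, isAuthedMember: boolean) {
  if (isAuthedMember) return event;
  const { sig, ...withoutSig } = event; // drop the proof of authenticity
  return withoutSig;
}
```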
Enough of my polemic against blastr. Let's talk about how to make the gossip model actually work.
Hints are pointless
Right now, clients that implement the gossip model rely pretty heavily on relay hints to find related notes — whether user profiles, reply parents, or community definitions. The problem with hints is that they are prone to link rot. Many of the relays that were set up a year ago when nostr took off are no longer online, and yet they persist in user relay lists, and in relay hints. These hints can't be updated — they are set in stone. What this means is that a different mechanism has to be used to find the notes the hints were supposed to help locate.
Because of this, I've come around to the position that hints are basically pointless. They are fine as a stopgap, and might be appropriate for certain obscure and ill-defined use cases where relay urls are the most durable address type available. But they provide basically no value in supporting the long-term robustness of the network.
What are durable, however, are pubkeys. Pubkeys are available pretty much everywhere, except in event id hints — and there is a proposal in the works to add a pubkey to those too. The cool thing about pubkeys as hints is that once you have a pubkey, all you need to do is find that person's kind 10002 inbox/outbox selections, and you should be able to find any note they have published.
This goes with the caveat that when users change their relay selections, or rotate their key, they (or their relays) should be sure to copy their notes to the new relay/pubkey.
The question then is: how do I find a given pubkey's relay selections?
There are already several mechanisms that make this reasonably easy. First of all, NIP 65 explicitly recommends publishing relay selections to a wide range of relays. This is a place where the blastr approach is appropriate. As a result, relay selections are usually available on the most popular public relays. Then there are special purpose relays like purplepag.es, which actively seek out these notes and index them.
These indexes are not confined to relays either. It would be trivial to create a DVM that you could ask for a pubkey's relay selections, optionally for a fee. Alex Gleason's proxy tag could also be used to indicate indexes that exist outside the nostr network — whether that be torrents, DHT keys, or what have you.
The best part is that this doesn't negatively impact the decentralization of the network because in principle these indexes are stateless — in other words, they're easily derived from the state of the public part of the nostr network.
Just do it for me
Looping back to where we started — the complexity and technical challenges of implementing the gossip model — there is a simple solution that many people have experimented with in different ways that could solve both issues at once: proxies.
As I mentioned above, Tony thinks of blastr as a proxy, and he's right. More specifically, it's a write-proxy. This is only part of its functionality (it also acts as an independent agent which crawls the network. EDIT: apparently this is not true!), but it is an essential part of how people use it.
Another kind of proxy is a read proxy. There are several implementations of these, including my own multiplextr proxy, which is gossip-compatible (although it requires a wrapper protocol for use). The advantage of a proxy like this is that it can reduce the number of connections a client has to open, and the number of duplicate events it has to download.
Proxies can do all kinds of fancy things in the background too, like managing the gossip model on behalf of the client, building an index of everything the user would be likely to ask for in advance to speed up response times, and more.
One interesting possibility is that a NIP 46 signer could double as a proxy, reducing the number of round trips needed. And since a signer already has access to your private key, this kind of proxy would not result in an escalation in permissions necessary for the proxy to work.
It's simple
The number of cool and creative solutions to the content replication and indexing problem is huge, and certainly doesn't end with blastr. Just to summarize the next steps I'm excited to see (to be honest, I want to build them myself, but we all know how that goes):
- More clients supporting gossip
- Gossip implementations maturing (Coracle's still has some issues that need to be worked out)
- A shift from relying on relay hints to relying on pubkey hints + relay selection indexes of some kind
- Proxy/signer combos which can take on some of the heavy lifting for clients of delivering events to the right inboxes, and pulling events from the right outboxes
Let's get building!
-
@ ee11a5df:b76c4e49
2024-03-22 23:49:09
Implementing The Gossip Model
version 2 (2024-03-23)
Introduction
History
The gossip model is a general concept that allows clients to dynamically follow the content of people, without specifying which relay. The clients have to figure out where each person puts their content.
Before NIP-65, the gossip client did this in multiple ways:
- Checking kind-3 contents, which had relay lists for configuring some clients (originally Astral and Damus), and recognizing that wherever they were writing our client could read from.
- NIP-05 specifying a list of relays in the `nostr.json` file. I added this to NIP-35, which got merged down into NIP-05.
- Recommended relay URLs that are found in 'p' tags
- Users manually making the association
- History of where events happen to have been found. Whenever an event came in, we associated the author with the relay.
Each of these associations was given a score (recommended relay urls are 3rd-party info, so they got a low score).
Later, NIP-65 made a new kind of relay list where someone could advertise to others which relays they use. The flag "write" is now called an OUTBOX, and the flag "read" is now called an INBOX.
The idea of inboxes came about during the development of NIP-65. They are a way to send an event to a person to make sure they get it... because putting it on your own OUTBOX doesn't guarantee they will read it -- they may not follow you.
The outbox model is the use of NIP-65. It is a subset of the gossip model, which uses every other resource at its disposal.
Rationale
The gossip model keeps nostr decentralized. If all the (major) clients were using it, people could spin up small relays for both INBOX and OUTBOX and still be fully connected, have their posts read, and get replies and DMs. This is not to say that many people should spin up small relays. But the task of being decentralized necessitates that people must be able to spin up their own relay in case everybody else is censoring them. We must make it possible. In reality, congregating around 30 or so popular relays as we do today is not a problem. Not until somebody becomes very unpopular with bitcoiners (it will probably be a shitcoiner): that person will then need to leave those popular relays, and they shouldn't lose their followers or connectivity in any way when they do.
A lot more rationale has been discussed elsewhere and right now I want to move on to implementation advice.
Implementation Advice
Read NIP-65
NIP-65 will contain great advice on which relays to consult for which purposes. This post does not supersede NIP-65. NIP-65 may be getting some smallish changes, mostly the addition of a private inbox for DMs, but also changes to whether you should read or write to just some or all of a set of relays.
How often to fetch kind-10002 relay lists for someone
This is up to you. Refreshing them every hour seems reasonable to me. Keeping track of when you last checked so you can check again every hour is a good idea.
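A minimal sketch of that bookkeeping, assuming an in-memory client (persistence is up to you):

```python
import time

REFRESH_SECS = 3600  # re-check each pubkey's kind 10002 hourly

last_checked: dict[str, float] = {}  # pubkey -> unix time of last fetch

def needs_refresh(pubkey: str) -> bool:
    """True if we never fetched this pubkey's relay list,
    or our cached copy is at least an hour old."""
    return time.time() - last_checked.get(pubkey, 0.0) >= REFRESH_SECS

def mark_checked(pubkey: str) -> None:
    last_checked[pubkey] = time.time()
```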
Where to fetch events from
If your user follows another user (call them jack), then you should fetch jack's events from jack's OUTBOX relays. I think it's a good idea to use 2 of those relays. If one of those choices fails (errors), then keep trying until you get 2 of them that worked. This gives some redundancy in case one of them is censoring. You can bump that number up to 3 or 4, but more than that is probably just wasting bandwidth.
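One way to implement that retry loop, with `try_connect` standing in for whatever connection routine your client has (hypothetical here):

```python
import random

def choose_outboxes(relays: list[str], try_connect, want: int = 2) -> list:
    """Try jack's OUTBOX relays in random order until `want` of them
    connect successfully; relays that error are skipped."""
    picked = []
    candidates = relays[:]
    random.shuffle(candidates)
    for url in candidates:
        try:
            picked.append(try_connect(url))
        except Exception:
            continue  # this relay errored; keep trying the others
        if len(picked) == want:
            break
    return picked
```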
To find events tagging your user, look in your user's INBOX relays for those. In this case, look into all of them because some clients will only write to some of them (even though that is no longer advised).
Picking relays dynamically
Since your user follows many other users, it is very useful to find a small subset of all of their OUTBOX relays that covers everybody followed. I wrote some code to do this (it is used by gossip) that you can look at for an example.
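The gist of such code is a greedy set cover. Here is a rough sketch (not the actual gossip implementation), assuming you already have everyone's OUTBOX lists:

```python
def pick_relays(outboxes: dict[str, set[str]], redundancy: int = 2) -> dict[str, set[str]]:
    """Greedily pick a small set of relays covering every followed
    pubkey `redundancy` times. `outboxes` maps pubkey -> OUTBOX urls;
    its sets are consumed (copy them if you need them afterwards)."""
    need = {pk: min(redundancy, len(outboxes[pk])) for pk in outboxes}
    assignment: dict[str, set[str]] = {}
    while True:
        # count how many still-needy pubkeys each relay would serve
        coverage: dict[str, set[str]] = {}
        for pk, relays in outboxes.items():
            if need[pk] > 0:
                for url in relays:
                    coverage.setdefault(url, set()).add(pk)
        if not coverage:
            break  # everyone is covered, or no candidate relays remain
        best = max(coverage, key=lambda url: len(coverage[url]))
        assignment[best] = coverage[best]
        for pk in coverage[best]:
            need[pk] -= 1
            outboxes[pk].discard(best)
    return assignment
```

For a real client you would also skip relays known to be dead and bound the total number of picks.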
Where to post events to
Post all events (except DMs) to all of your user's OUTBOX relays. Also post the events to all the INBOX relays of anybody that was tagged or mentioned in the contents in a nostr bech32 link (if desired). That way all these mentioned people are aware of the reply (or quote or repost).
DMs should be posted only to INBOX relays (in the future, to PRIVATE INBOX relays). You should post it to your own INBOX relays also, because you'll want a record of the conversation. In this way, you can see all your DMs inbound and outbound at your INBOX relay.
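Putting those two rules together, a sketch of the routing decision might look like:

```python
def publish_targets(is_dm: bool, my_outbox: set[str], my_inbox: set[str],
                    tagged_inboxes: set[str]) -> set[str]:
    """Relays an event should be posted to, per the rules above."""
    if is_dm:
        # DMs: the recipients' INBOX relays, plus our own INBOX
        # so we keep a record of the conversation.
        return tagged_inboxes | my_inbox
    # Everything else: our OUTBOX relays, plus the INBOX relays of
    # everyone tagged or mentioned, so they see the reply.
    return my_outbox | tagged_inboxes
```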
Where to publish your user's kind-10002 event to
This event was designed to be small and not require moderation, plus it is replaceable so there is only one per user. For this reason, at the moment, just spread it around to lots of relays especially the most popular relays.
For example, the gossip client automatically determines which relays to publish to based on whether they seem to be working (several hundred) and does so in batches of 10.
How to find replies
If all clients used the gossip model, you could find all the replies to any post in the author's INBOX relays, by querying for any event with an 'e' tag tagging the event you want replies to... because gossip model clients will publish them there.
But given the non-gossip-model clients, you should also look where the event was seen and look on those relays too.
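In NIP-01 filter terms, that lookup could be sketched like this:

```python
def reply_query(event_id: str, author_inboxes: set[str], seen_on: set[str]):
    """Relays to ask, and the filter to send, when hunting for replies:
    the author's INBOX relays plus wherever the event itself was seen."""
    relays = author_inboxes | seen_on
    filter_ = {"kinds": [1], "#e": [event_id]}
    return relays, filter_
```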
Clobbering issues
Please read your user's kind 10002 event before clobbering it. You should look in many places to make sure you didn't miss the newest one.
If the old relay list had tags you don't understand (e.g. neither "read" nor "write"), then preserve them.
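A sketch of that preservation step (the tag layout follows NIP-65's 'r' tags; markers other than "read"/"write" are passed through untouched):

```python
KNOWN_MARKERS = {"read", "write"}

def merge_relay_tags(old_tags: list[list[str]],
                     new_tags: list[list[str]]) -> list[list[str]]:
    """Carry over any old 'r' tags whose marker we don't understand,
    so republishing kind 10002 doesn't clobber them."""
    preserved = [
        tag for tag in old_tags
        if tag and tag[0] == "r" and len(tag) > 2
        and tag[2] not in KNOWN_MARKERS
    ]
    return new_tags + preserved
```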
How users should pick relays
Today, nostr relays are not uniform. They have all kinds of different rule-sets and purposes. We severely lack a way to advise non-technical users as to which relays make good OUTBOX relays and which ones make good INBOX relays. But you are a dev, you can figure that out pretty well. For example, INBOX relays must accept notes from anyone, meaning they can't be paid-subscription relays.
Bandwidth isn't a big issue
The outbox model doesn't require excessive bandwidth when done right. You shouldn't be downloading the same note many times... only 2-4 times depending on the level of redundancy your user wants.
Downloading 1000 events from 100 relays is in theory the same amount of data as downloading 1000 events from 1 relay.
But in practice, due to redundancy concerns, you will end up downloading 2000-3000 events from those 100 relays instead of just the 1000 you would in a single relay situation. Remember, per person followed, you will only ask for their events from 2-4 relays, not from all 100 relays!!!
Also in practice, the cost of opening and maintaining 100 network connections is more than the cost of opening and maintaining just 1. But this isn't usually a big deal unless...
Crypto overhead on Low-Power Clients
Verifying Schnorr signatures in the secp256k1 cryptosystem is not cheap. Setting up SSL key exchange is not cheap either. But most clients will do a lot more event signature validations than they will SSL setups.
For this reason, connecting to 50-100 relays is NOT hugely expensive for clients that are already verifying event signatures, as the number of events far surpasses the number of relay connections.
But for low-power clients that can't do event signature verification, there is a case for them not doing a lot of SSL setups either. Those clients would benefit from a different architecture, where half of the client was on a more powerful machine acting as a proxy for the low-power half of the client. These halves need to trust each other, so perhaps this isn't a good architecture for a business relationship, but I don't know what else to say about the low-power client situation.
Unsafe relays
Some people complain that the outbox model directs their client to relays that their user has not approved. I don't think it is a big deal, as such users can use VPNs or Tor if they need privacy. But for such users that still have concerns, they may wish to use clients that give them control over this. As a client developer you can choose whether to offer this feature or not.
The gossip client allows users to require whitelisting for connecting to new relays and for AUTHing to relays.
See Also
-
@ 42342239:1d80db24
2024-03-21 09:49:01
It has become increasingly evident that our financial system has started to undermine our constitutionally guaranteed freedoms and rights. Payment giants like PayPal, Mastercard, and Visa sometimes block the ability to donate money. Individuals, companies, and associations lose bank accounts — or struggle to open new ones. In bank offices, people nowadays risk undergoing something resembling a cross-examination. The regulations are becoming so cumbersome that their mere presence risks tarnishing the banks' reputation.
The rules are so complex that even within the same bank, different compliance officers can provide different answers to the same question! There are even departments where some of the compliance officers are reluctant to provide written responses and prefer to answer questions over an unrecorded phone call. Last year's corporate lawyer in Sweden recently complained about troublesome bureaucracy, and that's from the perspective of a very large corporation. We may not even fathom how smaller businesses — the keys to a nation's prosperity — experience it.
Where do all these rules come from?
Where do all these rules come from, and how well do they work? Today's regulations on money laundering (AML) and customer due diligence (KYC - know your customer) primarily originate from a G7 meeting in the summer of 1989. (The G7 comprises the seven advanced economies: the USA, Canada, the UK, Germany, France, Italy, and Japan, along with the EU.) During that meeting, the intergovernmental organization FATF (Financial Action Task Force) was established with the aim of combating organized crime, especially drug trafficking. Since then, its mandate has expanded to include fighting money laundering, terrorist financing, and the financing of the proliferation of weapons of mass destruction(!). One might envisage the rules soon being aimed against proliferation of GPUs (Graphics Processing Units used for AI/ML). FATF, dominated by the USA, provides frameworks and recommendations for countries to follow. Despite its influence, the organization often goes unnoticed. Had you heard of it?
FATF offered countries "a deal they couldn't refuse"
On the advice of the USA and G7 countries, the organization decided to begin grading countries in "blacklists" and "grey lists" in 2000, naming countries that did not comply with its recommendations. The purpose was to apply "pressure" to these countries if they wanted to "retain their position in the global economy." The countries were offered a deal they couldn't refuse, and the number of member countries rapidly increased. Threatening with financial sanctions in this manner has even been referred to as "extraterritorial bullying." Some at the time even argued that the process violated international law.
If your local Financial Supervisory Authority (FSA) were to fail in enforcing compliance with FATF's many checklists among financial institutions, the risk of your country and its banks being barred from the US-dominated financial markets would loom large. This could have disastrous consequences.
A cost-benefit analysis of AML and KYC regulations
Economists use cost-benefit analysis to determine whether an action or a policy is successful. Let's see what such an analysis reveals.
What are the benefits (or revenues) after almost 35 years of more and more rules and regulations? The United Nations Office on Drugs and Crime estimated that only 0.2% of criminal proceeds are confiscated. Other estimates suggest a success rate from such anti-money laundering rules of 0.07% — a rounding error for organized crime. Europol expects to recover 1.2 billion euros annually, equivalent to about 1% of the revenue generated in the European drug market (110 billion euros). However, the percentage may be considerably lower, as the size of the drug market is likely underestimated. Moreover, there are many more "criminal industries" than just the drug trade; human trafficking is one example - there are many more. In other words, criminal organizations retain at least 99%, perhaps even 99.93%, of their profits, despite all cumbersome rules regarding money laundering and customer due diligence.
What constitutes the total cost of this bureaurcratic activity, costs that eventually burden taxpayers and households via higher fees? Within Europe, private financial firms are estimated to spend approximately 144 billion euros on compliance. According to some estimates, the global cost is twice as high, perhaps even eight times as much.
For Europe, the cost may thus be about 120 times (144/1.2) higher than the revenues from these measures. These "compliance costs" bizarrely exceed the total profits from the drug market, as one researcher put it. Even though the calculations are uncertain, it is challenging — perhaps impossible — to legitimize these regulations from a cost-benefit perspective.
But it doesn't end there, unfortunately. The cost of maintaining this compliance circus, with around 80 international organizations, thousands of authorities, far more employees, and all this across hundreds of countries, remains a mystery. But it's unlikely to be cheap.
The purpose of a system is what it does
In Economic Possibilities for our Grandchildren (1930), John Maynard Keynes foresaw that thanks to technological development, we could have had a 15-hour workweek by now. This has clearly not happened. Perhaps jobs have been created that are entirely meaningless? Anthropologist David Graeber argued precisely this in Bullshit Jobs in 2018. In that case, a significant number of people spend their entire working lives performing tasks they suspect deep down don't need to be done.
"The purpose of a system is what it does" is a heuristic coined by Stafford Beer. He observed there is "no point in claiming that the purpose of a system is to do what it constantly fails to do. What the current regulatory regime fails to do is combat criminal organizations. Nor does it seem to prevent banks from laundering money as never before, or from providing banking services to sex-offending traffickers
What the current regulatory regime does do, is: i) create armies of meaningless jobs, ii) thereby undermining mental health as well as economic prosperity, while iii) undermining our freedom and rights.
What does this say about the purpose of the system?
-
@ 20d29810:6fe4ad2f
2024-03-15 20:51:56
-
@ fa984bd7:58018f52
2024-02-28 22:15:25
I have recently launched Wikifreedia, which is a different take on how Wikipedia-style systems can work.
Yes, it's built on nostr, but that's not the most interesting part.
The fascinating aspect is that there is no "official" entry on any topic. Anyone can create or edit any entry and build their own take about what they care about.
Think the entry about Mao is missing something? Go ahead and edit it, you don't need to ask for permission from anyone.
Stuart Bowman put it best on a #SovEng hike:
The path to truth is in the integration of opposites.
Since launching Wikifreedia less than a week ago, quite a few people have asked me if it would be possible to import ALL of wikipedia into it.
Yes. Yes it would.
I initially started looking into it to make it happen as I am often quick to jump into action.
But, after thinking about it, I am not convinced importing all of Wikipedia is the way to go.
The magical thing about building an encyclopedia with no canonical entry on any topic is that each individual can bring to light the part of a topic they are most interested in. There can be dozens, hundreds, or perhaps more entries that focus on the edges of a topic.
Whereas Wikipedia, in its Quijotean approach to truth, has focused on the impossible path of seeking neutrality.
Humans can't be neutral, we have biases.
Show me an unbiased human and I'll show you a lifeless human.
Biases are good. Having an opinion is good. Seeking neutrality is seeking to devoid our views and opinions of humanity.
Importing Wikipedia would mean importing a massive amount of colorless trivia, a few interesting tidbits, but, more important than anything, a vast amount of watered-down useless information.
All edges of the truth having been neutered by a democratic process that searches for a single truth via consensus.
"What's the worst that could happen?"
Sure, importing wikipedia would simply be one more entry on each topic.
Yes.
But culture has incredibly strong momentum.
And if the culture that develops in this type of media is that of exclusively watered-down comfortable truths, then some magic could be lost.
If people who are passionate or have a unique perspective about a topic feel like the "right approach" is to use the wikipedia-based article then I would see this as an extremely negative action.
An alternative
An idea we discussed on the #SovEng hike was, what if the wikipedia entry is processed by different "AI agents" with different perspectives.
Perhaps instead of blankly importing the "Napoleon" article, an LLM trained to behave as an 1850s Russian peasant could be asked to write a wiki about Napoleon. And then an agent trained to behave like Margaret Thatcher could write one.
Etc, etc.
Embrace the chaos. Embrace the bias.
-
@ d8a2c33f:76611e0c
2024-02-26 03:10:36
Let's start with definitions:
Cashu - Cashu is a free and open-source Chaumian ecash system built for Bitcoin. Cashu offers near-perfect privacy for users of custodial Bitcoin applications. Nobody needs to know who you are, how much funds you have, and who you transact with. - more info here https://cashu.space/
Cashu-address - a protocol that runs on top of cashu mints. More info here: https://docs.cashu-address.com/
Npub.cash - service that runs on top of cashu-address protocol. Let's you use your npub@npub.cash as a lightning address to receive zaps or incoming sats. More info here https://npub.cash/
Pay to public key - Like bitcoin, cashu tokens are bearer assets and can be locked to a public key. This means only the person who has the private key can use that token.
What do you use it for? - Use your npub.cash address as your lightning address replacement.
Why? I believe everyone knows what a lightning address is. A lightning address is usually provided by your wallet provider, who also runs their own lightning node. E.g. if you have an @alby.com lightning address, then you already have an account set up with alby and are using their lightning node to get your sats. With npub.cash you can simply put your npub@npub.cash without registering first or signing up. And you can keep this forever, even while changing your lightning wallet from alby to any other provider or running your own lightning node.
Npub.cash locks the received tokens to your nostr public key (still work in progress) so that only the owner of the Nostr public key, who has the private key, can claim the token and push it to their wallet of choice.
Benefits Over Traditional Custodial Lightning Addresses:
Privacy: With Cashu-Address, your financial activities are not visible to your custodian. This added layer of privacy ensures that your transactions remain your own.
Flexibility in Custodian Selection: Unlike traditional Lightning Addresses, Cashu-Address allows you to choose your custodian freely. If your needs change, you can switch your custodian anytime without hassle.
No User Exclusion: Custodians cannot exclude individual users. Because mints do not know which user eCash belongs to, they cannot censor certain users.
How to claim your username@npub.cash - This is what makes it so cool. You can actually get a vanity username for your profile name. E.g. I got mine as starbuilder@npub.cash. Simply visit the npub.cash website and click the get username button. Put in your vanity username and pay 5k sats to get your profile@npub.cash. This is a one-time payment, and you can use this address forever!
Frequently asked Questions:
- What connect string should I use to log in with NIP-46 on iOS? Try setting up an account at https://use.nsec.app/home . This is still WIP. If it does not work, just try the Nostr extension using a laptop browser.
- Is there a notification when I receive sats/zaps? Not yet. We are working on it. However, just go to npub.cash every couple of days to sweep your collected sats to your wallet.
- Which cashu wallet should I use? Use enuts (android & ios) https://www.enuts.cash/ or minibits.cash for android https://www.minibits.cash/
- Will anyone be able to claim my sats? NO. When you start the claim process you need to sign with your Nostr extension signer to claim sats. So only the person controlling the private key to the npub can claim sats.
- I really struggle to understand the flow of what happened concretely. Don't break your head; the devs got you covered. Also, this is super new and everyone is trying to get their heads around it. Just follow instructions, play with it, and ask for help.
- Is it non-custodial? - When we lock your sats to your pubkey we cannot spend it. However locking to pubkey is not enabled until wallets can start supporting them.
- How do I ask for help? - just post #asknpub.cash and post your questions. We will get to you.
- Who are the devs behind this? @calle (Master of Cashu) @egge (Core dev) @starbuilder (dev support)
Here's what everyone is talking about #npub.cash on Nostr
nostr:nevent1qvzqqqqqqypzqx78pgq53vlnzmdr8l3u38eru0n3438lnxqz0mr39wg9e5j0dfq3qythwumn8ghj7um9v9exx6pwdehhxtn5dajxz7f0qqsxcwntv9d342ashe5yvyv0fg40wm873jaszt6d2u0209vqz5gkcdq6avwyr
nostr:nevent1qvzqqqqqqypzq5qzedy85msr57qayz6dz9dlcr5k40mcqtvm5nhyn466qgc6p4kcqythwumn8ghj7un9d3shjtnwdaehgu3wvfskuep0qythwumn8ghj7um9v9exx6pwdehhxtn5dajxz7f0qqsvek7d8v4lddrmj2mynegsnrc4r4gnmswkddm02qzenwuc7x9perctaedre
nostr:nevent1qvzqqqqqqypzpfpqfrt75fhfcd4x0d0lyek9pzyz4zwmudh0vq7vn3njvvngsmpjqyghwumn8ghj7mn0wd68ytnhd9hx2tcpz4mhxue69uhhyetvv9ujuerpd46hxtnfduhsqgyjwfsgr80xmha8x2wwd4klzsxcagmlpk3wsfyyvqnlvzn2rcvnhu534kj5
nostr:nevent1qvzqqqqqqypzp3yw98cykjpvcqw2r7003jrwlqcccpv7p6f4xg63vtcgpunwznq3qyfhwumn8ghj7mmxve3ksctfdch8qatz9uq36amnwvaz7tmwdaehgu3wvf5hgcm0d9hx2u3wwdhkx6tpdshsqg8v5hfgmwangecpejcw22fm4uk2s438p8ahpu5l985wserctr0h6gysle9m
nostr:nevent1qvzqqqqqqypzp6y2dy0f3kvc0jty2gwl7cqztas8qqmc5jrerqxuhw622qnc2pq3qy88wumn8ghj7mn0wvhxcmmv9uq3xamnwvaz7tm0venxx6rpd9hzuur4vghsqgx68ujht0r9qqqp4l27u0x27sg4p5l2ks0xd9kxm9w2vkjhppttlur0et6k
-
-
@ 6871d8df:4a9396c1
2024-02-24 22:42:16
In an era where data seems to be as valuable as currency, the prevailing trend in AI starkly contrasts with the concept of personal data ownership. The explosion of AI and the ensuing race have made it easy to overlook where the data is coming from. The current model, dominated by big tech players, involves collecting vast amounts of user data and selling it to AI companies for training LLMs. Reddit recently penned a 60 million dollar deal, Google guards and mines Youtube, and more are going this direction. But is that their data to sell? Yes, it's on their platforms, but without the users to generate it, what would they monetize? To me, this practice raises significant ethical questions, as it assumes that user data is a commodity that companies can exploit at will.
The heart of the issue lies in the ownership of data. Why, in today's digital age, do we not retain ownership of our data? Why can't our data follow us, under our control, to wherever we want to go? These questions echo the broader sentiment that while some in the tech industry — such as the blockchain-first crypto bros — recognize the importance of data ownership, their "blockchain for everything solutions," to me, fall significantly short in execution.
Reddit further complicates this with its current move to IPO, which, on the heels of the large data deal, might reinforce the mistaken belief that user-generated data is a corporate asset. Others, no doubt, will follow suit. This underscores the urgent need for a paradigm shift towards recognizing and respecting user data as personal property.
In my perfect world, the digital landscape would undergo a revolutionary transformation centered around the empowerment and sovereignty of individual data ownership. Platforms like Twitter, Reddit, Yelp, YouTube, and Stack Overflow, integral to our digital lives, would operate on a fundamentally different premise: user-owned data.
In this envisioned future, data ownership would not just be a concept but a practice, with public and private keys ensuring the authenticity and privacy of individual identities. This model would eliminate the private data silos that currently dominate, where companies profit from selling user data without consent. Instead, data would traverse a decentralized protocol akin to the internet, prioritizing user control and transparency.
The cornerstone of this world would be a meritocratic digital ecosystem. Success for companies would hinge on their ability to leverage user-owned data to deliver unparalleled value rather than their capacity to gatekeep and monetize information. If a company breaks my trust, I can move to a competitor, and my data, connections, and followers will come with me. This shift would herald an era where consent, privacy, and utility define the digital experience, ensuring that the benefits of technology are equitably distributed and aligned with the users' interests and rights.
The conversation needs to shift fundamentally. We must challenge this trajectory and advocate for a future where data ownership and privacy are not just ideals but realities. If we continue on our current path without prioritizing individual data rights, the future of digital privacy and autonomy is bleak. Big tech's dominance allows them to treat user data as a commodity, potentially selling and exploiting it without consent. This imbalance has already led to users being cut off from their digital identities and connections when platforms terminate accounts, underscoring the need for a digital ecosystem that empowers user control over data. Without changing direction, we risk a future where our content — and our freedoms by consequence — are controlled by a few powerful entities, threatening our rights and the democratic essence of the digital realm. We must advocate for a shift towards data ownership by individuals to preserve our digital freedoms and democracy.
-
@ ec965405:63996966
2024-02-15 01:06:05
I am beginning to see the clarity that my mentors promised I would as I progressed through my late 20s into my 30s, and it's getting clearer every day. I am inspired to change the world and bring my community with me. I know God has my back. A better world is within our grasp! I'm going to do my part in bringing my community with me by blogging about my upcoming trip to Cuba with Solidarity Collective via Nostr.
In February I'll be back in the skies headed to Havana, where I will participate in a delegation with Solidarity Collective to learn about Pan Africanism in the Cuban context. Some questions we will be exploring during this delegation are:
How do Cubans, in a Black-majority country, approach environmental protection, religion, housing rights, and healthcare?
What is the role of historic and contemporary abolitionist practices in their quest to eradicate racism?
What challenges remain to build an equitable society, especially under the yoke of 60 years of the u.s. Blockade?
What do these lessons mean for the struggle for black liberation in the u.s.?
I've dreamed about the next time I would visit Cuba and how I would track down the friends I made there in 2017. At that time, the government controlled access to internet via these cards that you would purchase then redeem on your device for timed access. The idea was that you would take your Wi-Fi card and head to a communal place like La Plaza with your device to access the Internet with others.
While some north americans might find that kind of Internet access draconian, surfing the web in public like that made me value my time on the Internet more. Has this changed since I was last there? I am personally interested in how groups are leveraging tech and the Internet for education and organizing. I now have a solid couple of years of IT/programming education to reference while I meet with teachers and journalists at the Martin Luther King Jr. Center and hear about the right to free education from daycare through university and literacy campaigns. I wonder if they've heard about decentralized social media protocols like Nostr or Activitypub or if they ever experienced censorship from the authorities on the Internet.
I recently experienced censorship in the YouTube comments as I explained to fellow web surfers why we must include Vieques and the other islands in the archipelago when talking about Puerto Rico politically. My ability to comment was restricted as I tried to convince others who talked down on Haiti and Cuba as failed states to instead take my Pan Caribbean perspective. I really enjoyed Dread's talk at Nostrasia 2023 about how he is using Bitcoin and Nostr to bring the islands together as the US Dollar and financial institutions like Western Union and the IMF keep us divided and oppressed.
The more I learn about Bitcoin as a tool for global wealth distribution, the more I understand how these institutions rob youth and families of basic necessities and facilitate the rise of authoritarian regimes and systems that punish journalists and activists through political repression. The corporate ownership of our means of internet communication by the likes of technocrats like Musk and Zuckerberg won't let authentic conversation between Caribbean-based diaspora happen on their platforms while they get to destroy countries like Myanmar and shape public discourse to their whim. That's why I'm glad I found Nostr.
My personal blog currently lives on my Uberspace asteroid in a Bludit instance that lacks much functionality outside of themes and data analytics, so it's just sits there as a personal repo for my thoughts. Nostr provides all of this with a direct link to my Bitcoin wallet address and comment functionality. If people value my content, I can get "zapped" and earn money for my content. I can now engage with my audience directly without a middle man. No Substack, no moderators censoring my messages, just community. The job now is to bridge my community and this new way of socializing on the Internet.
To help make this as educational of an experience as possible, I ask my audience: What questions or feedback do you have about my trip and the types of questions I want to explore? Is there anything you've ever wondered about Cuba? What suggestions do you have in terms of how I can better present information; written word, audio interviews, video, or photo essays?
Leave me some love in the comments and stay tuned!
-
@ 6871d8df:4a9396c1
2024-02-05 23:29:22
The Apple Vision Pro was released, and it is the talk of the town right now. To be frank, I think it's a stupid product. It's not useful or particularly enticing to me in its current form factor. It's a toy, not a tool. All the features seem gimmicky as opposed to generally helpful. I'm not saying it may never be worthwhile, but as of this first release, it is only a party trick.
Coincidently, this is the first product that does not have any of Steve Jobs' influence. To me, it shows. I don't think Jobs would have ever let this product see the light of day.
Jobs understood product. He didn't make things for the sake of progress or to make sci-fi reality; he made things because he 'wanted to make a dent in the world.' He wanted to solve problems for everyday people by innovating with cutting-edge technology. He aspired to make people's lives better. Steve Jobs' genius was the way he married cutting-edge technologies with valuable tools that made those cutting-edge technologies simple and elegant.
The Vision Pro may be technically innovative, but it is not a tool, at least in its current form. It may be one day, but that is precisely my point; Jobs would have never released a product where the form factor would hold it back from becoming a tool. At best, it is an intriguing toy that is additive to some content at the behest of being very awkward to wear or be seen in. In my view, mainstream adoption can happen only in a world where we can use the Vision Pro as a contact lens or very small, discreet, minimalist glasses, but certainly not this iteration where it covers more than half your face.
Seeing people's eyes makes us human. So much emotion, understanding, and communication happens with just a look. It is a window into the soul. I don't want to live in a world where we are actively bringing all the negatives about communicating in the digital world to the physical one.
https://image.nostr.build/2365609411f144f5d789ffd684ffce9b4d867626a7bfe11bb311cb0f61057199.jpg
I can't help but wonder or hypothesize what Steve Jobs would focus on if he were still alive today. I think Apple's priorities would be completely different. My gut tells me he would not have let Siri get as bad as it is. Siri is a horrible product; I never use it, and everyone I know who tries to use it does so unsuccessfully, at least most of the time. I personally always default to ChatGPT or Bard. These AI systems make my life dramatically more productive. They are tools in the purest sense.
In my opinion, Steve would not have missed this train. Sure, Apple could wake up and integrate these systems into Siri — if they were smart, they would — but in its current form, it is so far behind that it almost astounds me. My intuition leads me to believe he would be closer to what [Rabbit] is doing.
Who knows? Maybe I am entirely wrong, and Apple just kickstarted VR's mass adoption phase. Unfortunately, I think this will likely be the biggest failure of a physical product that Apple will have seen since Jobs returned ages ago. The Vision Pro is only slightly better than the Oculus, and Facebook has already deprioritized VR for AI. Apple is further behind, and I don't see a world where they do not make the same pivot. There is a skill to creating successful, innovative products, and it makes me sad to see the torch that Jobs passed lose its flame. As someone who respected how Apple used to innovate, watching this decay in real-time is jarring as this is just the latest in a string of disappointing decisions that exemplify that 'peak Apple' is behind us.
-
@ 0d6c8388:46488a33
2024-02-02 01:15:40
I'm just a normal guy that likes Jesus and bitcoin!
Here are some things you should know about me:
Well, that's all for now. Hope you have a good day!
-
@ 6ad3e2a3:c90b7740
2024-01-29 20:45:19
“God does not play dice.”
Albert Einstein
Imagine you’re at the blackjack table, you’re dealt a 10 and a 7 (hard 17), and the dealer shows a 10. This is a bad situation, but the odds say you stand and hope the dealer busts. You are about to do just that, but the drunk guy on your right says, “Hit, bro, I’m feeling a four.”
What are the odds this is good advice?
I’m too lazy to look up the exact odds, but let’s just invent some very rough approximations to illustrate the point.
Assume there’s a 50 percent chance you will lose no matter what you do, i.e., it makes no difference whether you hit or stand. That means there’s a 50 percent chance you win (or push) assuming you make the right choice. Let’s further assume if you stand, there’s a 30 percent chance you win (or push), and if you hit there’s a 20 percent chance you win or push. Again, please don’t quibble about the true odds in this situation, as it’s unimportant for the example.
With these odds, what are the chances hitting on hard 17 against a 10 is good advice?
Let’s make it multiple choice:
A) 20% (That’s your chance to win)
B) 0% (It’s always wrong to hit when your odds of winning are better if you stand)
C) 40% (To the extent your decision matters, it’s 2 out of 5 (20% out of 50%) that hitting will win you the hand.)
In my view, A) is the simple response, and it’s not terrible. It recognizes there’s some chance hitting in that situation gives you the desired outcome even if it’s not the optimal choice. It’s probabilistic thinking, which is the correct approach in games of chance, but just slightly misapplied.
B) is the worst response in my opinion. It’s refusing to apply probabilistic thinking in service of a dogma about probabilistic thinking! “It’s always wrong to hit in that situation” is an absolutist position when of course hitting sometimes yields the desired result. The “but my process was good” adherents love B. It signals their devotion to the rule behind the decision (the process) and avoids addressing the likelihood of whether the decision itself pans out in reality.
C) This is in my view the correct answer. To the extent your decision matters, there’s a 40 percent chance hitting will improve your outcome and a 60 percent chance it will not. I’m not looking for self-help or a remedial grade-school probability class. I don’t need to remind myself of what the best process is — I simply want to know my odds of winning money on this particular hand.
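For the skeptical, here is a quick sketch of that conditional arithmetic, using the invented odds above (not real blackjack numbers):

```python
# The invented odds from the setup, not real blackjack probabilities.
p_outcome_fixed = 0.50   # you lose no matter what you do
p_win_if_stand = 0.30    # unconditional chance standing wins
p_win_if_hit = 0.20      # unconditional chance hitting wins

p_choice_matters = 1 - p_outcome_fixed
# Conditional on the choice mattering, how often is "hit" the right call?
print(p_win_if_hit / p_choice_matters)   # 0.4, i.e. answer C
```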
Let’s say it’s a low-stakes hand, you’re drunk, you take the advice for laughs, the next card is in fact a four, you have 21, the dealer flips over a 10 and has 20. You win, and you would have lost had you stayed. You high-five the drunk. What are the odds now you got good advice?
A) 0% — bad process!
B) 40% — same odds, don’t be result-oriented!
C) 100% — you already won!
The answer is obviously C. The point of the game is to win money, and taking his advice* on that particular hand did just that for you. You are not obligated to use that heuristic ever again. This isn’t a self-improvement seminar about creating better, more sustainable habits. You won the money, and now that you know what the next card actually was, it would be pathological to go back in time and not take his advice.
*You might think this is just a semantic argument — what we mean by “good advice” depends on whether it’s applied generally or specifically, and that is the distinction, but as we will see below the conflation of the general with the specific is itself the heart of the problem.
I use blackjack as the example because, assuming an infinite shoe (no card counting), the values of each card in each situation are well known. Coins, dice and cards are where simple probabilistic thinking functions best, where the assumptions we make are reliable and fixed. But even deep in this territory, it’s simple to illustrate how a focus on process is not a magic cloak by which one can hide from real-life results. If you lose the money, you lose the money. The casino does not allow you to plead “but my process was good!” to get a refund.
Of course, this one-off example aside, in the territory of cards and coins, the process of choosing the highest winning probability action, applied over time, will yield better results than listening to the drunk. But when we move out of this territory into complex systems, then the “but my process was good” plea is no longer simply falling on the deaf ears of the pit boss when reality goes against you, but those of the warden at the insane asylum.
. . .
I’ve encountered a similar strain of thinking on NFL analytics Twitter too. Sports analytics involve probabilistic thinking, but as leagues and teams are complex systems, it’s hardly as simple as coins and cards.
When Giants GM Dave Gettleman passed on Sam Darnold — the highest rated quarterback remaining on many boards — at pick 2 in the 2018 draft, and took generational running back prospect Saquon Barkley instead, the process people were aghast. How could you take a low-value position like running back over the highest value one when your team needs a quarterback? You always take the quarterback!
As a Giants fan, I was happy with the pick. My view was the same as Gettleman’s in this instance when he said something to the effect of, “If you have to talk yourself into a particular quarterback at that pick, pass.” His point was that he’d have taken a quarterback he liked there obviously, but he didn’t like the remaining ones, so he went elsewhere.
Now people were especially aghast that he took a running back rather than, say, Bradley Chubb, an edge rusher, or Denzel Ward, a cornerback (two typically higher-value positions than running back), and those would have been good picks too, as both players have been productive in the NFL**.
** The optimal pick would have been quarterback Josh Allen who went at No. 7, but was considered a reach by many in the analytics community because he was a raw prospect with physical skills, but lacked sufficient college production.
But how did the process choice, Sam Darnold, do? He was a disaster for the Jets for three seasons, destroying any hope they might have had at competing, though they salvaged some value by dealing him to the Panthers for picks. He’s also been bad on the Panthers, to date. So did Gettleman make a good choice, drafting one of the consensus top running back prospects of all time over the not-especially-impressive top-two quarterback prospect in that particular class? By any sane account, he did. He landed the better player, and while Chubb and Ward would have been fine, so was Barkley.
But despite Darnold’s failure as a prospect, the process people won’t take the L! They insist even though Barkley is a good running back, and Darnold a terrible quarterback, their process in preferring Darnold was good! But I don’t care about your process! The quality of your process will be adjudicated by your long-term results, the way a religious person’s moral character will be judged by his God. I have no interest in your attestations and signaling, no inclination to evaluate your lifelong body of work. We were simply debating this one particular pick, the results of which are already obvious to anyone not steeped in this bizarre worldview wherein you can claim vindication over reality a priori via believing you’ve found the right heuristic!
There are two claims they are making implicitly here: (1) That if quarterbacks are more valuable generally, you should always take the best available quarterback over the best available running back, irrespective of the particular human beings at issue; and (2) That no matter what the results were, you would be correct to have taken the quarterback.
Claim (1) is the notion that if something is generally — or on average — the case, the specifics need not be taken into account, i.e., they see players as having static values, based on their positions, like cards in blackjack. Claim (2) is the idea that only the heuristic should be evaluated, never the outcome. Taken together they are saying, always do what is generically most probable, ignore details and specifics and have zero regard for how any particular decision pans out in reality. In my view, this is pathological, but that’s okay, it’s only an argument about football draft analytics!
. . .
Our public health response to the covid pandemic increasingly appears to be a disaster. From lockdowns, to school closures, to vaccine mandates to discouraging people from getting early treatment, it has cost many lives, harmed children, wreaked havoc on the economy and done serious damage to our trust in institutions. Many of those who were skeptical of the narrative — for which they were slandered, fired from jobs and deplatformed — have proven prescient and wise.
While some holdouts pretend the covid measures were largely successful, most people — even those once in favor of the measures — now acknowledge the reality: the authors of The Great Barrington Declaration, which advocated for protecting only the vulnerable and not disrupting all of society, were correct. But I am starting to see the same demented logic that declared Darnold a better pick than Barkley emerge even in regard to our covid response.
Here’s a clip of Sam Harris insisting that even though Covid wasn’t as deadly as he had thought, he would have been right if it had been more deadly. (Jimmy Dore does a great job of highlighting the pathology):
Harris is arguing that even if the outcome of our response has been catastrophic, that’s just results-oriented thinking! He still believes he was staying on 17, so to speak, that he made the right call probabilistically and that he was simply unlucky. (Never mind that locking down healthy people was never part of our traditional pandemic response playbook, and coerced medicine is in plain violation of the Nuremberg Code, i.e., he wasn’t advocating for blackjack by the book, and never mind that in highly complex systems no one can calculate the true odds the way you can for a casino game or even an NFL draft pick.)
But Sam Harris was far from the only one. Here’s Dilbert creator Scott Adams explaining why even though he made a mistake in taking the mRNA shot, his process was not to blame:
He’s not defending his process as strongly as Harris, and he appeared to walk this back in this video, but the sentiment is largely the same: There was nothing wrong with my process, I just got unlucky, and others got lucky.
This is not an NFL analytics argument anymore — it’s a worldview shared by policymakers and powerful actors whose decisions have major consequences for human beings around the world. They seem to believe that as long as they come up with the correct heuristic (according to their own estimations and modeled after simplistic games of chance where it can be known in advance what heuristic was indeed better), whatever they do is justified. If reality doesn’t go the way they had expected, they still believe they acted correctly because when they simulated 100 pandemics their approach was optimal in 57 of them!
But the notion that someone with the correct heuristics, i.e., the proper model or framework for viewing the world as a game of dice, is a priori infallible is not only absurd, it’s perilous. Misplaced confidence, unwarranted certainty and being surrounded by peers who believe as you do that no matter what happens in reality, you can do no wrong incentivizes catastrophic risks no sane person would take if he had to bear the consequences of his misjudgment.
What started as a life hack to achieve better long term results — “focus on process, not outcomes” — has now become a religion of sorts, but not the kind that leads to tolerance, peace and reverence for creation, but the opposite. It’s a hubristic cult that takes as its God naive linear thinking, over-simplified probabilistic modeling and square-peg-round-holes it into complex domains without conscience.
If only this style of thinking were confined to a few aberrant psychopaths, we might laugh and hope none of them become the next David Koresh. Unfortunately this mode of understanding and acting on the world is the predominant one, and we see its pathology play out at scale virtually everywhere we look.
-
@ 3bf0c63f:aefa459d
2024-01-14 14:52:16
Drivechain
Understanding Drivechain requires a shift from the paradigm most bitcoiners are used to. It is not about "trustlessness" or "mathematical certainty", but game theory and incentives. (Well, Bitcoin in general is also that, but people prefer to ignore it and focus on some illusion of trustlessness provided by mathematics.)
Here we will describe the basic mechanism (simple) and incentives (complex) of "hashrate escrow" and how it enables a 2-way peg between the mainchain (Bitcoin) and various sidechains.
The full concept of "Drivechain" also involves blind merged mining (i.e., the sidechains mine themselves by publishing their block hashes to the mainchain without the miners having to run the sidechain software), but this is much easier to understand and can be accomplished either by the BIP-301 mechanism or by the Spacechains mechanism.
How does hashrate escrow work from the point of view of Bitcoin?
A new address type is created. Anything that goes in that is locked and can only be spent if all miners agree on the Withdrawal Transaction (`WT^`) that will spend it for 6 months. There is one of these special addresses for each sidechain.

To gather miners' agreement, `bitcoind` keeps track of the "score" of all transactions that could possibly spend from that address. On every block mined, for each sidechain, the miner can use a portion of their coinbase to either increase the score of one `WT^` by 1 while decreasing the score of all others by 1; or they can decrease the score of all `WT^`s by 1; or they can do nothing.

Once a transaction has gotten a score high enough, it is published and funds are effectively transferred from the sidechain to the withdrawing users.
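A toy sketch of that tallying loop for a single sidechain follows; the threshold, timeout, and vote encoding here are illustrative assumptions, not the actual consensus parameters:

```python
THRESHOLD = 13_150  # score a WT^ must reach before payout (assumed value)
MAX_AGE = 26_300    # roughly 6 months of blocks before a WT^ is dropped

scores: dict[str, int] = {}  # candidate WT^ txid -> current score
ages: dict[str, int] = {}    # candidate WT^ txid -> age in blocks

def on_block(vote: str | None) -> str | None:
    """Tally one block. `vote` is a WT^ txid to upvote, the string
    "abstain" to do nothing, or None to downvote every candidate."""
    approved = None
    for txid in list(scores):
        ages[txid] += 1
        if vote == txid:
            scores[txid] += 1          # upvote this WT^...
        elif vote != "abstain":
            scores[txid] -= 1          # ...which downvotes all the others
        if scores[txid] >= THRESHOLD:
            approved = txid            # enough miner support: publish it
        elif ages[txid] >= MAX_AGE:
            del scores[txid], ages[txid]  # timed out: discarded
    return approved
```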
If a timeout of 6 months passes and the score doesn't meet the threshold, that `WT^` is discarded.

What does the above procedure mean?
It means that people can transfer coins from the mainchain to a sidechain by depositing to the special address. Then they can withdraw from the sidechain by making a special withdraw transaction in the sidechain.
The special transaction somehow freezes funds in the sidechain while a transaction that aggregates all withdrawals into a single mainchain `WT^` is built, which is then submitted to the mainchain miners so they can start voting on it and finally, after some months, it is published.

Now the crucial part: the validity of the `WT^` is not verified by the Bitcoin mainchain rules, i.e., if Bob has requested a withdraw from the sidechain to his mainchain address, but someone publishes a wrong `WT^` that instead takes Bob's funds and sends them to Alice's mainchain address, there is no way the mainchain will know that. What determines the "validity" of the `WT^` is the miner vote score and only that. It is the job of miners to vote correctly -- and for that they may want to run the sidechain node in SPV mode so they can attest to the existence of a reference to the `WT^` transaction in the sidechain blockchain (which then ensures it is ok), or do these checks by some other means.

What? 6 months to get my money back?

Yes. But no: in practice anyone who wants their money back will be able to use an atomic swap, submarine swap, or other similar service to transfer funds from the sidechain to the mainchain and vice-versa. The long delayed-withdrawal costs would be incurred by a few liquidity providers that would gain some small profit from it.
Why bother with this at all?
Drivechains solve many different problems:
It enables experimentation and new use cases for Bitcoin
Issued assets, fully private transactions, stateful blockchain contracts, turing-completeness, decentralized games, some "DeFi" aspects, prediction markets, futarchy, decentralized and yet meaningful human-readable names, big blocks with a ton of normal transactions on them, a chain optimized only for Lightning-style networks to be built on top of it.
These are some ideas that may have merit to them, but were never actually tried because they couldn't be tried with real Bitcoin or interfacing with real bitcoins. They were either relegated to the shitcoin territory or to custodial solutions like Liquid or RSK that may have failed to gain network effect because of that.
It solves conflicts and infighting
Some people want fully private transactions in a UTXO model, others want "accounts" they can tie to their name and build reputation on top; some people want simple multisig solutions, others want complex code that reads a ton of variables; some people want to put all the transactions on a global chain in batches every 10 minutes, others want off-chain instant transactions backed by funds previously locked in channels; some want to spend, others want to just hold; some want to use blockchain technology to solve all the problems in the world, others just want to solve money.
With Drivechain-based sidechains all these groups can be happy simultaneously and don't fight. Meanwhile they will all be using the same money and contributing to each other's ecosystem even unwillingly, it's also easy and free for them to change their group affiliation later, which reduces cognitive dissonance.
It solves "scaling"
Multiple chains like the ones described above would certainly do a lot to accommodate many more transactions than the current Bitcoin chain can. One could have special Lightning Network chains, but even just big block chains or big-block-mimblewimble chains or whatnot could probably do a good job. Or even something less cool like 200 independent chains just like Bitcoin is today, no extra features (and you can call it "sharding"), just that would already multiply the current total capacity by 200.
Use your imagination.
It solves the blockchain security budget issue
The calculation is simple: you imagine what security budget is reasonable for each block in a world without block subsidy and divide that by the amount of bytes you can fit in a single block: that is the price to be paid in satoshis per byte. In any reasonable estimate, the fee necessary for every Bitcoin transaction rises to very large amounts, such that not only does any day-to-day transaction have insanely prohibitive costs, but Lightning channel opens and closes also become impracticable.
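To make the arithmetic concrete, here is a rough back-of-the-envelope sketch; every number in it is an assumption chosen for illustration, not a figure from the article:

```python
# Back-of-the-envelope security-budget math with made-up numbers.
desired_fees_per_block_sats = 6.25 * 100_000_000  # want ~6.25 BTC in fees
block_space_bytes = 1_000_000                     # ~1 MB of block space

sats_per_byte = desired_fees_per_block_sats / block_space_bytes
print(sats_per_byte)                     # 625 sats/byte

typical_tx_bytes = 250
print(sats_per_byte * typical_tx_bytes)  # 156,250 sats for an ordinary tx
```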
So without a solution like Drivechain you'll be left with only one alternative: pushing Bitcoin usage to trusted services like Liquid and RSK or custodial Lightning wallets. With Drivechain, though, there could be thousands of transactions happening in sidechains, all being aggregated into a sidechain block that would then pay a very large fee to be published (via blind merged mining) to the mainchain. Bitcoin security guaranteed.
It keeps Bitcoin decentralized
Once we have sidechains to accommodate the normal transactions, the mainchain functionality can be reduced to be only a "hub" for the sidechains' comings and goings, and then the maximum block size for the mainchain can be reduced to, say, 100kb, which would make running a full node very very easy.
Can miners steal?
Yes. If a group of coordinated miners are able to secure the majority of the hashpower and keep their coordination for 6 months, they can publish a `WT^` that takes the money from the sidechains and pays it to themselves.

Will miners steal?
No, because the incentives are such that they won't.
Although it may look at first that stealing is an obvious strategy for miners as it is free money, there are many costs involved:
- The cost of ceasing blind-merged mining returns -- as stealing will kill a sidechain, all the fees from it that miners would be expected to earn for the next years are gone;
- The cost of Bitcoin price going down: If a steal is successful that will mean Drivechains are not safe, therefore Bitcoin is less useful, and miner credibility will also be hurt, which are likely to cause the Bitcoin price to go down, which in turn may kill the miners' businesses and savings;
- The cost of coordination -- assuming miners are just normal businesses, they just want to do their work and get paid, but stealing from a Drivechain will require coordinating with other miners to conduct an immoral act in a scheme that has many pitfalls and is likely to fall apart over the months it takes;
- The cost of miners leaving your mining pool: when we talked about "miners" above we were actually talking about mining pool operators, so they must also consider the risk of miners migrating from their pool to others as they begin the process of stealing;
- The cost of community goodwill -- when participating in a steal operation, a miner will suffer a ton of backlash from the community. Even if the attempt ultimately fails, the fact that it was attempted will feed growing concerns about exaggerated miner power over the Bitcoin ecosystem, which may end up causing the community to agree on a hard-fork to change the mining algorithm in the future, or to do something to increase the number of entities participating in mining (such as the development of new, cheaper ASICs), either of which has a chance of decreasing the profits of current miners.
Another point to take into consideration is that one may be inclined to think a newly-created sidechain, or one with relatively low usage, may be more easily stolen from, since the blind merged mining returns from it (point 1 above) are going to be small -- but a sidechain with small usage also has less money to be stolen from, and since the other costs besides point 1 are less elastic, in the end it will not be worth stealing from these either.
All of the above considerations are valid only if miners are stealing from good sidechains. If there is a sidechain that is doing things wrong, scamming people, not being used at all, or is full of bugs, for example, it will be perceived as a bad sidechain, and then miners can and will safely steal from it and kill it, which will be perceived as a good thing by everybody.
What do we do if miners steal?
Paul Sztorc has suggested in the past that a user-activated soft-fork could prevent miners from stealing, i.e., most Bitcoin users and nodes would adopt a rule to invalidate the inclusion of a faulty `WT^` and thus cause any miner that includes it in a block to be relegated to their own Bitcoin fork that other nodes won't accept.

This suggestion has made people think Drivechain is a sidechain solution backed by user-activated soft-forks for safety, which is very far from the truth. Drivechains must not and will not rely on this kind of soft-fork, although they are possible, as the coordination costs are too high and no one should ever expect these things to happen.
If, even with all the incentives against them (see above), miners do still steal from a good sidechain, that will mean the failure of the Drivechain experiment. It will very likely also mean the failure of the Bitcoin experiment, as it will have been proven that miners can coordinate to act maliciously over a prolonged period of time regardless of economic and social incentives -- meaning they are probably in it just to attack Bitcoin, backed by nation-states or something else, and therefore no Bitcoin transaction on the mainchain can be expected to be safe ever again.
Why use this and not a full-blown trustless and open sidechain technology?
Because it is impossible.
If you ever hear someone saying "just use a sidechain", "do this in a sidechain" or anything like that, be aware that they are either talking about "federated" sidechains (i.e., funds kept in custody by a group of entities), or they are talking about Drivechain, or they are deluded and think it is possible to do sidechains in some other manner.
No, I mean a trustless 2-way peg with correctness of the withdrawals verified by the Bitcoin protocol!
That is not possible unless Bitcoin verifies all transactions that happen in all the sidechains, which would be akin to drastically increasing the block size and expanding the Bitcoin rules in tons of ways -- i.e., a terrible idea that no one wants.
What about the Blockstream sidechains whitepaper?
Yes, that was one way to do it. The Drivechain hashrate escrow is a conceptually simpler way to achieve the same thing, with improved incentives, less junk in the chain, and more safety.
Isn't the hashrate escrow a very complex soft-fork?
Yes, but it is much simpler than SegWit. And, unlike SegWit, it doesn't force anything on users, i.e., it isn't a mandatory blocksize increase.
Why should we expect miners to care enough to participate in the voting mechanism?
Because it's in their own self-interest to do it, and it costs very little. Today over half of the miners merge-mine RSK. That's not blind merged mining -- it's a very convoluted process that requires them to run an RSK full node. For Drivechain sidechains, an SPV node would be enough, or maybe just getting data from a block explorer API, which is much, much simpler.
What if I still don't like Drivechain even after reading this?
That is the entire point! You don't have to like it or use it, as long as you're fine with other people using it. The hashrate escrow special addresses will not impact you at all, the validation cost is minimal, and you get the benefit of people who want to use Drivechain migrating to their own sidechains and freeing up space for you in the mainchain. See also the point above about infighting.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Violence is a form of communication

Violence is a form of communication: a serial killer, a father who beats his son, a brawl between rival fan clubs, a torture session, a war, a crime of passion, a bar fight. In all of these one can see a message that is trying to be transmitted, that was not understood by the other side, that could not be expressed, and, when the sender of the message felt he could not be fully understood in words, he resorted to this other form of communication.

When an insult in a bar escalates into a fight, for example, what we have is clearly an attempt at an even greater insult by the one who started it: the fight would not have happened if he had managed to express it in words so clear that the whole audience of drunks understood them, which would be beyond the limits of language in that case -- the right-hand punch was more efficient. It could also be an argumentative defense: "I am not a coward like you're saying" -- but the bar would not believe that bare sentence; the communication would not have achieved the desired success.

The explanation for the decline of violence as civilization progressed lies in the improved efficiency of human communication: writing, the refinement of linguistic expression, the expanded reach of the spoken word through radio, television and the internet.

If that efficiency decreases -- because there is no longer agreement about the meaning of words, because people don't care whether what they write is good or not, or because they are incapable of understanding anything -- violence should increase proportionally.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Problems with Russell Kirk

The central idea of Russell Kirk's "politics of prudence[^1]" seems very correct to me, although it was formulated worse in his enormous book than in a short sentence by the joanadarquista Lucas Souza: "conservatism is important because there are a lot of people with wrong ideas out there, and we may not know how to tell them apart."

However, there are some problems that need to be clarified, or better explained, and that prevent me from seeing his arguments as a final refutation of my already so humble (though fierce) anarchism. They are:

I. I sense something wrong, I'm not sure exactly where, between the claim that every ideology is bad, or that "all ideologies cause confusion[^2]", and the conservative proposal to "conserve the world of order that we have inherited, albeit in an imperfect state, from our ancestors[^3]". Now, without needing to resort to examples like the English Conservative Party -- which kept English politics wherever it stood, alternating in government with the Labour Party, which pulled it ever a bit further to the left --, embedded in that sentence there is, perhaps, the very idea, at once clearly and fiercely fought by conservatives themselves, that the history of humanity is a history of linear progress toward a better situation.

Does wanting to conserve the world of order we have inherited mean also conserving the various errors that may have been committed by our most recent ancestors, conserving them all the same, while accusing each and every attempt to propose solutions to those errors of being ideology? Or is conserving the world of order a matter of choosing some specific period taken to be the peak of human history and trying to restore it in our own time? Wouldn't that be ideology?

Or, yet again, is conserving the world of order a matter of selecting, from various periods of the past, the pieces the conservative considers excellent in each society, assembling from them a blend of an ideal society based on the past, and then trying to implement it? Who could tell which are the right parts?

II. On the question of what keeps civil society cohesive, Russell Kirk, opposing it to the libertarian position that the nexus of society is self-interest, declares the conservative position to be that "society is a community of souls, joining the dead, the living, and those yet unborn, and one that is harmonized by what Aristotle called friendship and Christians call charity, or love of one's neighbor".

This is a very correct position, but it seems to me to contradict the defense of the State that he makes on the same page and the next. What seems wrong to me is that society cannot be, at the same time, a "community based on love of one's neighbor" and a community that "requires not only that the passions of individuals be subjugated, but that, even in the people and in the social body, as well as in individuals, the inclinations of men must often be frustrated, the will controlled and the passions subjugated" and, worse, that "this can only be done by a power outside themselves".

From this we can conclude that, just as Kirk defines the libertarian position as holding that self-interest is what keeps civil society cohesive, the conservative position would then be that this cohesion comes only from the State, and not from any bond between the living and the dead, or from love of one's neighbor. Since, without the State, he says, quoting Thomas Hobbes, the condition of man is "solitary, poor, nasty, brutish, and short"?

[^1]: this is the title of the book and also another name he gives to conservatism itself (p. 99).
[^2]: p. 101
[^3]: p. 102
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28A command line utility to create and manage personal graphs, then write them to dot and make images with graphviz.
It manages a bunch of YAML files, one for each entity in the graph. Each file lists the incoming and outgoing links it has (it could have listed only the outgoing ones, now that I'm thinking about it).
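As a rough illustration of the layout described above -- the node names, link types, and field names here are all hypothetical, since the post doesn't show an actual file:

```yaml
# alice.yaml -- one file per entity (all names and fields invented for illustration)
outgoing:
  - type: works-at
    to: acme
  - type: knows
    to: bob
incoming:
  - type: manages
    from: carol
```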
Each run of the tool lets you select from existing nodes or add new ones, then generate a single link type from one to one, one to many, many to one or many to many -- then it updates the YAML files accordingly.
It also includes a command that generates graphs with graphviz, and it can accept a template file that lets you customize the `dot` that is generated and thus the graphviz graph.

rel
-
@ 58f5a230:304d59f7
2024-01-20 18:03:12I plan to write this series as separate installments, following the lessons from Chaincode Labs' Bitcoin FOSS Program and sharing how I solved the exercises in each week's test.

This first week I got 8 problems in total, all of which are answered with Bash commands built around `bitcoin-cli`, as a way to understand the basics of Bitcoin Core: how the ledger is recorded on the blockchain, how to view and inspect the data in a block, the basic structure of data inside a block, and introductory transactions. This article covers the material in chapters 1-3 of Mastering Bitcoin, so if you want to study further and understand more, I recommend reading along.

Question 1: What is the hash of block 654,321?
This one is really easy. We use the `bitcoin-cli` program from the bitcoin-core we already installed, connected to the RPC server of our full node, together with the `getblockhash` command. We can see how it is used by typing `help` before the command, which gives us a description along with usage examples:

```sh
$ bitcoin-cli help getblockhash
getblockhash height

Returns hash of block in best-block-chain at height provided.

Arguments:
1. height    (numeric, required) The height index

Result:
"hex"    (string) The block hash

Examples:

bitcoin-cli getblockhash 1000
curl --user myusername --data-binary '{"jsonrpc": "1.0", "id": "curltest", "method": "getblockhash", "params": [1000]}' -H 'content-type: text/plain;' http://127.0.0.1:8332/
```
For this problem we only need a single command. Let's see:

```sh
$ bitcoin-cli getblockhash 654321
000000000000000000058452bbe379ad4364fe8fda68c45e299979b492858095
```
I called `bitcoin-cli` with the command `getblockhash 654321` and got the answer `000000000000000000058452bbe379ad4364fe8fda68c45e299979b492858095`, which is the hash of block 654,321. We'll be using data like this again in later problems. On to the next one.

Question 2: Verify whether this message was validly signed by this address (true / false)

(true / false) Verify the signature by this address over this message:
address: `1E9YwDtYf9R29ekNAfbV7MvB4LNv7v3fGa`
message: `1E9YwDtYf9R29ekNAfbV7MvB4LNv7v3fGa`
signature: `HCsBcgB+Wcm8kOGMH8IpNeg0H4gjCrlqwDf/GlSXphZGBYxm0QkKEPhh9DTJRp2IDNUhVr0FhP9qCqo2W0recNM=`
This problem may look a bit confusing, so I went searching through the Bitcoin Core docs and found the `verifymessage` command. Let's take a look:

```sh
$ bitcoin-cli help verifymessage
verifymessage "address" "signature" "message"

Verify a signed message.

Arguments:
1. address      (string, required) The bitcoin address to use for the signature.
2. signature    (string, required) The signature provided by the signer in base 64 encoding (see signmessage).
3. message      (string, required) The message that was signed.

Result:
true|false    (boolean) If the signature is verified or not.

Examples:

Unlock the wallet for 30 seconds
bitcoin-cli walletpassphrase "mypassphrase" 30

Create the signature
bitcoin-cli signmessage "1D1ZrZNe3JUo7ZycKEYQQiQAWd9y54F4XX" "my message"

Verify the signature
bitcoin-cli verifymessage "1D1ZrZNe3JUo7ZycKEYQQiQAWd9y54F4XX" "signature" "my message"

As a JSON-RPC call
curl --user myusername --data-binary '{"jsonrpc": "1.0", "id": "curltest", "method": "verifymessage", "params": ["1D1ZrZNe3JUo7ZycKEYQQiQAWd9y54F4XX", "signature", "my message"]}' -H 'content-type: text/plain;' http://127.0.0.1:8332/
```
Notice that this command takes exactly the three parameters the problem gives us. Let's try it:

```sh
address="1E9YwDtYf9R29ekNAfbV7MvB4LNv7v3fGa"
message="1E9YwDtYf9R29ekNAfbV7MvB4LNv7v3fGa"
signature="HCsBcgB+Wcm8kOGMH8IpNeg0H4gjCrlqwDf/GlSXphZGBYxm0QkKEPhh9DTJRp2IDNUhVr0FhP9qCqo2W0recNM="
```
We start by declaring the variables, as any Linux user would, and then send the command:

```sh
$ bitcoin-cli verifymessage $address $signature $message
false
```
false... I was still confused at first, but the answer to this one really is false. Maybe the person who wrote the problem didn't notice that the message is the same as the address, or maybe it was intentional. Whatever -- on to the next one.

Question 3: How many transaction outputs are there in total in block 123,456?
This one goes quickly, because we don't need to loop over the transactions in the block to count the outputs: we can use the `getblockstats` command directly, piping through `jq` to pretty-print the JSON so it's easier to read:

```sh
$ bitcoin-cli getblockstats 123456 | jq .
{
  "avgfee": 416666,
  "avgfeerate": 1261,
  "avgtxsize": 330,
  "blockhash": "0000000000002917ed80650c6174aac8dfc46f5fe36480aaef682ff6cd83c3ca",
  "feerate_percentiles": [
    0,
    0,
    0,
    3861,
    3891
  ],
  "height": 123456,
  "ins": 17,
  "maxfee": 1000000,
  "maxfeerate": 3891,
  "maxtxsize": 618,
  "medianfee": 0,
  "mediantime": 1305197900,
  "mediantxsize": 258,
  "minfee": 0,
  "minfeerate": 0,
  "mintxsize": 257,
  "outs": 24,
  "subsidy": 5000000000,
  "swtotal_size": 0,
  "swtotal_weight": 0,
  "swtxs": 0,
  "time": 1305200806,
  "total_out": 16550889992,
  "total_size": 3964,
  "total_weight": 15856,
  "totalfee": 5000000,
  "txs": 13,
  "utxo_increase": 7,
  "utxo_increase_actual": 7,
  "utxo_size_inc": 567,
  "utxo_size_inc_actual": 567
}
```
These are the summary statistics for block 123,456, which contains 13 transactions and 24 outputs in total. We can have `jq` print only the field we want by typing the field's name after the dot (`.`), so this one can be answered right away:

```sh
$ bitcoin-cli getblockstats 123456 | jq .outs
24
```
Question 4: Derive the taproot address at index 100 from the following xpub

The extended public key (xpub) I was given is `xpub6Cx5tvq6nACSLJdra1A6WjqTo1SgeUZRFqsX5ysEtVBMwhCCRa4kfgFqaT2o1kwL3esB1PsYr3CUdfRZYfLHJunNWUABKftK2NjHUtzDms2`. Alright, so what do I do with it? Eventually I ran into this command: `deriveaddresses`. But how is it used?

```sh
$ bitcoin-cli help deriveaddresses
deriveaddresses "descriptor" ( range )

Derives one or more addresses corresponding to an output descriptor.
Examples of output descriptors are:
    pkh(<pubkey>)                                    P2PKH outputs for the given pubkey
    wpkh(<pubkey>)                                   Native segwit P2PKH outputs for the given pubkey
    sh(multi(<n>,<pubkey>,<pubkey>,...))             P2SH-multisig outputs for the given threshold and pubkeys
    raw(<hex script>)                                Outputs whose scriptPubKey equals the specified hex scripts
    tr(<pubkey>,multi_a(<n>,<pubkey>,<pubkey>,...))  P2TR-multisig outputs for the given threshold and pubkeys

In the above, <pubkey> either refers to a fixed public key in hexadecimal notation, or to an xpub/xprv optionally followed by one
or more path elements separated by "/", where "h" represents a hardened child key.
For more information on output descriptors, see the documentation in the doc/descriptors.md file.

Arguments:
1. descriptor    (string, required) The descriptor.
2. range         (numeric or array, optional) If a ranged descriptor is used, this specifies the end or the range (in [begin,end] notation) to derive.

Result:
[           (json array)
  "str",    (string) the derived addresses
  ...
]

Examples:
First three native segwit receive addresses
bitcoin-cli deriveaddresses "wpkh([d34db33f/84h/0h/0h]xpub6DJ2dNUysrn5Vt36jH2KLBT2i1auw1tTSSomg8PhqNiUtx8QX2SvC9nrHu81fT41fvDUnhMjEzQgXnQjKEu3oaqMSzhSrHMxyyoEAmUHQbY/0/*)#cjjspncu" "[0,2]"
curl --user myusername --data-binary '{"jsonrpc": "1.0", "id": "curltest", "method": "deriveaddresses", "params": ["wpkh([d34db33f/84h/0h/0h]xpub6DJ2dNUysrn5Vt36jH2KLBT2i1auw1tTSSomg8PhqNiUtx8QX2SvC9nrHu81fT41fvDUnhMjEzQgXnQjKEu3oaqMSzhSrHMxyyoEAmUHQbY/0/*)#cjjspncu", "[0,2]"]}' -H 'content-type: text/plain;' http://127.0.0.1:8332/
```
Whew, even more confusing. Oh well, let's just try to follow the P2TR example:

```sh
$ bitcoin-cli deriveaddresses "tr(xpub6Cx5tvq6nACSLJdra1A6WjqTo1SgeUZRFqsX5ysEtVBMwhCCRa4kfgFqaT2o1kwL3esB1PsYr3CUdfRZYfLHJunNWUABKftK2NjHUtzDms2)"
error code: -5
error message: Missing checksum
```
Uh... oh. Going back to the example and reading the docs more carefully, it turns out deriveaddresses expects the descriptor in a specific format, e.g. `wpkh([d34db33f/84h/0h/0h]xpub6DJ2dNUysrn5Vt36jH2KLBT2i1auw1tTSSomg8PhqNiUtx8QX2SvC9nrHu81fT41fvDUnhMjEzQgXnQjKEu3oaqMSzhSrHMxyyoEAmUHQbY/0/*)#cjjspncu`, where:

- wpkh() is the script type, of which there are several for different purposes; multisig, for example, uses a different one.
- [d34db33f/84h/0h/0h] is the fingerprint of the master pubkey (before the xpub is computed) together with the derivation path. The problem doesn't give us this part, and after some study it turns out it isn't needed for basic, simple address derivation.
- xpub6DJ2dNUysrn5Vt36jH2KLBT2i1auw1tTSSomg8PhqNiUtx8QX2SvC9nrHu81fT41fvDUnhMjEzQgXnQjKEu3oaqMSzhSrHMxyyoEAmUHQbY is the extended public key, computed from the master pubkey, which is itself computed from our private key or seed.
- /0/* is the path for the range of addresses to derive; read it as /start/end, e.g. /0/99 means deriving addresses from index 0 through index 99, while * means any number of addresses can be derived.
- #cjjspncu is the checksum of this wallet descriptor; the `getdescriptorinfo` command can be used to look it up.
Alright, let's try again. I'll derive only a single address, at position /100:

```sh
$ bitcoin-cli getdescriptorinfo "tr(xpub6Cx5tvq6nACSLJdra1A6WjqTo1SgeUZRFqsX5ysEtVBMwhCCRa4kfgFqaT2o1kwL3esB1PsYr3CUdfRZYfLHJunNWUABKftK2NjHUtzDms2/100)"
{
  "checksum": "5p2mg7zx",
  "descriptor": "tr(xpub6Cx5tvq6nACSLJdra1A6WjqTo1SgeUZRFqsX5ysEtVBMwhCCRa4kfgFqaT2o1kwL3esB1PsYr3CUdfRZYfLHJunNWUABKftK2NjHUtzDms2/100)#5p2mg7zx",
  "hasprivatekeys": false,
  "isrange": false,
  "issolvable": true
}
```
It worked! Now let's take the resulting checksum and derive the address:

```sh
$ bitcoin-cli deriveaddresses "tr(xpub6Cx5tvq6nACSLJdra1A6WjqTo1SgeUZRFqsX5ysEtVBMwhCCRa4kfgFqaT2o1kwL3esB1PsYr3CUdfRZYfLHJunNWUABKftK2NjHUtzDms2/100)#5p2mg7zx"
[
  "bc1p3yrtpvv6czx63h2sxwmeep8q98h94w4288fc4cvrkqephalydfgszgacf9"
]
```
After that I used `jq -r .[0]` to pull the value out of the JSON array and submitted the answer. Passed!

Question 5: Create a P2SH multisig address from the four public keys in the inputs of this transaction

`37d966a263350fe747f1c606b159987545844a493dd38d84b070027a895c4517`
Let's see what this transaction looks like:

```sh
$ bitcoin-cli getrawtransaction "37d966a263350fe747f1c606b159987545844a493dd38d84b070027a895c4517" 1
{
  "blockhash": "000000000000000000024a848a9451143278f60e4c3e73003da60c7b0ef74b62",
  "blocktime": 1701158269,
  "confirmations": 7751,
  "hash": "e28a0885b6f413e24a89e9c2bac74d4c6f335e17545f0b860da9146caf7ffe39",
  "hex": "02000000000104b5f641e80e9065f09b12f3e373072518885d1bd1ddd9298e5b9840de515edac90000000000feffffffd54f8986afbb6ff18572acaee58fa3ad64446dd770ffe9b6a04f798becdafb440000000000feffffff475e3062b1c3ee87544c29d723866da2b65a1b1a42e6ea4a4fd48d79f83c26c50000000000feffffffa56387352ecc93dfd37648e6ebd4d9effb37ffefcad02eb7b85860c9097cf8090000000000feffffff02fa440f00000000001600148070ec3954ecdcbfc210d0117e8d28a19eb8467270947d0000000000160014b5fe46c647353ec9c06374655502094095f0289c0247304402200dd758801b40393f68dad8ab57558803efcd2b681ee31eb44fb3cfa9666d2bf90220254d34fa4990e23652bf669053c5e16fd2fbb816bed2eeb44c1f1e6e54143e3e012102bbb4ba3f39b5f3258f0014d5e4eab5a6990009e3e1dba6e8eaff10b3832394f70247304402201694761a5749b6a84f71459c04a44cf9d34a36ae8c9044c3af7a3a5514ef2e64022058f61feb92d6d54b71fdea47e7dfcd20f6a5c12e2fbcb15bc44fe95c73f2e808012103aaf17b1a7b4108f7e5bc4f7d59c20f7fb1a72dbc74a9a3d6d1f8488df159c76002473044022014b65c60f65e62d9dac893e404c8de2a007c7c6b74dbac18e454d8374e159759022012453f69112adadf9495fd3fe288aa5ed9e3d836340da06fa1e82c8e09adef57012103a6d919c76d9117c23570a767450013edf31cf6be7d3b5a881c06a9aa1f2c24ce0247304402203d3b02390803c1d673fa49bd64d4a26fbeb29e3fc152af8f844d776c9409e41302206903a011a04e00a7f4ec606da4320226d2d393f565702cc58cfcef6dca67f84c01210383d12258e3e294a6d7754336f6b4baef992ec4b91694d3460bcb022b11da8cd2817e0c00",
  "locktime": 818817,
  "size": 666,
  "time": 1701158269,
  "txid": "37d966a263350fe747f1c606b159987545844a493dd38d84b070027a895c4517",
  "version": 2,
  "vin": [
    {
      "scriptSig": { "asm": "", "hex": "" },
      "sequence": 4294967294,
      "txid": "c9da5e51de40985b8e29d9ddd11b5d8818250773e3f3129bf065900ee841f6b5",
      "txinwitness": [
        "304402200dd758801b40393f68dad8ab57558803efcd2b681ee31eb44fb3cfa9666d2bf90220254d34fa4990e23652bf669053c5e16fd2fbb816bed2eeb44c1f1e6e54143e3e01",
        "02bbb4ba3f39b5f3258f0014d5e4eab5a6990009e3e1dba6e8eaff10b3832394f7"
      ],
      "vout": 0
    },
    {
      "scriptSig": { "asm": "", "hex": "" },
      "sequence": 4294967294,
      "txid": "44fbdaec8b794fa0b6e9ff70d76d4464ada38fe5aeac7285f16fbbaf86894fd5",
      "txinwitness": [
        "304402201694761a5749b6a84f71459c04a44cf9d34a36ae8c9044c3af7a3a5514ef2e64022058f61feb92d6d54b71fdea47e7dfcd20f6a5c12e2fbcb15bc44fe95c73f2e80801",
        "03aaf17b1a7b4108f7e5bc4f7d59c20f7fb1a72dbc74a9a3d6d1f8488df159c760"
      ],
      "vout": 0
    },
    {
      "scriptSig": { "asm": "", "hex": "" },
      "sequence": 4294967294,
      "txid": "c5263cf8798dd44f4aeae6421a1b5ab6a26d8623d7294c5487eec3b162305e47",
      "txinwitness": [
        "3044022014b65c60f65e62d9dac893e404c8de2a007c7c6b74dbac18e454d8374e159759022012453f69112adadf9495fd3fe288aa5ed9e3d836340da06fa1e82c8e09adef5701",
        "03a6d919c76d9117c23570a767450013edf31cf6be7d3b5a881c06a9aa1f2c24ce"
      ],
      "vout": 0
    },
    {
      "scriptSig": { "asm": "", "hex": "" },
      "sequence": 4294967294,
      "txid": "09f87c09c96058b8b72ed0caefff37fbefd9d4ebe64876d3df93cc2e358763a5",
      "txinwitness": [
        "304402203d3b02390803c1d673fa49bd64d4a26fbeb29e3fc152af8f844d776c9409e41302206903a011a04e00a7f4ec606da4320226d2d393f565702cc58cfcef6dca67f84c01",
        "0383d12258e3e294a6d7754336f6b4baef992ec4b91694d3460bcb022b11da8cd2"
      ],
      "vout": 0
    }
  ],
  "vout": [
    {
      "n": 0,
      "scriptPubKey": {
        "address": "bc1qspcwcw25anwtlsss6qgharfg5x0ts3njad8uve",
        "asm": "0 8070ec3954ecdcbfc210d0117e8d28a19eb84672",
        "desc": "addr(bc1qspcwcw25anwtlsss6qgharfg5x0ts3njad8uve)#pzjnvw8p",
        "hex": "00148070ec3954ecdcbfc210d0117e8d28a19eb84672",
        "type": "witness_v0_keyhash"
      },
      "value": 0.01000698
    },
    {
      "n": 1,
      "scriptPubKey": {
        "address": "bc1qkhlyd3j8x5lvnsrrw3j42qsfgz2lq2yu3cs5lr",
        "asm": "0 b5fe46c647353ec9c06374655502094095f0289c",
        "desc": "addr(bc1qkhlyd3j8x5lvnsrrw3j42qsfgz2lq2yu3cs5lr)#hzcalwww",
        "hex": "0014b5fe46c647353ec9c06374655502094095f0289c",
        "type": "witness_v0_keyhash"
      },
      "value": 0.0823
    }
  ],
  "vsize": 344,
  "weight": 1374
}
```
To make basic sense of the scriptPubKey / redeemScript we first need to learn about the bip-141 witness program. In short, in a P2WPKH transaction the txinwitness contains the signature and then the public key, in that order. Let's use these pubkeys to create the multisig wallet:

```sh
txinfo=$(bitcoin-cli getrawtransaction "37d966a263350fe747f1c606b159987545844a493dd38d84b070027a895c4517" 1)
ad1=$(echo $txinfo | jq '.vin[0] | .txinwitness[1]')
ad2=$(echo $txinfo | jq '.vin[1] | .txinwitness[1]')
ad3=$(echo $txinfo | jq '.vin[2] | .txinwitness[1]')
ad4=$(echo $txinfo | jq '.vin[3] | .txinwitness[1]')
bitcoin-cli createmultisig 1 ["$ad1","$ad2","$ad3","$ad4"] | jq -r '.address'
```

`3GyWg1CCD3RDpbwCbuk9TTRQptkRfczDz8`

This one was easy. On to the next!
Question 6: Which transaction in block 257,343 spends the mining reward of block 256,128?

Which tx in block 257,343 spends the coinbase output of block 256,128?

For this one we need the coinbase output -- the reward for mining that block plus the fees -- which is always the first transaction of a block. Let's write a Bash script to find the coinbase txid:

```sh
blockhash=$(bitcoin-cli getblockhash 256128)
tx256=$(bitcoin-cli getblock $blockhash 2)
```
We use the `getblock` command, followed by the block's hash and a verbosity level, where level:

- 1 shows the block data, without transactions;
- 2 shows the transaction data, but without inputs;
- 3 shows all the data in that block.

```sh
coinbase_txid=$(echo $tx256 | jq -r '.tx[0].txid')
echo $coinbase_txid
```
Then we pick the txid of the first entry, which is our coinbase output: `611c5a0972d28e421a2308cb2a2adb8f369bb003b96eb04a3ec781bf295b74bc`. This is the txid we'll go looking for among the inputs of the transactions in block 257,343. Normally, looping over every transaction one by one would take quite a while, so let's use `jq`'s select() function instead:

```sh
blockhash=$(bitcoin-cli getblockhash 256128)
tx256=$(bitcoin-cli getblock $blockhash 2)
coinbase_txid=$(echo $tx256 | jq -r '.tx[0].txid')
blockhash=$(bitcoin-cli getblockhash 257343)
tx257=$(bitcoin-cli getblock $blockhash 3)

# select the transaction data
block257tx=$(echo $tx257 | jq -r '.tx')

# .tx gives us a JSON array with a huge number of transactions; we select the
# single one that has the coinbase txid among its vin (inputs), then use jq
# once more to print only its txid
echo "$block257tx" | jq ".[] | select(.vin[].txid==\"$coinbase_txid\")" | jq -r '.txid'
```
And this is the answer:

`c54714cb1373c2e3725261fe201f267280e21350bdf2df505da8483a6a4805fc`
Question 7: One UTXO in block 123,321 has never been spent. What is that UTXO's address?

Only one single output remains unspent from block 123,321. What address was it sent to?

For this one we use the gettxout command, which returns data about a still-unspent UTXO, looping over the transactions one at a time:

```sh
blockhash=$(bitcoin-cli getblockhash 123321)
blockinfo=$(bitcoin-cli getblock $blockhash 3)
transaction=$(echo $blockinfo | jq '.tx[]')
txid=$(echo $transaction | jq -r '.txid')

for item in $txid; do
  bitcoin-cli gettxout "$item" 0 | jq -r '.scriptPubKey.address'
done
```
`1FPDNNmgwEnKuF7GQzSqUcVQdzSRhz4pgX` -- there's our answer. This problem would surely break if more than one UTXO were still unspent, since we print one for every transaction! Haha.
Question 8: Which public key signed the first input of transaction `e5969add849689854ac7f28e45628b89f7454b83e9699e551ce14b6f90c86163`?

This one is pretty tough. At first I opened it in a mempool explorer and found that it is a force-closed Lightning channel transaction, which must involve a multisig. Alright, let's look at the transaction data first:

```sh
$ bitcoin-cli getrawtransaction "e5969add849689854ac7f28e45628b89f7454b83e9699e551ce14b6f90c86163" 1
{
  "blockhash": "0000000000000000000b0e5eec04d784347ef564b3ddb939eca019a66c9cedbe",
  "blocktime": 1610254919,
  "confirmations": 161208,
  "hash": "881d7ab9ad60d6658283dbbad345f6f28491a264cd11d060b4fb4f121851a7f3",
  "hex": "020000000001018b1aab3917e6595816c63bf9dd0ebf4303f2b2a23103aee1500282c944affd71000000000000000000010e26000000000000160014c47082b5a49065d85ab65730e8c28bb0b4810b9603473044022050b45d29a3f2cf098ad0514dff940c78046c377a7e925ded074ad927363dc2dd02207c8a8ca7d099483cf3b50b00366ad2e2771805d6be900097c2c57bc58b4f34a50101014d6321025d524ac7ec6501d018d322334f142c7c11aa24b9cffec03161eca35a1e32a71f67029000b2752102ad92d02b7061f520ebb60e932f9743a43fee1db87d2feb1398bf037b3f119fc268ac00000000",
  "locktime": 0,
  "size": 237,
  "time": 1610254919,
  "txid": "e5969add849689854ac7f28e45628b89f7454b83e9699e551ce14b6f90c86163",
  "version": 2,
  "vin": [
    {
      "scriptSig": { "asm": "", "hex": "" },
      "sequence": 0,
      "txid": "71fdaf44c9820250e1ae0331a2b2f20343bf0eddf93bc6165859e61739ab1a8b",
      "txinwitness": [
        "3044022050b45d29a3f2cf098ad0514dff940c78046c377a7e925ded074ad927363dc2dd02207c8a8ca7d099483cf3b50b00366ad2e2771805d6be900097c2c57bc58b4f34a501",
        "01",
        "6321025d524ac7ec6501d018d322334f142c7c11aa24b9cffec03161eca35a1e32a71f67029000b2752102ad92d02b7061f520ebb60e932f9743a43fee1db87d2feb1398bf037b3f119fc268ac"
      ],
      "vout": 0
    }
  ],
  "vout": [
    {
      "n": 0,
      "scriptPubKey": {
        "address": "bc1qc3cg9ddyjpjask4k2ucw3s5tkz6gzzukzmg49s",
        "asm": "0 c47082b5a49065d85ab65730e8c28bb0b4810b96",
        "desc": "addr(bc1qc3cg9ddyjpjask4k2ucw3s5tkz6gzzukzmg49s)#c68e8rrv",
        "hex": "0014c47082b5a49065d85ab65730e8c28bb0b4810b96",
        "type": "witness_v0_keyhash"
      },
      "value": 9.742e-05
    }
  ],
  "vsize": 121,
  "weight": 483
}
```
We already know the data will be in `txinwitness`, where the first items are signatures and the last one holds the public key -- but wait, there are several public keys in here, aren't there?

```sh
txinfo=$(bitcoin-cli getrawtransaction "e5969add849689854ac7f28e45628b89f7454b83e9699e551ce14b6f90c86163" 1)
scriptpubkey=$(echo $txinfo | jq -r .vin[].txinwitness[2])
echo $scriptpubkey
```
`6321025d524ac7ec6501d018d322334f142c7c11aa24b9cffec03161eca35a1e32a71f67029000b2752102ad92d02b7061f520ebb60e932f9743a43fee1db87d2feb1398bf037b3f119fc268ac` -- alright, let's unpack this. Reading bip-143, there is an example layout for this kind of sequence, and it turns out what we want is characters 5 through 67. We can use Bash string slicing to cut out just the part we need and submit it:

```sh
$ echo ${scriptpubkey:4:66}
025d524ac7ec6501d018d322334f142c7c11aa24b9cffec03161eca35a1e32a71f
```
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28GraphQL vs REST
Today I saw this: https://github.com/stickfigure/blog/wiki/How-to-(and-how-not-to)-design-REST-APIs
And it reminded me why GraphQL is so much better.
It has also reminded me why HTTP is so confusing and awful as a protocol, especially as a protocol for structured data APIs, with all its status codes and headers and bodies and querystrings and content-types -- but let's not talk about that for now.
People complain about GraphQL being great for frontend developers and bad for backend developers, but I don't know who these people are that apparently love reading guides like the one above on how to properly construct ad-hoc path routers, decide how to properly build the JSON, what to include and in which circumstance, and what status codes and headers to use -- all without having any idea of what the frontend or the API consumer will want to do with their data.
It is a much less stressful environment, one in which we can just actually perform the task and fit the data into a preexistent schema with types and a structure that we don't have to decide again and again while anticipating, with very incomplete knowledge, the usage of an extraneous person -- i.e., an environment with GraphQL, or something like GraphQL.
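For illustration only, a hedged sketch of the kind of preexistent schema the paragraph above has in mind -- the type and field names are invented, not taken from any real API:

```graphql
# Schema defined once; consumers then ask for exactly the fields they need.
type Post {
  id: ID!
  title: String!
}

type User {
  id: ID!
  name: String!
  posts(limit: Int): [Post!]!
}

type Query {
  user(id: ID!): User
}

# An example query a consumer might send:
# { user(id: "1") { name posts(limit: 3) { title } } }
```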
By the way, I know there are some people that say that these HTTP JSON APIs are not the real REST, but that is irrelevant for now.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28nostr - Notes and Other Stuff Transmitted by Relays
The simplest open protocol that is able to create a censorship-resistant global "social" network once and for all.
It doesn't rely on any trusted central server, hence it is resilient; it is based on cryptographic keys and signatures, so it is tamperproof; it does not rely on P2P techniques, therefore it works.
Very short summary of how it works, if you don't plan to read anything else:
Everybody runs a client. It can be a native client, a web client, etc. To publish something, you write a post, sign it with your key and send it to multiple relays (servers hosted by someone else, or yourself). To get updates from other people, you ask multiple relays if they know anything about these other people. Anyone can run a relay. A relay is very simple and dumb. It does nothing besides accepting posts from some people and forwarding to others. Relays don't have to be trusted. Signatures are verified on the client side.
This is needed because other solutions are broken:
The problem with Twitter
- Twitter has ads;
- Twitter uses bizarre techniques to keep you addicted;
- Twitter doesn't show an actual historical feed from people you follow;
- Twitter bans people;
- Twitter shadowbans people;
- Twitter has a lot of spam.
The problem with Mastodon and similar programs
- User identities are attached to domain names controlled by third-parties;
- Server owners can ban you, just like Twitter; Server owners can also block other servers;
- Migration between servers is an afterthought and can only be accomplished if servers cooperate. It doesn't work in an adversarial environment (all followers are lost);
- There are no clear incentives to run servers, therefore they tend to be run by enthusiasts and people who want to have their name attached to a cool domain. Then, users are subject to the despotism of a single person, which is often worse than that of a big company like Twitter, and they can't migrate out;
- Since servers tend to be run amateurishly, they are often abandoned after a while — which is effectively the same as banning everybody;
- It doesn't make sense to have a ton of servers if updates from every server will have to be painfully pushed (and saved!) to a ton of other servers. This point is exacerbated by the fact that servers tend to exist in huge numbers, therefore more data has to be passed to more places more often;
- For the specific example of video sharing, ActivityPub enthusiasts realized it would be completely impossible to transmit video from server to server the way text notes are, so they decided to keep the video hosted only from the single instance where it was posted to, which is similar to the Nostr approach.
The problem with SSB (Secure Scuttlebutt)
- It doesn't have many problems. I think it's great. In fact, I was going to use it as a basis for this, but
- its protocol is too complicated because it wasn't conceived as an open protocol at all. It was just written in JavaScript, probably in a quick way, to solve a specific problem, and grew from that; therefore it has weird and unnecessary quirks, like signing a JSON string which must strictly follow the rules of ECMA-262 6th Edition;
- It insists on having a chain of updates from a single user, which feels unnecessary to me and something that adds bloat and rigidity to the thing — each server/user needs to store all the chain of posts to be sure the new one is valid. Why? (Maybe they have a good reason);
- It is not as simple as Nostr, as it was primarily made for P2P syncing, with "pubs" being an afterthought;
- Still, it may be worth considering using SSB instead of this custom protocol and just adapting it to the client-relay server model, because reusing a standard is always better than trying to get people in a new one.
The problem with other solutions that require everybody to run their own server
- They require everybody to run their own server;
- Sometimes people can still be censored in these because domain names can be censored.
How does Nostr work?
- There are two components: clients and relays. Each user runs a client. Anyone can run a relay.
- Every user is identified by a public key. Every post is signed. Every client validates these signatures.
- Clients fetch data from relays of their choice and publish data to other relays of their choice. A relay doesn't talk to another relay, only directly to users.
- For example, to "follow" someone a user just instructs their client to query the relays it knows for posts from that public key.
- On startup, a client queries data from all relays it knows for all users it follows (for example, all updates from the last day), then displays that data to the user chronologically.
- A "post" can contain any kind of structured data, but the most used ones are going to find their way into the standard so all clients and relays can handle them seamlessly.
How does it solve the problems the networks above can't?
- Users getting banned and servers being closed
- A relay can block a user from publishing anything there, but that has no effect on them as they can still publish to other relays. Since users are identified by a public key, they don't lose their identities and their follower base when they get banned.
- Instead of requiring users to manually type new relay addresses (although this should also be supported), whenever someone you're following posts a server recommendation, the client should automatically add that to the list of relays it will query.
- If someone is using a relay to publish their data but wants to migrate to another one, they can publish a server recommendation to that previous relay and go;
- If someone gets banned from many relays such that they can't get their server recommendations broadcasted, they may still let some close friends know through other means with which relay they are publishing now. Then, these close friends can publish server recommendations to that new server, and slowly, the old follower base of the banned user will begin finding their posts again from the new relay.
-
All of the above is valid too for when a relay ceases its operations.
-
Censorship-resistance
- Each user can publish their updates to any number of relays.
-
A relay can charge a fee (the negotiation of that fee is outside of the protocol for now) from users to publish there, which ensures censorship-resistance (there will always be some Russian server willing to take your money in exchange for serving your posts).
-
Spam
-
If spam is a concern for a relay, it can require payment for publication or some other form of authentication, such as an email address or phone, and associate these internally with a pubkey that then gets to publish to that relay — or other anti-spam techniques, like hashcash or captchas. If a relay is being used as a spam vector, it can easily be unlisted by clients, which can continue to fetch updates from other relays.
-
Data storage
- For the network to stay healthy, there is no need for hundreds of active relays. In fact, it can work just fine with just a handful, given the fact that new relays can be created and spread through the network easily in case the existing relays start misbehaving. Therefore, the amount of data storage required, in general, is relatively less than Mastodon or similar software.
-
Or considering a different outcome: one in which there exist hundreds of niche relays run by amateurs, each relaying updates from a small group of users. The architecture scales just as well: data is sent from users to a single server, and from that server directly to the users who will consume that. It doesn't have to be stored by anyone else. In this situation, it is not a big burden for any single server to process updates from others, and having amateur servers is not a problem.
-
Video and other heavy content
-
It's easy for a relay to reject large content, or to charge for accepting and hosting large content. When information and incentives are clear, it's easy for the market forces to solve the problem.
-
Techniques to trick the user
- Each client can decide how to best show posts to users, so there is always the option of just consuming what you want in the manner you want — from using an AI to decide the order of the updates you'll see to just reading them in chronological order.
FAQ
- This is very simple. Why hasn't anyone done it before?
I don't know, but I imagine it has to do with the fact that people making social networks are either companies wanting to make money or P2P activists who want to make a thing completely without servers. They both fail to see the specific mix of both worlds that Nostr uses.
- How do I find people to follow?
First, you must know them and get their public key somehow, either by asking or by seeing it referenced somewhere. Once you're inside a Nostr social network you'll be able to see them interacting with other people and then you can also start following and interacting with these others.
- How do I find relays? What happens if I'm not connected to the same relays someone else is?
You won't be able to communicate with that person. But there are hints on events that can be used so that your client software (or you, manually) knows how to connect to the other person's relay and interact with them. There are other ideas on how to solve this too in the future but we can't ever promise perfect reachability, no protocol can.
- Can I know how many people are following me?
No, but you can get some estimates if relays cooperate in an extra-protocol way.
- What incentive is there for people to run relays?
The question is misleading. It assumes that relays are free dumb pipes that exist such that people can move data around through them. In this case yes, the incentives would not exist. This in fact could be said of DHT nodes in all other p2p network stacks: what incentive is there for people to run DHT nodes?
- Nostr enables you to move between server relays or use multiple relays but if these relays are just on AWS or Azure what’s the difference?
There are literally thousands of VPS providers scattered all around the globe today, there is not only AWS or Azure. AWS or Azure are exactly the providers used by single centralized service providers that need a lot of scale, and even then not just these two. For smaller relay servers any VPS will do the job very well.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Parallel Chains
We want merged-mined blockchains. We want them because it is possible to do things in them that aren't doable in the normal Bitcoin blockchain, which is rightfully too expensive for them. There are other things besides the world money that could benefit from a "distributed ledger" -- just like people believed in 2013 -- such as issued assets and domain names (just the most obvious examples).
On the other hand we can't have -- like people believed in 2013 -- a copy of Bitcoin for every little idea with its own native token that is mined by proof-of-work and must get off the ground from being completely valueless into having some value by way of a miracle that operated only once with Bitcoin.
It's also not a good idea to have blockchains with custom merged-mining protocols (like Namecoin and Rootstock) that require Bitcoin miners to run their software and be active participants and miners for that other network besides Bitcoin, because it's too cumbersome for everybody.
Luckily Ruben Somsen invented this protocol for blind merged-mining that solves the issue above. Although it doesn't solve the fact that each parallel chain still needs some form of "native" token to pay miners -- or it must use another method that doesn't use a native token, such as trusted payments outside the chain.
How does it work
With the `SIGHASH_NOINPUT`/`SIGHASH_ANYPREVOUT` soft-fork[^eltoo] it becomes possible to create presigned transactions that aren't related to any previous UTXO.

Then you create a long sequence of transactions (sufficient to last for many many years), each with an `nLockTime` of 1 and each spending the next (you create them from the last to the first). Since their `scriptSig` (the unlocking script) will use `SIGHASH_ANYPREVOUT`, you can obtain a transaction id/hash that doesn't include the previous TXO. You can, for example, in a sequence of transactions `A0-->B` (B spends output 0 from A), include the signature for "spending A0 on B" inside the `scriptPubKey` (the locking script) of "A0".

With the contraption described above it is possible to make a long string of transactions that everybody will know (and know how to generate), where each transaction can only be spent by the next, previously decided transaction, no matter what anyone does, and there must always be at least one block of difference between them.
Then you combine it with `RBF`, `SIGHASH_SINGLE` and `SIGHASH_ANYONECANPAY`, so parallel chain miners can add inputs and outputs and compete on fees by including their own outputs and getting change back, while at the same time writing a hash of the parallel block in the change output. Then everything works perfectly: with everybody trying to spend the same output from the long string, each with a different parallel block hash, only the highest bidder will get their transaction included on the Bitcoin chain, and thus only one parallel block will be mined.

See also
[^eltoo]: The same thing used in Eltoo.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The logical structure of the textbook

All textbooks and courses present their contents according to a prior logical organization: a scheme of all the content they deem relevant, everything neatly arranged into topics and subtopics in the logical order that most closely matches the natural order of things. Picture the table of contents of a manual or textbook.

My experience is that this method works very well for making sure nobody understands anything. The perfect logical organization of a field of knowledge is the final result of study, not its starting point. The people who write these manuals and teach these courses, even when they know what they are talking about (an apparently rare occurrence), do so from their own point of view, reached after a lifetime of dedication to the subject (or else by copying other manuals and textbooks, which I would guess is the most common method).

For the novice, the best way to understand something is through immersion in micro-topics, without much notion of that topic's position in the overall hierarchy of the science.

- Revista Educativa, an example of how not to teach anything to children.
- Zettelkasten, order emerging from chaos, instead of themes fitting into a preexisting order.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Boardthreads
This was a very badly done service for turning a Trello list into a helpdesk UI.
Surprisingly, it had more paying users than Websites For Trello, which I was working on simultaneously and to which I was dedicating much more time.
The Neo4j database I used for this was a very poor choice; it was probably the cause of all the bugs.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Channels without HTLCs
HTLCs below the dust limit are not possible, because they're uneconomical.
So currently, whenever a payment below the dust limit is to be made, Lightning peers adjust their commitment transactions to pay that amount as fees in case the channel is closed. That's a form of reserving that amount and incentivizing peers to resolve the payment, either successfully (in which case it goes to the receiving node's balance) or not (it then goes back to the sender's balance).
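As a toy illustration of that accounting (the dust threshold and all numbers here are illustrative, not taken from any particular implementation):

```python
DUST_LIMIT_SATS = 546  # illustrative dust threshold

def commitment_outputs(to_local, to_remote, htlc_amounts):
    """Sub-dust HTLCs get no output; their value is added to the fees."""
    outputs = {"to_local": to_local, "to_remote": to_remote}
    fees = 0
    for i, amount in enumerate(htlc_amounts):
        if amount < DUST_LIMIT_SATS:
            fees += amount  # reserved as fees, collectable by miners on close
        else:
            outputs[f"htlc_{i}"] = amount
    return outputs, fees

# a 100 sat in-flight payment becomes 100 sats of potential fees
print(commitment_outputs(50_000, 49_900, [100]))
```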
SOLUTION
I haven't thought too much about whether this is possible in the current implementation of Lightning channels, but in the context of Eltoo it seems possible.
Eltoo channels have UPDATE transactions that can be published to the blockchain and SETTLEMENT transactions that spend them (after a relative time) to each peer. The barebones script for UPDATE transactions is something like (copied from the paper, because I don't understand these things):
    OP_IF
        # to spend from a settlement transaction (presigned)
        10 OP_CSV
        2 As,i Bs,i 2 OP_CHECKMULTISIGVERIFY
    OP_ELSE
        # to spend from a future update transaction
        <Si+1> OP_CHECKLOCKTIMEVERIFY
        2 Au Bu 2 OP_CHECKMULTISIGVERIFY
    OP_ENDIF
During a payment of 1 satoshi it could be updated to something like (I'll probably get this thing completely wrong):
    OP_HASH256 <payment_hash> OP_EQUAL
    OP_IF
        # for B to spend from settlement transaction 1 in case the payment went through
        # and they have a preimage
        10 OP_CSV
        2 As,i1 Bs,i1 2 OP_CHECKMULTISIGVERIFY
    OP_ELSE
        OP_IF
            # for A to spend from settlement transaction 2 in case the payment didn't go through
            # and the other peer is uncooperative
            <now + 1day> OP_CHECKLOCKTIMEVERIFY
            2 As,i2 Bs,i2 2 OP_CHECKMULTISIGVERIFY
        OP_ELSE
            # to spend from a future update transaction
            <Si+1> OP_CHECKLOCKTIMEVERIFY
            2 Au Bu 2 OP_CHECKMULTISIGVERIFY
        OP_ENDIF
    OP_ENDIF
Then peers would have two presigned SETTLEMENT transactions, 1 and 2 (with different signature pairs, as badly shown in the script). On SETTLEMENT 1, funds are, say, 999sat for A and 1001sat for B, while on SETTLEMENT 2 funds are 1000sat for A and 1000sat for B.
As soon as B gets the preimage from the next peer in the route it can give it to A and they can sign a new UPDATE transaction that replaces the above gimmick with something simpler, without hashes involved.
If the preimage doesn't come within a viable time, peers can agree to make a new UPDATE transaction anyway. Otherwise A will have to close the channel, which may be bad, but B wasn't a good peer anyway.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
How IPFS is broken
I once fell for this talk about "content-addressing". It sounds very nice. You know a certain file exists, you know there are probably people who have it, but you don't know where or if it is hosted on a domain somewhere. With content-addressing you can just say "start" and the download will start. You don't have to care.
Other magic properties that address common frustrations: webpages don't go offline, links don't break, valuable content always finds its way, other people will distribute your website for you, any content can be transmitted easily to people near you without anyone having to rely on third-party centralized servers.
But you know what? Saying a thing is good doesn't automatically make it possible and working. For example: saying stuff is addressed by its content doesn't change the fact that the internet is "location-addressed" and you still have to know where the peers that have the data you want are and connect to them.
And what is the solution for that? A DHT!
DHT?
Turns out DHTs have a terrible incentive structure (as you would expect: no one wants to hold and serve data they don't care about to others for free) and the IPFS experience proves it doesn't work even in a small network like the IPFS of today.
If you have run an IPFS client you'll notice how much it clogs your computer. Or maybe you don't, if you are very rich and have a really powerful computer, but still, it's not something suitable to be run on the entire world, and on web pages, and servers, and mobile devices. I imagine there may be a lot of unoptimized code and technical debt responsible for these and other problems, but the DHT is certainly the biggest part of it. IPFS can open up to 1000 connections by default and suck up all your bandwidth -- and that's just for exchanging keys with other DHT peers.
Even if you're in the "client" mode and limit your connections you'll still get overwhelmed by connections that do stuff I don't understand -- and it makes no sense to run an IPFS node as a client anyway: that defeats the entire purpose of making every person host the files they have and of content-addressability in general, centralizes the network and brings back the client/server dichotomy that IPFS was created to replace.
Connections?
So, DHTs are a fatal flaw for a network that plans to be big and interplanetary. But that's not the only problem.
Finding content on IPFS is the slowest experience ever, and for some reason I don't understand, downloading is even slower. Even if you are on the same LAN as another machine that has the content you need, it will still take hours to download some small file that you could get in seconds with `scp` -- and that's assuming IPFS managed to find the other machine, otherwise your command will just be stuck for days.

Now even if you ignore that IPFS objects should be content-addressable and not location-addressable and, knowing which peer has the content you want, you go there and explicitly tell IPFS to connect to that peer directly, maybe you can get some seconds of (slow) download, but then IPFS will drop the connection and the download will stop. Sometimes -- but not always -- it helps to add the peer address to your bootstrap nodes list (but note this isn't something you should be doing at all).
IPFS Apps?
Now consider the kind of marketing IPFS does: it tells people to build "apps" on IPFS. It sponsors "databases" on top of IPFS. It basically advertises itself as a place where developers can just connect their apps to and all users will automatically be connected to each other, data will be saved somewhere between them all and immediately available, everything will work in a peer-to-peer manner.
Except it doesn't work that way at all. "libp2p", the IPFS library for connecting people, is broken and is rewritten every 6 months, but they keep their beautiful landing pages that say everything works magically and you can just plug it in. I'm not saying they should have everything perfect, but at least they should be honest about what they truly have in place.
It's impossible to connect to other people, after years there's no js-ipfs and go-ipfs interoperability (and yet they advertise there will be python-ipfs, haskell-ipfs, whoknowswhat-ipfs), connections get dropped and many other problems.
So basically all IPFS "apps" out there are just apps that want to connect two peers but can't do it manually because browsers and the IPv4/NAT network don't provide easy ways to do it and WebRTC is hard and requires servers. They have nothing to do with "content-addressing" anything, they are not trying to build "a forest of merkle trees" nor to distribute or archive content so it can be accessed by all. I don't understand why IPFS has changed its core message to this "full-stack p2p network" thing instead of the basic content-addressable idea.
IPNS?
And what about the database stuff? How can you "content-address" a database with values that are supposed to change? Their approach is to just save all values, past and present, and then use new DHT entries to communicate which is the newest value. This is the IPNS thing.
Apparently just after coming up with the idea of content-addressability IPFS folks realized this would never be able to replace the normal internet as no one would even know what kinds of content existed or when some content was updated -- and they didn't want to coexist with the normal internet, they wanted to replace it all because this message is more bold and gets more funding, maybe?
So they invented IPNS, the name system that introduces location-addressability back into the system that was supposed to be only content-addressable.
And how do they manage to do it? Again, DHTs. And does it work? Not really. It's limited, slow, much slower than normal content-addressing fetches, and most of the time it doesn't even work after hours. And still, although developers will tell you it is not working yet, the IPFS marketing talks about it as if it were a thing.
Archiving content?
The main use case I had for IPFS was to store content that I personally cared about and that other people might care too, like old articles from dead websites, and videos, sometimes entire websites before they're taken down.
So I did that. Over many months I archived stuff on IPFS. The IPFS API and CLI don't make it easy to track where things are. The `pin` command doesn't help, as it just throws your pinned hash into a sea of hashes and subhashes and you're never able to find again what you have pinned.

The IPFS daemon has a fake filesystem that is half-baked in functionality but allows you to locally address things by names in a tree structure. Very hard to update or add new things to, but still doable. It allows you to give names to hashes, basically. I even began to write a wrapper for it, but suddenly, after many weeks of careful content curation and distribution, all my entries in the fake filesystem were gone.

Despite not having lost any of the files, I did lose everything, as I couldn't find them in the sea of hashes I had on my own computer. After some digging and help from IPFS developers I managed to recover part of it, but it involved hacks. My things vanished because of a bug in the fake filesystem. The bug was fixed, but soon after I experienced a similar (new) bug. After that I even tried to build a service for hash archival and discovery, but as all the problems listed above began to pile up I eventually gave up. There were also problems of content canonicalization, problems with the code the IPFS daemon uses to serve default HTML content over HTTP, problems with the IPFS browser extension, and others.
Future-proof?
One of the core advertised features of IPFS was that it made content future-proof. I'm not sure they used this expression, but basically you have content, you hash that, you get an address that never expires for that content, now everybody can refer to the same thing by the same name. Actually, it's better: content is split and hashed in a merkle-tree, so there's fine-grained deduplication, people can store only chunks of files and when a file is to be downloaded lots of people can serve it at the same time, like torrents.
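As a rough sketch of that chunk-and-hash idea (this is not IPFS's actual chunking or multihash format; the chunk size and pairing rule here are simplified assumptions):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(content: bytes, chunk_size: int = 4) -> bytes:
    """Split content into chunks, hash each, then hash pairs up to a root."""
    level = [sha256(content[i:i + chunk_size])
             for i in range(0, len(content), chunk_size)]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# the root works as the content's address; shared chunks deduplicate
print(merkle_root(b"hello world, this is some content").hex())
```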
But then come the protocol upgrades. IPFS has used different kinds of hashing algorithms, different ways to format the hashes, and will change the default algorithm for building the merkle-trees, so basically the same content now has a gigantic number of possible names/addresses, which defeats the entire purpose, and yes, files hashed using different strategies aren't automagically compatible.
Actually, the merkle algorithm could have been changed by each person on a file-by-file basis since the beginning (you could for example split a book file by chapter or page instead of by chunks of bytes) -- although probably no one ever did that. I know it's not easy to come up with the perfect hashing strategy on the first try, but the way these matters are being approached makes me wonder whether IPFS promoters are really worried about future-proofing at all, or maybe we're just in a beta phase forever.
Ethereum?
This is also a big problem. IPFS is built by Ethereum enthusiasts. I can't read the minds of the people behind IPFS, but I would imagine they have a poor understanding of incentives, like the Ethereum people, and they tend towards scammer-like behavior: getting a ton of funds from investors in exchange for promises they don't know they can fulfill (like Filecoin and IPFS itself) based on half-truths, changing stuff in the middle of the road because some top managers decided they wanted to change it (move fast and break things) and squatting fancy names like "distributed web".
The way they market IPFS (which is not the main thing IPFS was initially designed to do) as a "peer-to-peer cloud" is very seductive for Ethereum developers just like Ethereum itself is: as a place somewhere that will run your code for you so you don't have to host a server or have any responsibility, and then Infura will serve the content to everybody. In the same vein, Infura is also hosting and serving IPFS content for Ethereum developers these days for free. Ironically, just like the Ethereum hoax peer-to-peer money, IPFS peer-to-peer network may begin to work better for end users as things get more and more centralized.
More about IPFS problems:
- IPFS problems: Too much immutability
- IPFS problems: General confusion
- IPFS problems: Shitcoinery
- IPFS problems: Community
- IPFS problems: Pinning
- IPFS problems: Conceit
- IPFS problems: Inefficiency
- IPFS problems: Dynamic links
See also
- A crappy course on torrents, on the protocol that has done most things right
- The Tragedy of IPFS in a series of links, an ongoing Twitter thread.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Criteria for activating Drivechain on Bitcoin
Drivechain is, in essence, just a way to give Bitcoin users the option to deposit their coins in a hashrate escrow. If Bitcoin is about coin ownership, in theory there should be no objection from anyone to users having that option: my keys, my coins, etc. In other words: even if you think hashrate escrows are a terrible idea and miners will steal all the coins deposited in them, you shouldn't care about what other people do with their own money.
There are only two reasonable objections that could be raised by normal Bitcoin users against Drivechain:
- Drivechain adds code complexity to `bitcoind`
- Drivechain perverts miner incentives of the Bitcoin chain
If these two objections can be reasonably answered there remains no reason for not activating the Drivechain soft-fork.
1
To address 1 we can just take a look at the code once it's done (which I haven't) but from my understanding the extra validation steps needed for ensuring hashrate escrows work are very minimal and self-contained, they shouldn't affect anything else and the risks of introducing some catastrophic bug are roughly zero (or the same as the risks of any of the dozens of refactors that happen every week on Bitcoin Core).
For the BMM/BIP-301 part, again the surface is very small, but we arguably do not need it at all, since anyprevout (once that is merged) enables blind merge-mining in a way that is probably better than BIP-301, and that soft-fork is also very simple, plus already loved and accepted by most of the Bitcoin community, implemented and reviewed on Bitcoin Inquisition and live on the official Bitcoin Core signet.
2
To address 2 we must only point that BMM ensures that Bitcoin miners don't have to do any extra work to earn basically all the fees that would come from the sidechain, as competition for mining sidechain blocks would bid the fee paid to Bitcoin miners up to the maximum economical amount. It is irrelevant if there is MEV on the sidechain or not, everything that reaches the Bitcoin chain does that in form of fees paid in a single high-fee transaction paid to any Bitcoin miner, regardless of them knowing about the sidechain or not. Therefore, there are no centralization pressure or pervert mining incentives that can affect Bitcoin land.
Sometimes it's argued that Drivechain may facilitate the occurrence of a transaction paying a fee so high it would create incentives for reorging the Bitcoin chain. There is no reason to believe Drivechain would make this more likely than an actual attack anyone can already do today or, as has happened, some rich person typing numbers wrong in his wallet. In fact, if a drivechain is consistently paying high fees on its BMM transactions, that is an incentive for Bitcoin miners to keep mining those transactions one after the other and not harm the users of the sidechain by reorging Bitcoin.
Moreover, there are many factors that exist today that can be seen as centralization vectors for Bitcoin mining: arguably one of them is non-blind merge mining, of which we have a (very convoluted) example on the Stacks shitcoin, and introducing the possibility of blind merge-mining on Bitcoin would basically remove any reasonable argument for having such schemes, therefore reducing the centralizing factor of them.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
A vision for content discovery and relay usage for basic social-networking in Nostr
Or how to make a basic "social-networking" application using the Nostr protocol that is safe and promotes decentralization.
The basic app views
Suppose a basic "social-networking" app is something like Twitter. In it, one has basically 3 views:
- A home feed that shows all notes from everybody you follow;
- A profile view from a specific user that shows all notes from that user;
- A replies view that shows all replies to one specific note.
Some Nostr clients may want to also provide another view, the global feed which shows posts from everybody.
A simple classification of relays
And suppose that all existing relays can be classified in 3 groups (according to one's subjective evaluation):
- spammy relays, in which people of any kind can post whatever they want with no filters at all;
- safe relays, in which there are some barriers to entry, like requiring a fee, or requiring some cumbersome user registration process, and spammers or people who post bad things are banned -- but this is still a relay fundamentally open to anyone (although this is also subjective depending on the kind of restrictions);
- closed relays, in which only certain kinds of people enter, for example, members of a group of friends or a closed online community.
How to follow and find posts from a given profile
To follow someone on Nostr, it is necessary to know one or more relays in which that person is publishing their notes, otherwise it is impossible to fetch anything from them.
When a user starts to follow someone, that may happen in 4 different ways:
- from seeing that person in the app
- using an `nprofile` URI
- using a NIP-05 address
- using a bare pubkey (`npub`)
Situation 1 may happen when that person is seen in the replies to yours or someone else's post, on a global feed post, or in a note referenced or republished by someone else. When that happens, it is expected that the references (in `e` and `p` tags) contain relay URLs. We must then inject that information to tentatively associate that person with a relay URL at that first contact.

In situations 2 and 3 both the `nprofile` and the NIP-05 addresses should contain a list of preferred relays for that person, so we can bootstrap their relay list based on that.

In situation 4 there is no relay list, so we must either prompt the user through an annoying popup or something -- or the client can try searching for that pubkey in one of its known relays. This remains an option for the other methods too.
Once we have relay URLs for a given profile we can use these relays to query notes from that pubkey. As time passes that user may migrate to other relays, or it may become known that the user is also posting to other relays. To make sure these things are discovered, we must pay attention to hints sent in the tags of all events seen everywhere -- from anyone -- and also to events of kinds 2 and 3, and update accordingly our local database of the relationships between profiles and relays.
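A sketch of what that local bookkeeping could look like, harvesting hints from `p` tags (the data structure is my own illustration, not a NIP):

```python
from collections import defaultdict

# pubkey -> set of relay URLs where that pubkey is expected to publish
relays_for = defaultdict(set)

def ingest_p_tag_hints(event: dict):
    """Harvest relay hints from the `p` tags of any event seen anywhere."""
    for tag in event.get("tags", []):
        # a `p` tag looks like ["p", <pubkey>, <optional relay hint>]
        if tag[0] == "p" and len(tag) >= 3 and tag[2]:
            relays_for[tag[1]].add(tag[2])

ingest_p_tag_hints({"tags": [["p", "deadbeef...", "wss://relay.example.com"]]})
print(dict(relays_for))  # {'deadbeef...': {'wss://relay.example.com'}}
```

The same kind of update can be applied from `e` tag hints and from kind 2 and 3 events.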
Rendering the app views
From what we've gathered until now, we can easily render the home feed and the profile view. To do that the client just uses its local information about the relationships between profiles and relays and fetches notes:
- for the home feed, from all people we're following;
- for the profile view, from just that specific profile.
Since we'll be asking for very specific data from these relays, we do not care whether they're safe or not. They will never send us spam (and if they do, it will just be filtered out, since it wouldn't match our strict filter).
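In practice both views boil down to strict author filters sent as standard REQ messages, roughly like this sketch (the pubkeys and relay handling are placeholders):

```python
import json

def home_feed_request(followed_pubkeys, limit=50):
    """Kind-1 notes from the people we follow; spam can't match this filter."""
    return json.dumps(["REQ", "home-feed",
                       {"authors": followed_pubkeys, "kinds": [1], "limit": limit}])

def profile_request(pubkey, limit=50):
    return json.dumps(["REQ", "profile",
                       {"authors": [pubkey], "kinds": [1], "limit": limit}])

# sent over a websocket to each relay associated with these profiles, e.g.:
# ws.send(home_feed_request(["pubkey1...", "pubkey2..."]))
print(profile_request("pubkey1..."))
```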
Now whenever the user clicks on a note we will want to display the replies view. In this case we will just query only the safe and the closed relays, since otherwise spam might be injected into the application. The same principle applies to the global feed view.
Other heuristics and corner cases
There are probably many corner cases not covered in this document. This was meant to just describe one way that seems to me to be sufficiently robust for a decentralized Nostr.
For example, how do we display a note that was referenced by someone? If it has a relay hint we query that relay. If it doesn't, we can try the relays associated with the person who has just mentioned it, or the same relay where we've just seen the note that mentioned it -- as, when mentioning it, one might have published it directly to their own relays -- and so on. But all this may fail, and then it is probably not a big deal.
Final thoughts
Most important of all, we must keep in mind that Nostr is just a very loose set of servers with basically no connection between them; there are no guarantees of anything, and the process of keeping connected to others and finding content must be addressed through many different hackish attempts. To write Nostr applications and to use Nostr one must embrace the inherent chaos.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
An imbecile algorithm of evolution
Suppose you want to write the word BANANA starting from OOOOOO and using only random changes to the letters. The changes happen by multiplying the original word into several others, each with a different change.

In the first period, BOOOOO and OOOOZO appear. And then the environment decides that all words that don't start with a B are eliminated. Only BOOOOO remains and the algorithm continues.

It is easy to conceive of the evolution of species happening this way, as long as you always control the part where the environment decides who survives.

However, there are only two options:

- If the environment decides things randomly, the chance of arriving at the correct word using this method is so small that it can be considered null.
- If the environment decides things deliberately, we fall into //intelligent design//.

I believe this is a decent statement of the "no free lunch" argument applied to the critique of Darwinism by William Dembski.

The Darwinist answer consists of saying that there is no such BANANA as a final goal. Words can keep changing randomly, and whatever remains, remains; we cannot say that a goal was reached or missed. And then the defenders of intelligent design will say that the result we arrived at cannot have been the product of a random process. BANANA is qualitatively different from AYZOSO, and there are several ways of "proving" that it is, using mathematical models and such.

I am left with the impression, though, that this matter can only be resolved as a yes or a no through a discussion of the premises, and there comes a point at which no mathematical proofs are possible anymore, only subjectivity.

That reminds me of my humble solution to the problem of the dog that presses keys at random on a keyboard and writes the complete works of Shakespeare: even if it does so, none of it will have any meaning without a human-type intelligence there to read it and realize that it is not a mess but a text that makes sense to him. The miracle happens not at the moment the dog stumbles on the keyboard, but at the moment the man looks at the screen.

Whether the evolution algorithm arrived at the word BANANA or UXJHTR makes no difference to it, but it makes a difference to us, who have a human intelligence and are observing it. The man would also think there is //something// behind that event of the dog typing the works of Shakespeare, and how could anyone in their right mind think otherwise?
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Round-robin table generator
I don't remember exactly when I did this, but I think a friend wanted to do software that would give him money over the internet without having to work. He didn't know how to program. He mentioned this idea he had which was some kind of football championship manager solution, but I heard it like this: a website that generated a round-robin championship table for people to print.
It is actually not obvious how to do it: it requires an algorithm that people will not arrive at casually while thinking, and there was no website doing it in Portuguese at the time. So I made this, and it worked, and it had a couple hundred daily visitors, and it even generated money from Google Ads (not much)!
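For the curious, here is a minimal sketch of the classic "circle method" that solves it (my reconstruction, not the code the site actually used):

```python
def round_robin(teams):
    """Generate rounds where every team plays every other team exactly once."""
    teams = list(teams)
    if len(teams) % 2 == 1:
        teams.append(None)  # "bye" placeholder for an odd number of teams
    n = len(teams)
    rounds = []
    for _ in range(n - 1):
        pairs = [(teams[i], teams[n - 1 - i]) for i in range(n // 2)]
        rounds.append([p for p in pairs if None not in p])
        # keep the first team fixed and rotate all the others
        teams = [teams[0]] + [teams[-1]] + teams[1:-1]
    return rounds

for i, matches in enumerate(round_robin(["A", "B", "C", "D"]), 1):
    print(f"Round {i}: {matches}")
```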
First it was a Python web app running on Heroku, then Heroku started charging or limiting the amount of free time I could have on their platform, so I migrated it to a static site that ran everything on the client. Since I didn't want to waste my Python code that actually generated the tables I used Brython to run Python on JavaScript, which was an interesting experience.
In hindsight I could have just taken one of the many `round-robin` JavaScript libraries that exist on NPM, so eventually, after a couple more years, I did that.

I also removed Google Ads when Google decided it had so many requirements to send me the money that it was impossible, and then the money started to vanish.
-
@ 95a69326:6ac402e2
2024-01-14 11:15:12
Bitcoin Romanticism: My Bitcoin Journey
The Beginnings: Indifference
In 2016, I was introduced to Bitcoin, and I discarded it. Swap the year for any other, and that sentence will resonate with many, maybe even you. So will the rest of my Bitcoin beginnings.
There was nothing exciting about it. From a technology standpoint, the mainstream narrative was portraying Bitcoin as the old dog, the first of its kind, the proof of concept. The word on the street was: “blockchain not Bitcoin”. Bitcoin was not where money could be made either; it had made its run and there was little upside left for it. Instead, the shiny new rocks were all the hype. The message was “crypto not Bitcoin”, and so I followed. As a result, I didn't get involved with Bitcoin again until much later, almost two years later to be exact. Instead, I read up on tokenomics and how the world was going to change because of it. Little did I know I was getting excited about digital barter. Anyhow, I bought some tokens, made money, and then lost the money. Frankly, too much money for a 19-year-old.
Over time, I found my way back to Bitcoin and its community (yes Bitcoin has a community, fight me) and I observed the high conviction of Bitcoiners. Conviction was nothing new, every token pumping cheerleader seemed to have conviction. But this conviction felt different. It felt more grounded, more mature, dare I say, more honest. After reading some well-written articles on the topic, I found myself buying a copy of the Bitcoin Standard, and so began the next phase of this journey.
A New Love
I won’t call myself original for changing my views on Bitcoin by reading the Bitcoin Standard, but here we are. It wasn’t until I approached it from the economics side that I re-considered my interest in Bitcoin. Reading the Bitcoin Standard was my first step down the path of sound money, and everything it entails. I came out on the other side of it with a new-found conviction that restoring sound money would cure all ills. And thus a new-found motivation, no, a mission, to get involved with Bitcoin.
From that point, I saw Bitcoin as one of the most profound technologies of our time. Not because I fell in love with a neat data structure that everyone seemed to have gone crazy about, but because of the deep impact it could have on society, and on humans. The technology wasn’t the motivation, all the rest was.
Still, I fell in love; flaws didn't exist, or they were temporary. Bitcoin was the cure to everything. Wealth inequality? Bitcoin was the cure. World hunger? Bitcoin was the cure. Hitting your pinky on a corner? Bitcoin was the cure. There was a revolution happening in front of me, and I had to get involved. On the positive side, I had found a new interest, and a lot of motivation to work on it. On the negative side, I was freshly out of my Data Science bachelor's, with a lot of hype around my field, and zero interest in pursuing it. After studying abroad for three years, it wasn't with a lot of joy that I moved back in with my parents, and so began the next phase of this journey.
Lightning Strikes: Getting My Hands Dirty
In 2018, the same friend that had introduced me to Bitcoin introduced me to Lightning. This time I had learned my lesson; I didn't discard it, instead I just ignored it for another two years. Funny how these things happen. But now it was 2020, and Lightning was my path to contributing to Bitcoin.
So, I’m a technical guy, I graduated in a technical field, but at that point I wouldn’t have called myself a software developer yet. For me programming was about writing python scripts, adjusting variables in statistical models, printing pretty charts and stitching the whole thing together in a Jupyter Notebook (if you know, you know). Those skills were not the most applicable in the Bitcoin space. After writing a few articles about the Lightning Network and graph theory, I realised my career in Bitcoin was going to be short lived if I didn’t try to build something “useful”. And what better way to learn than to start your own project? And for the next six months, I built, my first Bitcoin/Lightning app. It was called Spark. My biggest accomplishment in those six months, was getting shit for the unoriginal name from Fiatjaf. If you don’t know who Fiatjaf is, he’ll be all the more happy for it. Fast forward to mid-2020 and the app is out, ready for its million users. It was a scary moment, going from a lurker to a builder, and exposing my work to the world. The app gained some traction, it got a few mentions and survived a grand total of 6 months before I shut it down. I realised the app was breaking Twitter’s Terms and Conditions, and I didn’t want to risk getting my life’s work rugged by Twitter. I didn’t know it at the time, but it turned out being a prophetic decision not to build a business on top of Twitter’s API. Something something, meant to be, something something.
The app was dead, a year had passed since leaving university and the perspectives of work in Bitcoin and Lightning were bleak. I headed back to the drawing board, or to the chalk board in this case, as I signed up for two more years of university, and so began the next phase of this journey.
The Rebound
What better way to heal from this recent break-up, than to sign up for a rebound relationship: a Masters program in computer science and big data engineering. I had to arm myself with a few more tools before going back on the offensive in my pursuit of a Bitcoin career, and more studying did not sound like a bad idea.
The rebound was helpful, but my mind was still elsewhere, and the little book of Destiny was aware of it. Just a year later, I received a message from my now co-founder and friend, Mick. Mick had been on his own Bitcoin journey. Our paths had crossed before when his journey took him to Lightning, but this time, the conversation was different. Mick had an idea for a Lightning app, and he wanted to talk about it. The idea was simple: to help people fund ideas that come up on Twitter.
After a short hour of conversation, I was on board, and the next call was scheduled for two weeks later. The bi-weekly calls became weekly, then daily, and just two months later, late August 2021, we started building it. Here was my second chance at contributing to Bitcoin! Six months later, the first version of Geyser was out, and a few months after that our first angel investor, Brad Mills, was giving us a chance, and so began the next phase of this journey.
A New Relationship
Fast forward one year, and the side-project had grown enough to deserve a real chance. Geyser became a company and it was time to commit to it fully. After having completed all my Master courses, and only having the thesis left, I took my friends' and family's advice and completed my Master thesis before pursuing my dream job. No, scratch that, I dropped out and got to work. There wasn't a minute to waste to push Bitcoin adoption, and so began the last phase of this journey.
And They Lived Happily Ever After
For two years, Geyser has continued to grow. It has had its ups and downs, but I couldn’t be happier. I am contributing to Bitcoin, and building one of the coolest (in my very unbiased opinion) projects in the space. And so, they lived happily ever after.
But wait, this is my Bitcoin journey, not my Geyser journey. So what's the ending of that? Well, what initially sparked me to write this piece was seeing all the recent drama around Bitcoin, Lightning and the scaling debate. No, not the 2015-17 scaling debate (aka: The Block-Size Wars), I mean the 2023-24 scaling debate (aka: the Sidechain-Drivechain-Covenant-Ark-Lightning-Soft-Your-Fork Wars). After a few years of working at the application layer of Bitcoin, I observe those debates from a distance. I have not spent enough time studying them to judge the merits of each protocol upgrade, but I wanted to share my journey to provide another perspective on the matter. One that isn't technical.
For all its flaws, I owe Lightning and the Lightning developer community everything good that has happened to me in this space. Lightning achieved what Bitcoin previously hadn't: it opened up the possibilities for building consumer apps, for getting involved with Bitcoin at the application layer rather than at the protocol layer. It made Bitcoin more tangible, more accessible and therefore less intimidating to become a part of.
What was the first Lightning API I used? It was the LNBits and LNPay API. Yes, those mostly custodial layers of abstraction on top of Lightning. But guess what? Those custodial services dropped the first domino of the most important part of my Bitcoin journey, and, if you believe it matters at all, of Geyser's journey.
As important as protocol debates are, let's not demonise imperfect solutions. Let's not turn a blind eye to their qualities. Bitcoin is imperfect, Lightning is imperfect, but they work, and they evolve. And so will continue my Bitcoin journey.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Money Supply Measurement
What if we measured the money supply by the probability of each asset being spent -- or how near it is to the point at which it is spent? Bonds could be money if they're treated as such by their owners, but they are likely not as near the spend-point as cash; other assets can also be considered money, but they might be even farther.
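A toy sketch of the idea, with entirely made-up values and probabilities:

```python
# weight each asset holding by a subjective probability of it being
# spent soon, instead of counting it fully in or fully out
holdings = [
    ("cash",    1_000, 0.90),  # (asset, value, probability of being spent soon)
    ("bonds",  10_000, 0.10),
    ("stocks",  5_000, 0.02),
]

money_supply = sum(value * p for _, value, p in holdings)
print(money_supply)  # 1000*0.9 + 10000*0.1 + 5000*0.02 = 2000.0
```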
-
@ 4657dfe8:47934b3e
2024-01-08 19:37:37
Hey,
We're trying to maintain the most up-to-date and interesting list of various "LN apps" (Lightning and Nostr web apps) that can be used with Alby, calling it our Discover Page
However, for the past few weeks... we couldn't find anything fresh and working. And it is very possible we overlooked some must-visit website.
What should we add to the Discover Page? Asking for nice recommendations.
We'll pay 2,100 sats per each! 😊 Cheers 🚀🚀
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
How being "flexible" can bloat a protocol
(A somewhat absurd example, but you'll get the idea)
Imagine some client decides to add support for a variant of nip05 that checks for values at /.well-known/nostr.yaml besides /.well-known/nostr.json. "Why not?", they think, "I like YAML more than JSON, this can't hurt anyone".
Then some user makes a nip05 file in YAML and it will work on that client, they will think their file is good since it works on that client. When the user sees that other clients are not recognizing their YAML file, they will complain to the other client developers: "Hey, your client is broken, it is not supporting my YAML file!".
The developer of the other client, astonished, replies: "Oh, I am sorry, I didn't know that was part of the nip05 spec!"
The user, thinking it is doing a good thing, replies: "I don't know, but it works on this other client here, see?"
Now the other client adds support. The cycle repeats now with more users making YAML files, more and more clients adding YAML support, for fear of providing a client that is incomplete or provides bad user experience.
The end result of this is that now nip05 extra-officially requires support for both JSON and YAML files. Every client must now check for /.well-known/nostr.yaml too, besides just /.well-known/nostr.json, because a user's key could be in either of these. A lot of work was wasted for nothing. And now, going forward, every new client will require double the work to implement.
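For reference, a minimal sketch of a spec-only nip05 lookup, checking just /.well-known/nostr.json and nothing else (the address is hypothetical):

```python
import json
import urllib.request

def resolve_nip05(address):
    """Return the pubkey for a name@domain nip05 address, or None."""
    name, domain = address.split("@")
    url = f"https://{domain}/.well-known/nostr.json?name={name}"
    with urllib.request.urlopen(url) as resp:
        data = json.loads(resp.read())
    return data.get("names", {}).get(name)

# print(resolve_nip05("bob@example.com"))  # hypothetical address
```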
-
@ 2edbcea6:40558884
2024-01-07 22:05:42
Happy Sunday Folks!
Here’s your #NostrTechWeekly newsletter brought to you by nostr:npub19mduaf5569jx9xz555jcx3v06mvktvtpu0zgk47n4lcpjsz43zzqhj6vzk written by nostr:npub1r3fwhjpx2njy87f9qxmapjn9neutwh7aeww95e03drkfg45cey4qgl7ex2.
The #NostrTechWeekly is focused on the more technical happenings in the nostr-verse.
Let’s dive in!
Recent Upgrades to Nostr (AKA NIPs)
1) (Proposed) Update to NIP-07: Nostr Browser Extension
monlovesmango is proposing updates to the capabilities of the Nostr Browser Extensions. Currently, browser extensions are primarily used to enable usage of Nostr clients without giving that client your Nostr private key.
Since NIP 44 was adopted, there are a few new Nostr actions for creating encrypted content over Nostr that clients will want users to authorize without requiring users to input their private key. This NIP adds those new Nostr actions as something browser extensions should support going forward.
2) (Proposed) NOSTR Decentralized Advertising Network
ionextdebug is proposing creating a marketplace for advertising that runs over Nostr.
For those unfamiliar, there’s a constant bid and ask process for advertising space on platforms like Youtube or Google Adsense. The sell side is offering up ad space (for example 5-25 seconds at the beginning of a Youtube video for users within a specific demographic), and the buy side is bidding to put their ad in that spot. The highest bid wins the spot. This all happens in milliseconds every time you see an ad online.
This proposal outlines how this could be coordinated over Nostr instead of in Google’s walled garden. The use case would require Nostr to operate in ways it wasn’t designed for, so it may struggle to work in practice, but the NIP is early in the process of development.
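To make the mechanism concrete, here is a toy sketch of that winner selection (the names and amounts are hypothetical and not part of the proposed NIP):

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount_sats: int

def resolve_auction(bids):
    """Highest bid wins the ad spot; None if there are no bids."""
    return max(bids, key=lambda b: b.amount_sats, default=None)

bids = [Bid("alice", 1200), Bid("bob", 900), Bid("carol", 1500)]
print(resolve_auction(bids))  # Bid(advertiser='carol', amount_sats=1500)
```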
Notable Projects
nsecBunker Update 🔐
nostr:npub1l2vyh47mk2p0qlsku7hg0vn29faehy9hy34ygaclpn66ukqp3afqutajft announced an update to nsecBunker that gives users an OAuth-like experience when logging into Nostr clients.
Bunker providers are able to provide account creation to Nostr users that feels a lot like signing up for any legacy tech service (username, password, control over what each Nostr client can do on their behalf, etc). This creates a set of Nostr keypairs which are necessary to operate in the Nostr-verse, but they’re stored in the bunker.
When a Nostr client wants a user’s permission to read or write with those keys (e.g., to pull their timeline or help a user post something), the client asks the bunker for permission, which then asks the user if they want to grant that permission. Users are also able to have their bunker remember their selections of grant/reject permissions for a given Nostr client so that things are easier with fewer popups the next time around.
Additionally, users that are created and managed via nsecBunker have a lightning wallet and lightning address created automatically. If nsecBunker could also help folks manage their full Nostr profile (kind 0 data like profile picture, name, etc.), this could be an out-of-the-box solution for clients looking to add a simpler signup/login experience for new users that don't mind giving up some control.
From what I saw in the code, the data (including Nostr private keys custodied by the bunker) are encrypted at rest, but currently that’s one key to encrypt all keys in the bunker. It may make sense in the future to extend the functionality to give users the ability to encrypt their keys with their own password, but there are downsides to that which may make self-custody the easier option.
There’s no lock-in for users since bunker providers can help users download their keys if they want to self custody or move to another provider.
This could be a foundational tool that allows an ecosystem of bunker providers emerge. Bunkers may be offered as part of Nostr clients, or independently; they may be offered for free, or for a fee. But the opt-in and interchangeable nature means that users will be able to choose what works for them, including moving to self-custody once they see the benefits. And self-custody may even just be a self-hosted bunker. 😉
Faaans 🎨
This is another project from nostr:npub1l2vyh47mk2p0qlsku7hg0vn29faehy9hy34ygaclpn66ukqp3afqutajft and although it’s still pre-launch, it has been teased as a Patreon replacement built on Nostr.
It seems like this project is one whose purpose and value is clear, but requires many capabilities that are novel to Nostr to be built first. From what I can see Faaans has built:
- Oauth-like flow using new nsecbunker functionality from 👆
- Ability to gate content until a Nostr user pays for it
- The presence of uploads and NIP 98 HTTP Auth makes me think that there’s a non-Nostr backend for handling uploads (maybe to also help with gating the content?)
Anyway, I’m excited to see how it turns out. Cutting out middlemen and giving creators full control over their relationship with their audience could be the killer app that brings a flood of usage and business to the Nostr ecosystem. 💪
Relay Auth to support privacy 🤫
NIP 42 allows relays to require clients to authenticate the user before the relay will take a requested action. This is useful, for example, for keeping DMs private.
By default all Nostr events can be queried by anyone that can connect to the relay. This is part of the magic of Nostr. But there are some kinds of Nostr events that likely should only be readable by a few folks. In the case of DMs, it makes sense to restrict who can download DM-type Nostr events to only those people involved in the DM.
This can be accomplished by relays requiring the user to authenticate with the relay before it returns DM-type Nostr events for that user. This announcement is that Damus looks to be adding support for Relay Auth, which may help with any number of features that can benefit from more privacy.
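A sketch of the client side of that flow: the relay sends a challenge and the client answers with a signed kind-22242 event (the signing helper and websocket here are hypothetical stubs):

```python
import time

def build_auth_event(relay_url: str, challenge: str, pubkey: str) -> dict:
    """Build the unsigned NIP-42 authentication event."""
    return {
        "kind": 22242,
        "pubkey": pubkey,
        "created_at": int(time.time()),
        "tags": [["relay", relay_url], ["challenge", challenge]],
        "content": "",
    }

# event = build_auth_event("wss://relay.example.com", "abc123", my_pubkey)
# ws.send(json.dumps(["AUTH", sign(event)]))  # sign() and ws are hypothetical
```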
Latest conversations: Web of Providers
The culture of Nostr currently highly values self-custody, decentralization, and “don’t trust, verify” which is very admirable. It’s woven into the protocol itself. There are downsides to operating on these principles, but most of us judge it to be worth it.
It’s usually possible to have our cake and eat it too (have a nice experience and maintain our values), but it requires technological advances and building products based on those advances. This takes time.
In the interim, devs may build something that’s less self-custodial, less decentralized, but far easier to use. Most of the time this middle ground is far better than using some legacy system that’s completely custodial, centralized, and locks users in. But that’s only the case if it’s a stepping stone to a solution that is everything we need.
Example: Custodial Lightning Wallets
When Wallet of Satoshi stopped operating in the US, I definitely felt how vulnerable the zap ecosystem was to some overzealous bureaucrats. But I was able to switch providers in a matter of minutes and get back to sending and receiving zaps.
I definitely considered moving to Zeus or Mutiny or reviving my own Lightning Node on my Bitcoin Node, but it’s all still too difficult to manage a reliable experience passively.
On net, I’d argue that using custodial lightning is better than using Venmo or something to send and receive zaps. At least we’re operating in Bitcoin and not fiat.
A must have: a competitive ecosystem
The trade offs of a middle ground are easier to live with if there’s a robust and competitive ecosystem.
In the case of custodial lightning wallets, we’re encountering the issue of a lack of robustness. There are only a handful of providers that are able to handle zaps. Losing Wallet of Satoshi was a significant blow. We definitely need more lightning wallet providers in order to be robust against nation-state attack.
The ecosystem must also be competitive. People need to be able to switch providers or move to self custody with little or no cost.
NSecBunker
NSecBunker is an excellent example of a middle ground solution that maintains manageable trade offs while working to discover a more perfect solution.
If Nostr users want to use bunkers, it’s trivially easy to spin up a bunker and provide it to them. Existing clients may spin them up, or people looking to start a business in the Nostr ecosystem may create bunkers (maybe with some extra features) and charge for them.
At the end of the day, this technology is built in such a way that interoperability is easy and users aren’t locked in. The lift for Nostr clients to support bunkers is small, so bunkers may soon be as widely used as the Nostr browser extensions. Since users aren’t locked in to any bunker provider, it’ll be easy for a web of providers to pop up and serve users in unique ways to discover what works best.
Build tech to enable a web of providers
Building Nostr tech that has interoperability top of mind supports the Nostr ethos and enables the ecosystem to develop incrementally without giving up our values. Luckily, the protocol itself encourages interoperability with its very architecture. 🫡
Let’s reward devs when we see them doing this important work, they’re building an immense amount right now and it’s an incredible privilege to witness and beta test. 🍻
Until next time
If you want to see something highlighted, if we missed anything, or if you’re building something we didn’t post about, let us know. DMs welcome at nostr:npub19mduaf5569jx9xz555jcx3v06mvktvtpu0zgk47n4lcpjsz43zzqhj6vzk
Stay Classy, Nostr.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Token-Curated Registries
So you want to build a TCR?
TCRs (Token Curated Registries) are a construct for maintaining registries on Ethereum. Imagine you have lots of scissor brands and you want a list with only the good scissors. You want to make sure only the good scissors make it into that list and not the bad scissors. For that, people will tell you, you can just create a TCR of the best scissors!
It works like this: some people hold the token, let's call it Scissor Token. Some other person, let's say a scissor manufacturer, wants to put his scissors on the list; this guy must acquire some Scissor Tokens and "stake" them. Holders of Scissor Tokens are allowed to vote "yes" or "no". If "no", the manufacturer loses his tokens to the holders; if "yes", his tokens are kept in deposit, but his scissor brand gets accepted into the registry.
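A toy sketch of that apply/vote/slash flow (the amounts, names and the even-split slashing rule are my own illustrative choices):

```python
def tcr_apply(registry, votes, applicant, stake):
    """votes: mapping of token holder -> True ("yes") / False ("no")."""
    yes = sum(1 for v in votes.values() if v)
    if yes > len(votes) - yes:
        registry.append(applicant)  # accepted: the stake stays in deposit
        return {"deposit": stake}
    # rejected: the applicant's stake is distributed among the voters
    share = stake / len(votes)
    return {holder: share for holder in votes}

registry = []
print(tcr_apply(registry, {"h1": True, "h2": True, "h3": False},
                "Acme Scissors", 100))  # {'deposit': 100}
print(registry)                         # ['Acme Scissors']
```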
Such a simple process, they say, has strong incentives for being the best possible way of curating a registry of scissors: consumers have the incentive to consult the list because of its high quality; manufacturers have the incentive to buy tokens and apply to join the list because the list is so well-curated and consumers always consult it; token holders want the registry to accept good and reject bad scissors because good decisions will make the list useful for consumers and thus their tokens more valuable, while bad decisions will do the contrary. It doesn't make sense to reject everybody just to grab their tokens, because that would create an incentive against people trying to enter the list.
Amazing! How come such a simple system of voting has such enormous features? Now we can have well-curated lists of everything, and for that we just need Ethereum tokens!
Now let's imagine a different proposal, of my own creation: SPCR, Single-person curated registries.
Single-person Curated Registries are equal to TCRs, except they don't use Ethereum tokens; it's just a list in a text file kept by a single person. People can apply to join, and they will have to give the single person some amount of money; the single person can reject or accept the proposal, and so on.
Now let's look at the incentives of SPCR: people will want to consult the registry because it is so well curated; vendors will want to enter the registry because people are consulting it; the single person will want to accept the good and reject the bad applicants because these good decisions are what will make the list valuable.
Amazing! How come such a simple proposal has such enormous features! SPCRs are going to take over the internet!
What do TCR enthusiasts get wrong?
TCR people think they can just list a set of incentives for something to work and assume that something will work. Mix that with Ethereum hype and they think they've found something unique and revolutionary, while in fact they're just making a poor implementation of "democracy" systems that fail almost everywhere.
Life is not about listing a set of "incentives" and then considering the problems solved. Almost everybody on Earth has an incentive to be rich: being rich has a lot of advantages over being poor; however, not all people get rich! Why are the incentives failing?
Curating lists is a hard problem. It involves a lot of knowledge about the problem that just holding a token won't give you, it involves personal preferences, politics, it involves knowing where the real limit between "good" and "bad" is. The single-person list may have a good result if the single person doing the curation is knowledgeable and honest (and yes, you can game the system to accept your uncle's scissors and not their much better competitor, for example, without losing the entire reputation of the list); the same thing goes for TCRs, but both can also fail miserably, and they can appear to be good while in fact not being so good. In all cases, the list entries will reflect the preferences of the people choosing, and other things that aren't taken into the incentive equation of TCR enthusiasts.
We don't need lists
The most important point to be made, although unrelated to the incentive story, is that we don't need lists. Imagine you're looking for a scissor. You don't want someone to tell you whether scissor A or B is "good" or "bad", or whether A is "better" than B. You want to know if, for your specific situation, or for a class of situations, A will serve you well, and you want to decide that considering A's price and whether A is being sold near you, and so on.
Scissors are the worst example ever to make this point, but I hope you get it. If you don't, try imagining the same example with schools, doctors, plumbers, food, whatever.
Recommendation systems are badly needed in our world, and TCRs don't solve this at all.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Game characters and symbols
The feeling of "being" a character in a game or a session of make-believe is perhaps the closest I have been able to get to understanding a religious symbol.

The consecrated host is, according to the religion, the body of Christ, but our modern mind can only conceive of it as a representation of the body of Christ. In the same way, other cultures and other religions have similar symbols, including some in which the participant in the ritual himself plays the role of a god or something of the sort.

"Plays the role" is, again, the modern mind's interpretation. The person there is the thing, but at the same time that he is, he also knows that he is not, that he remains himself.

In video games and children's make-believe where one embodies a character, the player is the character. Among the players, no one says that someone is "acting"; he simply is, and that's it. There isn't even another name or another verb for it. At most "embodying", but that is already journalistic vocabulary made to ease the understanding of those outside the game.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
WelcomeBot
The first bot ever created for Trello.
It invited to a public board automatically anyone who commented on a card he was added to.
-
@ e1ff3bfd:341be1af
2024-01-06 19:41:35
Over the last few months it has felt like the bitcoin community has gotten more and more jaded on lightning. To be honest, this is for good reason: back in 2017 we were promised a decentralized payment network that would always have cheap payments and where everyone would be able to run their own node. Nowadays, the average lightning user actually isn't using lightning; they are just using a custodial wallet, and the few that do run lightning nodes often find it a burdensome task. For us at Mutiny Wallet, we are trying to make this better by creating a lightweight self-custodial wallet, and in my opinion we have been executing on that dream fairly well. In this post, I'll analyze these issues and present a new way to view lightning and what that means for bitcoin going forward.
First and foremost, one of the hardest UX challenges of lightning is channel liquidity. No other payment system has these problems today besides lightning, so this often confuses lots of users. To make matters worse, there aren't any practical hacks that we can do to get around this. Muun Wallet used an on-chain wallet + submarine swaps to get around the channel liquidity problem; this worked very well until fees went up and everyone realized it wasn't actually a lightning wallet. The better solutions are JIT liquidity, like we do in Mutiny, or splicing, like it is done in Phoenix. These solutions abstract some of it away, but not enough: we often get support questions from users confused about why some payments have fees and others do not. The fact is channel liquidity is not a usable UX for most end users.
The other major pain point of lightning is the offline receive problem. Inherently, you must be online with your private keys to sign and claim a payment. There is technically an ongoing spec proposal to work around this (essentially creating a notification system for when people are online to receive payments), but it doesn't solve the fundamental problem and still has limitations. There have been a few attempts to get around this, the most notable was Zeus Pay lightning addresses. These essentially worked by just creating stuck payments and waiting for the user to come online to claim them; this caused a ton of problems for people and even forced us at Mutiny to block users from paying them because it caused so many force closures. This is a hard problem because the entire rest of the bitcoin/crypto ecosystem works by just copy-pasting an address you can send to whenever, without caveats around asking your friend to open their wallet. This is further exacerbated by things like lightning address, which requires a webserver just to get an invoice in the first place.
Channel liquidity and offline receives are, in my opinion, the two most obvious reasons why self-custodial lightning is not popular. When most users hear about either of these, they just think "screw that" and move to a custodial wallet because it is so much easier. If these were our only two problems, I think self-custodial lightning would be fine; it may never be the predominant way people use lightning, but we could get the UX good enough that a significant portion of people would use lightning in a sovereign way. However, there are more problems under the surface.
Channel liquidity is a problem, but it is also deceptive. When you have 100k sats of inbound liquidity you would think you could receive up to 100k sats, but this isn't the case: often you can't actually receive any. This is because of on-chain fees. When a payment is being made in lightning, you are creating pre-signed transactions that have outputs for every in-flight payment; these outputs cost potential on-chain fees, and the higher on-chain fees go, the more they eat into your liquidity. After we solved most of our force-close issues at Mutiny, this became our number one support request. Even if you do everything right, understand liquidity and have enough for your payment, sometimes it still won't work because on-chain fees are too high. This is always really discouraging, because isn't the whole point of lightning to not have to pay on-chain fees? Fundamentally, all current lightning channels could become entirely useless if on-chain fees went high enough, because a single payment would require too many reserves. Obviously this is hyperbolic, but I hope I am getting the point across that on-chain fees don't just affect the opening and closing costs of channels. Even if you are a diligent node runner that only opens channels when fees are low, that is not enough: your channels need to be large enough to pay for the on-chain fees of each HTLC at any future on-chain fee rate. As on-chain fees go up and up, this problem will only get worse.
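A back-of-the-envelope sketch of how this scales with the feerate -- the weight figure below is the approximate BOLT 3 HTLC-timeout transaction weight, and the rest is illustrative:

```python
HTLC_TIMEOUT_WEIGHT = 663  # approximate weight units, per BOLT 3

def htlc_fee_reserve_sats(feerate_sat_per_vbyte):
    """Fees a channel must be able to cover for one in-flight HTLC."""
    return feerate_sat_per_vbyte * HTLC_TIMEOUT_WEIGHT / 4  # weight -> vbytes

for feerate in (1, 20, 100, 500):
    print(f"{feerate} sat/vB -> {htlc_fee_reserve_sats(feerate):.0f} sats per HTLC")
```

At a few hundred sat/vB a single in-flight HTLC can reserve tens of thousands of sats, which is why a small channel can become unable to route anything.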
The proposed solutions to these reserve issues are things like anchor channels, package relay, ephemeral anchors, etc. These are all well and good, but they somewhat mask the problem. They do allow the fee reserve to be much lower, possibly zero, with the tradeoff that you need on-chain funds available to fee-bump your force closes so they can actually get into a block. This again breaks the UX for self-custodial users, because they have to hold on-chain funds alongside their lightning funds to do those on-chain fee bumps. The size requirement for those on-chain funds is still dynamic, based on how high on-chain fees can spike. One workaround is having someone else bump your transaction fees, but that basically brings a trusted third party into the mix and isn't ideal.
When you lay out all the different tradeoffs a lightning node needs to make, especially in a high-fee environment, it makes me ask: what are we doing here, are we going down the wrong path? Lightning is still fundamentally a fantastic payment protocol, but its limitation is that it requires scale. Basically every problem I've outlined goes away when you have a large lightning node with lots of liquidity and high uptime, so maybe we should optimize for that. The market has been telling us this for years already: over 90% of lightning users are using custodial wallets because lightning works so much better at scale. So how can we use large-scale lightning nodes without custodial wallets?
Combining existing large-scale lightning infrastructure with self-custodial solutions sadly isn't totally possible. The only real attempt at it so far is Muun Wallet, which, as we talked about earlier, doesn't really solve the problem because everything is just an on-chain transaction. However, Muun was onto something. The architecture of having a simpler protocol interface with lightning is genius and gives us the best of both worlds: we can make fast, cheap payments and let the big boys collect fees for running the lightning node. Aqua Wallet just launched, which is essentially a Muun Wallet on top of Liquid; this is a good band-aid fix but doesn't get to the root of the problem.
Before we go further, we should take a step back and break down what problems we are trying to solve. Bitcoin has a fundamental scaling limitation in the block size; if we could make it infinite, we wouldn't necessarily need any layer 2s, because we could just make on-chain payments. However, we live in the real world and have a limited block size, and this limits the number of transactions we can make on-chain. Lightning is a huge improvement to bitcoin because we don't need to put every transaction on-chain; we just need to open a channel and can make seemingly countless payments. So why isn't lightning the silver bullet? Lightning lets us move payments off-chain, but what it doesn't do is let us move ownership off-chain. Fundamentally, lightning still relies on the fact that, at the end of the day, a utxo goes to a single user. So even if every on-chain transaction were a lightning channel, we would still run into the limit of how many people can actually own those channels. What we need is another layer 2 that can scale utxo ownership and can interop with lightning; that way we have a way to scale ownership combined with scaling payments.
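A quick back-of-the-envelope sketch shows how hard the ownership ceiling binds; both inputs are rough assumptions, not measurements:

```python
# How long would it take to give every person on earth a single utxo
# (e.g. one lightning channel), if every transaction in every block
# were a channel open?

blocks_per_day = 144              # ~one block every 10 minutes
channel_opens_per_block = 2_500   # assumed: every tx in a block is an open
people = 8_000_000_000

days = people / (blocks_per_day * channel_opens_per_block)
print(f"~{days / 365:.0f} years")  # roughly 60 years
```

Payments scale with lightning; ownership does not.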
So how do we scale ownership? Simply put, the answer today is custody, whether that is purely custodial like Wallet of Satoshi or in the grey area like fedimints and Liquid; the only way to do it today is through custody or federated bridges. In bitcoin, the only way to delegate ownership of a utxo to multiple parties is through multisig; however, that requires every user to be online whenever anyone wants to transact, and when you go down this path far enough you end up just reinventing lightning.
Are we doomed then? Is there no way to scale bitcoin in a self-sovereign way? Luckily, the answer is no, but we need some soft forks. Covenants are the way to scale bitcoin ownership. There are a bunch of covenant proposals, but at their core they all propose a way to have a bitcoin address that limits where and how the coins in it can be spent. This can seem scary, but we already have something like it in bitcoin today: OP_CLTV (CheckLockTimeVerify), soft-forked in 2015, only allows you to spend from a bitcoin address if the transaction has a given locktime, which lets you gate when a utxo can be spent. What the current covenant proposals do is let you gate where a utxo can be spent. With that simple primitive, many different protocols can be built that allow for scaling ownership.
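A minimal sketch of the contrast between the existing "when" gate and the proposed "where" gate. OP_CLTV (BIP 65) is live today, while OP_CHECKTEMPLATEVERIFY (BIP 119) is one of the covenant proposals and is not active; the placeholder values are illustrative:

```python
# Spendable only after block 850,000 AND with a signature from <pubkey>:
# this gates *when* the utxo can move.
cltv_script = "850000 OP_CHECKLOCKTIMEVERIFY OP_DROP <pubkey> OP_CHECKSIG"

# Spendable only by a transaction matching a pre-committed template hash:
# this gates *where* the utxo can move, since the outputs are fixed in advance.
ctv_script = "<32-byte-template-hash> OP_CHECKTEMPLATEVERIFY"
```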
There are a lot of current covenant proposals, the main ones being OP_CTV, OP_VAULT, OP_CSFS, OP_TXHASH, OP_CAT, and APO. They all have different functionality and tradeoffs, but in my opinion we should be looking towards activating some form of covenant, because otherwise we will likely be moving towards a future of less sovereign bitcoin users.
The future is not bleak, however; even without covenants we can still scale bitcoin for the world, just not in the ideal way. At Mutiny, we are full steam ahead on implementing fedimint in the wallet; in my opinion (and the rest of the team's) it looks like the best current scaling solution for bitcoin. Fedimints give us the ability to dynamically share ownership over a group of utxos and can interop with lightning through gateways. It is the pinnacle of the scaling dream for bitcoin with current technology, and I can't wait to help make it a reality while we can.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Sun and Earth
The Earth does not revolve around the Sun. Everything depends on the point of reference, and there is no absolute point of reference. It is only better to say that the Earth revolves around the Sun because there are other planets making analogous movements, which makes it easier for everyone to understand the movements by taking the Sun as the reference point.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28A list of things artificial intelligence is not doing
If AI is so good, why can't it:
- write good glue code that wraps a documented HTTP API?
- make good translations using available books and respective published translations?
- extract meaningful and relevant numbers from news articles?
- write mathematical models that fit perfectly to available data better than any human?
- play videogames without cheating (i.e. simulating human vision, attention and click speed)?
- turn pure HTML pages into pretty designs by generating CSS?
- predict the weather?
- calculate building foundations?
- determine stock values of companies from publicly available numbers?
- smartly and automatically test software to uncover bugs before releases?
- predict sports matches from the ball and the players' movement on the screen?
- continuously improve niche/local search indexes based on user input and reaction to results?
- control traffic lights?
- predict sports matches from news articles, and teams and players' history?
This was posted first on Twitter.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Rede Relâmpago
When referring to the Lightning Network of O que é Bitcoin?, we Brazilians and Portuguese should use the term "Relâmpago" or "Rede Relâmpago". "Relâmpago" is a beautiful and fitting word, easy for all our compatriots to pronounce. Enough with unnecessary anglicisms.
Example of a hypothetical conversation in Brazil using this nomenclature:
– Can I pay with Relâmpago? – Sure, of course! I'll generate a boleto for you right here.
Notice that this is much more natural and easy than the alternative, in which the mangled English "lightning" gets misheard as Leite Ninho, a milk brand:
– Can I pay with "láitenim"? – Leite Ninho?
-
@ 3bf0c63f:aefa459d
2024-01-06 00:39:23Report of how the money Jack donated to the cause in December 2022 is being spent.
Bounties given
December 2023
- hzrd: 5,000,000 - Nostrudel
- awayuki: 5,000,000 - NOSTOPUS illustrations
- bera: 5,000,000 - getwired.app
- Chris: 5,000,000 - resolvr.io
- NoGood: 10,000,000 - nostrexplained.com stories
October 2023
- SnowCait: 5,000,000 - https://nostter.vercel.app/ and other tools
- Shaun: 10,000,000 - https://yakihonne.com/, events and work on Nostr awareness
- Derek Ross: 10,000,000 - spreading the word around the world
- fmar: 5,000,000 - https://github.com/frnandu/yana
- The Nostr Report: 2,500,000 - curating stuff
- james magoo: 2,500,000 - the Obsidian plugin: https://github.com/jamesmagoo/nostr-writer
August 2023
- Paul Miller: 5,000,000 - JS libraries and cryptography-related work
- BOUNTY tijl: 5,000,000 - https://github.com/github-tijlxyz/wikinostr
- gzuus: 5,000,000 - https://nostree.me/
July 2023
- syusui-s: 5,000,000 - rabbit, a tweetdeck-like Nostr client: https://syusui-s.github.io/rabbit/
- kojira: 5,000,000 - Nostr fanzine, Nostr discussion groups in Japan, hardware experiments
- darashi: 5,000,000 - https://github.com/darashi/nos.today, https://github.com/darashi/searchnos, https://github.com/darashi/murasaki
- jeff g: 5,000,000 - https://nostr.how and https://listr.lol, plus other contributions
- cloud fodder: 5,000,000 - https://nostr1.com (open-source)
- utxo.one: 5,000,000 - https://relaying.io (open-source)
- Max DeMarco: 10,269,507 - https://www.youtube.com/watch?v=aA-jiiepOrE
- BOUNTY optout21: 1,000,000 - https://github.com/optout21/nip41-proto0 (proposed nip41 CLI)
- BOUNTY Leo: 1,000,000 - https://github.com/leo-lox/camelus (an old relay thing I forgot exactly)
June 2023
- BOUNTY: Sepher: 2,000,000 - a webapp for making lists of anything: https://pinstr.app/
- BOUNTY: Kieran: 10,000,000 - implement gossip algorithm on Snort, implement all the other nice things: manual relay selection, following hints etc.
- Mattn: 5,000,000 - a myriad of projects and contributions to Nostr projects: https://github.com/search?q=owner%3Amattn+nostr&type=code
- BOUNTY: lynn: 2,000,000 - a simple and clean git nostr CLI written in Go, compatible with William's original git-nostr-tools; and implement threaded comments on https://github.com/fiatjaf/nocomment.
- Jack Chakany: 5,000,000 - https://github.com/jacany/nblog
- BOUNTY: Dan: 2,000,000 - https://metadata.nostr.com/
April 2023
- BOUNTY: Blake Jakopovic: 590,000 - event deleter tool, NIP dependency organization
- BOUNTY: koalasat: 1,000,000 - display relays
- BOUNTY: Mike Dilger: 4,000,000 - display relays, follow event hints (Gossip)
- BOUNTY: kaiwolfram: 5,000,000 - display relays, follow event hints, choose relays to publish (Nozzle)
- Daniele Tonon: 3,000,000 - Gossip
- bu5hm4nn: 3,000,000 - Gossip
- BOUNTY: hodlbod: 4,000,000 - display relays, follow event hints
March 2023
- Doug Hoyte: 5,000,000 sats - https://github.com/hoytech/strfry
- Alex Gleason: 5,000,000 sats - https://gitlab.com/soapbox-pub/mostr
- verbiricha: 5,000,000 sats - https://badges.page/, https://habla.news/
- talvasconcelos: 5,000,000 sats - https://migrate.nostr.com, https://read.nostr.com, https://write.nostr.com/
- BOUNTY: Gossip model: 5,000,000 - https://camelus.app/
- BOUNTY: Gossip model: 5,000,000 - https://github.com/kaiwolfram/Nozzle
- BOUNTY: Bounty Manager: 5,000,000 - https://nostrbounties.com/
February 2023
- styppo: 5,000,000 sats - https://hamstr.to/
- sandwich: 5,000,000 sats - https://nostr.watch/
- BOUNTY: Relay-centric client designs: 5,000,000 sats https://bountsr.org/design/2023/01/26/relay-based-design.html
- BOUNTY: Gossip model on https://coracle.social/: 5,000,000 sats
- Nostrovia Podcast: 3,000,000 sats - https://nostrovia.org/
- BOUNTY: Nostr-Desk / Monstr: 5,000,000 sats - https://github.com/alemmens/monstr
- Mike Dilger: 5,000,000 sats - https://github.com/mikedilger/gossip
January 2023
- ismyhc: 5,000,000 sats - https://github.com/Galaxoid-Labs/Seer
- Martti Malmi: 5,000,000 sats - https://iris.to/
- Carlos Autonomous: 5,000,000 sats - https://github.com/BrightonBTC/bija
- Koala Sat: 5,000,000 - https://github.com/KoalaSat/nostros
- Vitor Pamplona: 5,000,000 - https://github.com/vitorpamplona/amethyst
- Cameri: 5,000,000 - https://github.com/Cameri/nostream
December 2022
- William Casarin: 7 BTC - splitting the fund
- pseudozach: 5,000,000 sats - https://nostr.directory/
- Sondre Bjellas: 5,000,000 sats - https://notes.blockcore.net/
- Null Dev: 5,000,000 sats - https://github.com/KotlinGeekDev/Nosky
- Blake Jakopovic: 5,000,000 sats - https://github.com/blakejakopovic/nostcat, https://github.com/blakejakopovic/nostreq and https://github.com/blakejakopovic/NostrEventPlayground
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28idea: Custom multi-use database app
Since 2015 I have had this idea of making one app that could be repurposed into a full-fledged app for all kinds of uses, like powering small-business accounting and so on. Hackable and open like an Excel file, but more efficient, without the hassle of making tables, and using ids and indexes under the hood so different kinds of things can be related together in various ways.
It is not a concrete thing, just a generic idea that has taken multiple forms along the years and may take others in the future. I've made quite a few attempts at implementing it, but never finished any.
I used to refer to it as a "multidimensional spreadsheet".
Can also be related to DabbleDB.
-
@ 97c70a44:ad98e322
2024-01-05 22:03:37Today marks the biggest release so far in Coracle's history. There have been many good days, like when I introduced Coracle to the nostr telegram group, or when I got my fellowship with FUTO, or when I got my grant from OpenSats, or when I got to speak at Nostrasia. But in terms of realizing the vision I've had for the software - for over two years - today is the day.
Coracle now has private groups.
This means you can now send almost any nostr event over an encrypted channel to the rest of the group's members. This is substantially different from group chats, in that it uses rotating shared keys to provide weak forward secrecy, better scaling, and dynamic member access. This more closely approximates one of the most popular social media products in existence - The Nostr is now a direct competitor of The Facebook.
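For readers curious about the mechanics, here is a minimal sketch of the rotating shared-key idea. This is an illustration of the concept, not Coracle's actual code, and the key-distribution step (an encrypted message per member) is elided:

```python
import os

class PrivateGroup:
    def __init__(self):
        self.shared_key = os.urandom(32)   # current group key
        self.members: set[str] = set()

    def admit(self, member_pubkey: str) -> None:
        self.members.add(member_pubkey)
        # send self.shared_key to member_pubkey over an encrypted
        # channel (e.g. a NIP-44 encrypted DM) -- elided here

    def rotate(self) -> None:
        """Rotate on membership change (or on a schedule). Holders of
        the old key can no longer read events encrypted under the new
        one, which is what gives the weak forward secrecy."""
        self.shared_key = os.urandom(32)
        for member in self.members:
            pass  # re-distribute the new key to each remaining member
```

Events posted to the group are encrypted to the current shared key, so access is exactly as dynamic as the key distribution.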
I built this for my community. I wanted something "good enough" to entice people to leave the advertising-fueled surveillance honeypot that is Facebook. In order to work, it needed to at least support notes, events, and marketplace listings. Although support is still quite basic, Coracle has checked all three of these boxes.
Before I get into the details though, it's important to mention that these groups should not be considered "private" any more than Facebook groups or Mastodon servers are (although privacy is substantially better). A better analog might be WeChat, which uses encryption with the same set of trade-offs. So don't post anything to private groups that might get you in trouble!
With that said, it's possible to run a highly private group. The backbone of this spec is e2e encryption, but relay selection can play an important part in hiding metadata from the rest of the network. If you have a relay you trust to protect notes and not share metadata, your security is significantly increased.
Prior art
Nostr-compatible group products aren't a totally novel thing, as it turns out. In fact, draft NIP 112 has been around since June, and is already implemented in ArcadeCity. So why am I creating a new standard? I'll get into the positive benefits of my approach more below, but the quick answers are:
- The new encryption standard is going to break compatibility anyway. If we can end up with a better spec, now is the time.
- ArcadeCity development seems to have stalled.
- NIP-72 communities already have a ton of traction, and match what I'm trying to achieve with encrypted channels.
Of course I'm highly indebted to the project, the design of which is still visible in my final design.
Another product that exists to do something similar in a nostr-compatible way is Soapbox by Alex Gleason. This is a great project, particularly since his Mostr project bridges the ActivityPub world and Nostr. ActivityPub works well for highly centralized communities, but the architecture suffers from this centralization too. In particular, not even DMs are e2e encrypted, and just like regular notes are protected only by authentication enforced by servers.
Finally, there's NIP 29, which is fiatjaf's competing groups project. This has some interesting properties, for example the ability to "fork" a group by linking events together. However, similar to ActivityPub it relies exclusively on relays to protect user privacy, and in a fairly non-standard way. You do get to take advantage of nostr's multi-master architecture though, and signatures are also stripped from events in order to discourage propagation through the network.
None of these solutions quite satisfied me, so I built my own.
How it works
One of the coolest things about a NIP 72 community-based group spec is that it supports a spectrum of privacy requirements. A group admin might choose to publish group metadata privately so that it's only visible to the group, publicly so that other people can find the group and ask to join, or leave off a private component entirely.
Likewise, since private groups are backwards-compatible with public communities, it's easy to add a private component to existing groups. This can be useful especially for groups run by a business or content publisher, since public exposure is a good thing but certain group members might have more or less access. This could be used to support a patreon-type model, automating group membership based on subscription tier, for example.
An important aspect of the design that makes automation possible is the concept of a dedicated administration key. By decoupling this key from the original creator of the group, ownership can be shared as simply as sharing the key. This allows multiple admins to manage the group simultaneously either manually or using automations built into the group relays or special purpose bot-clients.
This of course raises the issue of admin access revocation, which isn't possible - that is, until we have a solution for key rotation for normal accounts. Once that's in place, the same process can be used to rotate group admin keys.
In the meantime, it's also trivial to reduce the exposure an admin key gets. You wouldn't generally want to simply paste the key wherever it's needed, but luckily that problem has already been solved as well. Instead of giving every admin or admin bot the key, it's trivial to set up an nsecbunker that authorizes each admin client - and can revoke access as needed.
This level of administration is of course fairly complex, but I think it's important to think through the requirements businesses and other advanced users will eventually impose and anticipate them as we're able, not through over-engineering, but through simple concepts that can be reused.
One other neat feature of this NIP is the definition of invite codes, which are essential for running a private group at any kind of scale. When requesting access to a group, a user can send along a "claim", which can be anything - for example a static invite code, a payment receipt, or an explanation of why they want to join. This claim can be validated by hand by a human, or processed by a bot to instantly admit the new member to the group.
When a new member is admitted to the group, the admin can either share an existing access key with them, or they can rotate the key for the entire group. If relays expire access keys after a certain amount of time, this can create a weak form of forward secrecy, where attackers won't be able to access old content, even if they gain access to the admin key.
Limitations and Future Work
The bar for new nostr clients has risen significantly since I first put Coracle out there. The new groups component is far more mature than Coracle was for much of its early life, but it has its rough edges. Many of these just need to be smoothed out through further UX work, but some are more technical in nature.
- The groups spec relies on NIP 44, which isn't yet available in most signer extensions. That means that unless you log in with your private key (please don't), you won't be able to create or gain access to any private groups.
- Hybrid groups (public groups with a private area) aren't really tested yet, or fully supported in Coracle's UI. It's an open question whether this is even a good idea, since it becomes pretty hard for users to know if they're posting publicly or privately in every context.
- Moderation is not implemented, so if you're creating a public group there is currently no way in Coracle to approve posts. Also, groups created in Coracle don't show up in Satellite for some reason — this is something I'll be working on improving.
- Whether this approach actually scales is another question. It's very hard to build member lists of hundreds of thousands of people, and without a relay helping to filter events, it might become prohibitively expensive to download and analyze all the events posted to a group. We'll see what develops as the design matures and the implementation undergoes stress testing.
Conclusion
Something I like about both nostr and bitcoin is that it empowers the users of the software. The corollary of this of course is that it's important to exercise this power with care - real damage can be done with this group spec, just as real damage can be done to bitcoin holders through low entropy key generation or poor key handling practices. So please, if you're going to implement this spec, communicate clearly with your users its limitations, and encourage them to run their own relays.
Nevertheless, I am stoked to be another 1% closer to my goal of helping my community - and anyone else who uses nostr - to exercise individual sovereignty and protect their freedom and privacy. Let's keep at it.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Complaints
- Since there was no reply, I'm sending it again
- Democracy in America
- "Politics" is the arena of statism's victory
- The infinite library
- Family and property
- Memories of when I learned to read
- Kelsen the bore
- VAR is the great equalizer
- There is no solution
- The logical structure of the textbook
- The economists' "House" and the State
- Revista Educativa
- Cultura Inglesa and learning outside school
- A senior student doesn't own a freshman girl
- Game characters and symbols
- Catchy songs and conversations
- Construction work next door
- Advertising
- Seeing Jesus with the eyes of the flesh
- Antifragile Processes
- Prisons, crimes, and upstanding citizens
- Hindu castes in a new key
- Scientific method
- Shampoo
- Thafne won Soletrando 2008.
- Barroom entrepreneurship
- Problems with Russell Kirk
- Small problems the State creates for society that are not always remembered
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The Lightning Network solves the problem of the decentralized commit
Before reading this, see Ripple and the problem of the decentralized commit.
The Bitcoin Lightning Network can be thought of as a system similar to Ripple: there are conditional IOUs (HTLCs) that are sent in "prepare"-like messages across a route, and a secret `p` that must travel from the final receiver backwards through the route until it reaches the initial sender; possession of that secret serves to prove the payment as well as to make the IOU hold true.
The difference is that if one of the parties doesn't send the "acknowledge" in time, the other has a trusted third party with its own clock (the clock that is valid for everybody involved) to complain to immediately at the timeout: the Bitcoin blockchain. If C has `p` and B isn't acknowledging it, C tells the Bitcoin blockchain and it will force the transfer of the amount from B to C.
Differences (or 1 upside and 3 downsides)
- The Lightning Network differs from a "pure" Ripple network in that when we send a "prepare" message on the Lightning Network, unlike on a pure Ripple network, we're not just promising we will owe something -- instead we are putting the money on the table already for the other to take if we are not responsive.
- The feature above removes the trust element from the equation. We can now have relationships with people we don't trust, as the Bitcoin blockchain will serve as an automated escrow for our conditional payments and no one will be harmed. Therefore it is much easier to build networks and route payments if you don't always require trust relationships.
- However, it introduces the cost of capital. A ton of capital must be made available in channels and locked in HTLCs so payments can be routed. This leads to potential issues like the ones described in https://twitter.com/joostjgr/status/1308414364911841281.
- Another issue that comes with the necessity of using the Bitcoin blockchain as an arbiter is that it may cost a lot in fees -- much more than the value of the payment that is being disputed -- to enforce it on the blockchain.[^closing-channels-for-nothing]
Solutions
Because the downsides listed above are so real and problematic -- and much more so when attacks from malicious peers are taken into account -- some have argued that the Lightning Network must rely on at least some trust between peers, which partly negates the benefit.
The introduction of purely trust-backed channels is the next step in the reasoning: if we are trusting already, why not make channels that don't touch the blockchain and don't require peers to commit large amounts of capital?
The reason is, again, the ambiguity that comes from the problem of the decentralized commit. Therefore hosted channels can be good when trust is required only from one side, like in the final hops of payments, but they cannot work in the middle of routes without eroding trust relationships between peers (however, they can be useful if employed as channels between two nodes run by the same person).
The next solution is a revamped pure Ripple network, one that solves the problem of the decentralized commit in a different way.
[^closing-channels-for-nothing]: That is true even when the payment is so small that it doesn't deserve an actual HTLC that can be enforced on the chain (as per the protocol): even then, the channel between the two nodes will be closed, only to make it very clear that there was a disagreement. Leaving it online would be harmful, as one of the peers could repeat the attack again and again. This is proof that ambiguity, in the case of the pure Ripple network, is a very important issue.
-
-
@ 26bd32c6:cfdb0158
2024-01-04 20:26:59I've been experiencing a bit of writer's block recently, which is somewhat strange because it's about topics I genuinely want to write about. In a way, I have so many different ideas that I want to express, I can't pin them down, leading to writing nothing at all.
It's funny because, for me, Nostr was that place where I could say anything. It didn't matter whether people hated or loved it. But now, I feel this daunting responsibility to share highly impactful words that can't be easily criticized by haters or lovers. It's as if suddenly, Nostr isn't my safe place anymore; it's become my serious place.
If, when reading this, you think it sounds as ridiculous as I feel, trust me, I won't take any offense. I am pushing myself here just by writing it down. The reason I'm writing this on a blog is due to this daunting feeling of needing perfection, which I like to call the 'traditional social media effect.'
The Traditional Social Media Effect
Definitely not a medical term or diagnosis, but I am now self-diagnosed with it, and well, "delulu makes trululu."
If you hang out long enough on Nostr, you'll start noticing some traditional social media patterns that freak me out. And because they freak me out, I feel the need to write them out, perhaps to leave them behind.
Everyone is now fighting over something, and whoever is the loudest will win the 'shitposter' award. And I love shitposting... but there's no need to disparage others' projects to ensure mine is the best.
Everyone is begging for attention. Guys, there's no algorithm; there is a high chance absolutely no one will read this, and that's okay with me. I am just creating this blog because I have things I want to get off my chest, and because nostr:npub1nxy4qpqnld6kmpphjykvx2lqwvxmuxluddwjamm4nc29ds3elyzsm5avr7 is hosting a cool contest that I just want to contribute to.
And the last part, everyone is begging for an easy fix. An easy fix that will get them zaps, an easy fix that will get them followers, an easy fix that will make them the next Nostr superstar, and solve all their life's issues.
This is just sad. What happened to PoW?
Yes, This Is about PoW!
I came into Nostr because I am a Bitcoiner, and I love complicated stuff that requires hours of my time, for me to barely understand anything, and then look at the rest of the world like, "Ha! You losers don't know what I know..."
Therefore, Nostr is not for anyone who is lazy... or at least it wasn't. It required effort; videos had to be carefully crafted to go viral and get zaps.
Actually, even e-girl content isn't top-notch anymore. A bunch of AI-generated fake content is getting more zaps than nostr:npub1cj8znuztfqkvq89pl8hceph0svvvqk0qay6nydgk9uyq7fhpfsgsqwrz4u's feet on #footstr.
Actually, these are the last feet to make it on Footstr, and they are from Walker, back in August... https://cdn.nostr.build/i/fb64c0f064e11dabff69d56e7e4edfd47aebd1c752b84e76d50fb0e43f9f7e6b.jpg We are lacking that good feet content.
Is She Going to Stop Ranting or Explain This Effect Thing?
Okay, here I go. What I mean is that it seems to me that we are here looking for something different, but wanting that same dopamine hit Instagram and Twitter gave us.
Where am I going with this? Well, I just wanted to post on Nostr because it made me happy. I just wanted to come here and say whatever silly thing came to mind.
But now, I just see a bunch of posts of people either fighting over which federation is going to fix layers and lighting or the latest e-beggar. And these beggars are seeking everything from attention to zaps, no limits or questions asked. Just "look at my sad face, fix my life please."
And these things are not PoW. PoW is about creating the best new federation, and I believe you did, just work on it and show it. Don't waste your time disparaging the other one.
You want a life change, a path change? Instead of coming here to complain, ask for advice, ask questions, look for a mentor.
STOP WHINING! Okay, I was trying that writing in caps thing to seem angry and badass. But now I just feel cringe...
This Was It, This Was the Rant...
I usually don't say anything because if I'm as annoyed as I am about the way feeds are starting to look, I think I will produce that same feeling in everyone else.
I've said it hundreds of times: the cool thing is that I choose what I want to see. And that's how I plan to fix this writer's block and get back to being more hands-on... back to building the Nostr I want to see!
I want to stop feeling the need to drop the most perfect note, and to do so, I am pushing myself to do a 30-day photography challenge.
I won't win anything, and if you want to join me, the prize is also absolutely nothing.
Why photography? Well, because I suck at it. I have no idea how to do it; I just started asking for help, and everyone loves to critique the composition of "art"... and I doubt anyone will call my terrible photography skills art... so I'm putting myself out there to be critiqued.
My 30-day challenge, starting today, will include daily reminders to take a picture of whatever I can, and an explanation of what I was trying to achieve.
It will also come eventually with more blogs and vlogs, with less ranting and more building. The world is just starting... at least that's how it feels to me.
We are so early... https://media.giphy.com/media/z85pBkLFP6izGn6see/giphy.gif?cid=ecf05e47kcqu2k7sshjuox6wsy8zrbms9210r1obb1d9ytof&ep=v1_gifs_search&rid=giphy.gif&ct=g
But for real, it is early... and that is why there's a lot we can do. And also why for real "There Is So Much Noise... I Can't Think Straight..."
So, join me or don't, in this challenge or just be challenged to put out there a better version of the things you see that bother you.
I think I sometimes feel angry and tired, but before I resort to running off to the woods and becoming a hermit, I'm going to stay hopeful and try to give to the world what I want the world to give me.
And yeah, that's it... This was the whole rant. I was just being louder so I could start thinking again.
If you are here and you've read the whole thing, thank you; you are truly resilient! I'm sure not even my mom made it all the way here (but just in case: Hi mom! I love you 🫶🏽).
With this, I say goodbye... going to try that photography thing and be back with another note in a bit!
Marce 💜
-
@ 4ba8e86d:89d32de4
2024-01-03 22:43:57Developed by vfsfitvnm, this app lets you enjoy all your favorite YT-Music songs without limits. With a friendly, simple interface, just open the app to search for songs, artists, music videos, or even entire playlists. ViMusic offers the convenience of playing music in the background, even when the app is minimized or the screen is off, letting you use other apps at the same time.
Main features:
- Play (almost) any song or video from YouTube Music.
- Play in the background for an uninterrupted experience.
- Cache audio chunks for offline playback.
- Search for songs, albums, artist videos, and playlists.
- Bookmark your favorite artists and albums.
- Import custom playlists.
- View and edit song lyrics or synchronized lyrics.
- Manage your local playlists and reorder the songs in the queue.
- Choose between light, dark, and dynamic themes to customize the app's appearance.
- Skip unwanted silences for a smoother experience.
- Use ViMusic as an alarm clock to start the day with your favorite songs.
- Enjoy audio normalization to keep the volume consistent across tracks.
- Integration with Android Auto for in-car listening.
- Keep a persistent queue so your selections are always available.
ViMusic offers a number of special features, including high audio quality, the ability to listen to music offline, and access to a huge music library. The experience is ad-free, making for uninterrupted, pleasant listening.
With ViMusic, you get a free music-streaming platform that is easy to navigate and takes up little space to install. Enjoy the freedom of listening to music offline, without unwanted ad interruptions.
https://github.com/vfsfitvnm/ViMusic
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Reasons for miners to not steal
See Drivechain for an introduction. Here we'll just have a list of reasons why miners would not steal:
- they will lose future fees from that specific drivechain: you can discount all future fees and condense them into a single present number in order to do the math (see the sketch after this list).
- they may lose future fees from all other Drivechains, if the users assume they will steal from those too.
- Bitcoin will be devalued if they steal, because:
- Bitcoin is worth more if it has Drivechains working, because it is more useful, has more use-cases, more users. Without Drivechains it necessarily has to be worth less.
- Bitcoin has more fee revenue if has Drivechains working, which means it has a bigger chance of surviving going forward and being more censorship-resistant and resistant to State attacks, therefore it has to worth more if Drivechains work and less if they don't.
- Bitcoin is worth more if the public perception is that Bitcoin miners are friendly and doing their work peacefully, instead of being a band of revolted peons constantly threatening to use their 75% hashrate to do evil things such as:
- double-spending attacks;
- censoring of transactions for a certain group of people;
- selfish mining.
- if Bitcoin is devalued its price is bound to fall, meaning that miners will lose on
- their future mining rewards;
- their ASIC investment;
- the same coins they are trying to steal from the drivechain.
- if a mining pool tries to steal, they will risk losing their individual miners to other pools that don't.
- whenever a steal attempt begins, the coins in the drivechain will lose value (if the steal attempt is credible their price will drop quite substantially), which means that if a coalition of miners really try to steal, there is an incentive for another coalition of miners to buy some devalued coins and then stop the steal.
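The discounting mentioned in the first item is ordinary present-value arithmetic; a tiny sketch with made-up numbers:

```python
# Present value of a perpetual stream of drivechain fees to a miner,
# discounted at rate r per period (all numbers are illustrative).
fee_per_period = 1_000_000   # sats of drivechain fees earned per period
r = 0.05                     # assumed per-period discount rate

present_value = fee_per_period / r   # closed form for a perpetuity
print(f"{present_value:,.0f} sats")  # 20,000,000 sats forgone by stealing
```

If the one-time loot from stealing is smaller than that number, stealing is irrational even before the other items on the list are considered.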
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28idea: a website for feedback exchange
I thought a community of people sharing feedback on mutual interests would be a good thing, so as always I broadened and generalized the idea, mixed it with my old criticue-inspired idea-feedback project, and turned it into a "token". You give feedback on other people's things, they give you a "point". You can then use that point to request feedback from others.
This could be made as an Etleneum contract so these points could be exchanged for satoshis using the shitswap contract (yet to be written).
In this case all the Bitcoin/Lightning side of the website must be hidden until the user has properly gone through the usage flow and earned points.
If it was to be built on Etleneum then it needs to emphasize the login/password login method instead of the lnurl-auth method. And then maybe it could be used to push lnurl-auth to normal people, but with a different name.
-
@ c31052c1:493a1c56
Mineral is the world's first BRC-420 inscription backed by bitcoin-mining real-world assets (RWA). With the wave of meme-inscription mania fading, market demand for utility inscriptions is forming a new trend, and Mineral arrives at exactly this moment.
Product design:
1. Earn bitcoin by staking Mineral on Layer 2: no need to gamble on the price taking off; however the market moves, staking Mineral yields bitcoin returns. And in every past bull market, mining machines (hashrate) themselves have performed impressively.
2. Staking also earns you Mner, the Layer 2 treasury token; as the project develops, the treasury will keep adding new yield-bearing RWA assets. Beyond that, Mineral will cooperate with other projects in the BRC-420 ecosystem, and on L2, collateralizing Mineral inscriptions will let you mine multiple tokens with one asset.
3. Everyone minting inscriptions pays transaction fees, so miners' income rises together with the inscription ecosystem, and the ecosystem's vigorous growth is gradually becoming consensus. By staking Mineral you can earn transaction fees and ecosystem dividends just like a miner.
4. An inscription that won't go to zero: meme inscriptions heat up on speculation, and once the hype fades they may be left with no takers, so inscriptions need applications and utility to keep driving the ecosystem's sustainable development. Beyond its large growth potential, Mineral more importantly captures solid underlying value from RWA. Further out on the Layer 2 roadmap, richer DeFi mechanics will build a future RWA-DeFi flagship.
Inscriptions will bring a renaissance of ICOs and DeFi
The reason we say inscriptions may be a renaissance of ICOs and DeFi on Bitcoin L1 and L2 is the maturity of DeFi and the more promising market cap and room for imagination. There is an amusing saying that Ethereum and the EVM are like Bitcoin's testnet: everyone remembers the excitement of DeFi Summer, and after being thoroughly educated by the market, they know deeply how DeFi works, so the BTC ecosystem is fertile ground for inscriptions. Moreover, bitcoin's market cap dominates crypto. Many bitcoin holders are looking for native applications built on the Bitcoin network, and inscriptions, being products on Bitcoin L1 that inherit bitcoin's strengths, will likewise win market recognition. Mineral combines bitcoin mining, a native mechanism, with an inscription DeFi application to form an innovation native to Bitcoin. Mineral is a great fit for most investors who love and believe in bitcoin and inscriptions.
Mineral is an advanced DeFi innovation on BRC-420
Mineral takes inspiration from proven DeFi models and builds on them to be more refined and more sustainable.
Building on the successes of OHM (Olympus) and infra plays like Blast, Mineral is a more refined and sustainable economic model. OHM's high-APY flywheel caught everyone's attention, but our team firmly believes in introducing positive external yield, like bitcoin mining, so the flywheel can run longer instead of going to zero under heavy sell pressure. On the other hand, we see Blast as a successful model of ecosystem-wide prosperity. We combine and optimize these two models so Mineral holders can mine multiple tokens with one asset and keep feeling the flywheel's momentum indefinitely.
Mineral will launch soon on the official BRC-420 website. The fair-launch plan is as follows:
First-batch mint price: bitcoin equivalent to 50 USDT
Total supply: 40,000
The exact launch time is to be determined; please keep following our official Twitter.
Twitter: https://twitter.com/mner_club Telegram: https://t.me/mnerclub Discord: https://discord.gg/aB4KXpTv Website: https://www.mner.club/
-
@ e1ff3bfd:341be1af
2023-12-17 18:49:31A bunch of people have been shilling Liquid as a scaling solution with on-chain fees on the rise. I wanted to take the time to break down why this is a fool's errand and why there are better ways to go about it.
Liquid is based on Elements, which, as they claim in their README, is "a collection of feature experiments and extensions to the Bitcoin protocol". Liquid is just another blockchain: a fork of bitcoin with a few fancy things added (tokens, CT, covenants), bundled together with a 1-minute block time, federated custody, and some Blockstream branding.
Blockchains do not scale. As we are seeing today, the bitcoin blockchain does not have enough throughput for everyone's transactions. This is for good reason: keeping the cost of running a full node low is a priority, and this was one of the main reasons the blocksize wars were fought.
So why does Liquid exist? People lately have been touting it as a way to ease fee pressure, but in my opinion this is a fool's errand, no different from people back in 2017 saying to use litecoin because fees on bitcoin were too high. Liquid is just a fork of bitcoin; it has the exact same scaling problems, and the only reason it has smaller fees is that it has never really been used. For now, it can work as a temporary stop-gap (essentially fee arbitrage), but building actual infrastructure on top of Liquid will run into the exact same problems as on-chain bitcoin.
The problem is that Liquid is trying to use trust as a scaling solution, but does it in a completely inefficient way. When you are trusting the 11-of-15 multisig, you don't need all the benefits that a blockchain gives you; everything is dictated by the functionaries anyway. And if Liquid gets any meaningful number of users, it will also end up with huge fees and we'll be back to square one, because Liquid's architecture didn't actually leverage any of the trust tradeoffs it took on and just inherited all the same problems as on-chain bitcoin.
There are real solutions available. Lightning is the obvious alternative, but it has its own problems; I think a lot of people have been seeing the problems with small-scale self-custodial lightning, and it is extremely hard to scale. This is why I am extremely excited about fedimint. Fedimint has almost the exact same trust model as Liquid (a federated multisig) but is built on a much better architecture that actually allows for scaling. Fedimints don't have a blockchain but instead operate as a chaumian ecash mint. This allows them to do actually innovative things instead of just being bitcoin plus a couple of features. There isn't a block size; transaction throughput is gated only by the processing power of the guardians. Where on-chain smart contracts are limited by having to express everything in bitcoin script, fedimint's are pure rust code and allow for all sorts of crazy things. And it all still interoperates with Lightning, essentially giving you a Wallet of Satoshi with way less rug-pull risk, tons of new features, and strong privacy.
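To see why a chaumian mint is private, it helps to look at the classic construction. Fedimint uses a different blind-signature scheme in practice, but the textbook RSA version below (with toy numbers) shows the core trick: the mint signs a token it never sees, so redemption can't be linked back to issuance:

```python
import hashlib

# toy RSA mint key (absurdly small numbers, for illustration only)
n, e, d = 3233, 17, 413   # n = 61*53, e*d = 1 mod lcm(60, 52)

token = int.from_bytes(hashlib.sha256(b"serial").digest(), "big") % n

# user blinds the token before sending it to the mint
r = 2                                  # blinding factor, coprime with n
blinded = (token * pow(r, e, n)) % n

# mint signs the blinded value -- it learns nothing about `token`
blind_sig = pow(blinded, d, n)

# user strips the blinding; what remains is a valid signature on `token`
sig = (blind_sig * pow(r, -1, n)) % n
assert pow(sig, e, n) == token         # verifiable against the mint's pubkey
```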
All this said, it is sad we aren't talking about self-custodial scaling solutions. Today the only real one is Lightning, and with current fees it isn't reasonable unless you have a few million sats. This is just an inherent limitation of Lightning: it is excellent when you have high-value channels and can make payments across the network, but it does not excel at "pleb nodes", where one guy puts in 100k sats to try it out; that comes with too many limitations around paying on-chain fees and needing reserves for future on-chain fees. However, this is potentially solvable. Lightning solved the problem of scaling payments: if you have channels, one on-chain transaction can represent many actual payments. What Lightning did not solve is that one utxo still represents one user, and this is the limitation we are running into today. Currently the only way we solve this is with a multisig (Liquid and Fedimint), but we could solve it in a self-custodial way if we activated covenants. Covenants essentially give us fine-grained control over what can be spent from a UTXO before the UTXO even exists. There are a few proposals (CTV, APO, TXHASH) with varying approaches and tradeoffs, but imo something like this is desperately needed if we want any chance of scaling bitcoin in a self-custodial way.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28bolt12 problems
- clients can't programmatically build new offers by changing a path or query params (services like zbd.gg or lnurl-pay.me won't work)
- impossible to use in a load-balanced custodian way -- since offers would have to be pregenerated and tied to a specific lightning node.
- the existence of fiat currency fields makes it so wallets have to fetch exchange rates from somewhere on the internet (or offer a bad user experience), using HTTP which hurts user privacy.
- the vendor field is misleading, can be phished very easily, not as safe as a domain name.
- onion messages are an improvement over fake HTLC-based payments as a way of transmitting data, for sure. but we must decide if they are (i) suitable for transmitting all kinds of data over the internet, a replacement for tor; or (ii) not something that will scale well or on which we can count on for the future. if there was proper incentivization for data transmission it could end up being (i), the holy grail of p2p communication over the internet, but that is a very hard problem to solve and not guaranteed to yield the desired scalability results. since not even hints of attempting to solve that are being made, it's safer to conclude it is (ii).
bolt12 limitations
- not flexible enough. there are some interesting fields defined in the spec, but who gets to add more fields later if necessary? very unclear.
- services can't return any actionable data to the users who paid for something. it's unclear how business can be conducted without an extra communication channel.
bolt12 illusions
- recurring payments is not really solved, it is just a spec that defines intervals. the actual implementation must still be done by each wallet and service. the recurring payment cannot be enforced, the wallet must still initiate the payment. even if the wallet is evil and is willing to initiate a payment without the user knowing it still needs to have funds, channels, be online, connected etc., so it's not as if the services could rely on the payments being delivered in time.
- people seem to think it will enable pushing payments to mobile wallets, which it does not and cannot.
- there is a confusion of contexts: it looks like offers are superior to lnurl-pay, for example, because they don't require domain names. domain names, though, are common and well-established among internet services and stores, because these services have websites, so this is not really an issue. it is an issue, though, for people that want to receive payments in their homes. for these, indeed, bolt12 offers a superior solution -- but at the same time bolt12 seems to be selling itself as a tool for merchants and service providers when it includes and highlights features as recurring payments and refunds.
- the privacy gains for the receiver that are promoted as being part of bolt12 in fact come from a separate proposal, blinded paths, which should work for all normal lightning payments and indeed is a very nice solution. It is (or at least was, and should be) independent from the bolt12 proposal. A separate proposal that can be (and already is being) used right now also improves privacy for the receiver very much anyway: it's called trampoline routing.
-
@ ecda4328:1278f072
2023-12-16 20:45:10Intro
I've left Twitter (X), WhatsApp, Telegram, Instagram, Facebook and Google. The driving force behind this decision was the escalating overzealous censorship. I cannot condone platforms that actively indulge in this practice. In all honesty, I've always felt uneasy using "free" platforms riddled with ads where the user is the product and doesn't own the content they produce.
Let's be real: hardly anyone thoroughly reads the Terms of Service (ToS).
Censorship and Shadow Banning
The final straw was when I resorted to a text editor for drafting messages/comments, hoping to rephrase them so they wouldn't get deleted moments after posting. This isn't exclusive to just one platform; I've encountered it on YouTube and LinkedIn too. Twitter (or X, as I now refer to it) has a history of shadow banning users' posts. It's been beyond frustrating to get banned from Telegram groups simply for posing legitimate questions.
You can test LinkedIn's censorship mechanism too: simply add the word "Binance" (without quotes) to any of your comments and your post will disappear. At least that is what I saw a couple of months ago. Similarly, comments on YouTube often disappear precisely 60 seconds after posting if they contain specific keywords. I know they call it filtering, but it does not make any sense. In my opinion, legitimate companies and links shouldn't trigger these filters.
Community and Connections
Recently, I attended the Cosmoverse 2023 conference in Istanbul. Most attendees exchanged their Telegram or Twitter (X) contact information. Since I didn't have either, I gladly shared my Nostr and SimpleX Chat details. Many privacy advocates were quick to connect on SimpleX with me, though several didn't.
I learned about SimpleX Chat from Jack Dorsey, who mentioned it during a conversation in July:
While Signal has its shortcomings, I still keep it as a backup communication tool.
One More Last Straw
During the conference, I temporarily reinstalled Telegram to communicate with my group. Convincing nine individuals to switch to SimpleX on the spot seemed impractical.
At the conference, I bought a Keystone hardware wallet. Shortly after, I connected with the seller, Xin Z, on Telegram. However, I was banned from the official Keystone Telegram group right after posing a question.
Upon inquiring, Xin Z clarified that Telegram's official team had banned me, not the group's admin. 🤯
Business and Community: Collateral Damage
Censorship doesn't just silence voices; it hinders potential growth and stifles innovation. When platforms arbitrarily or aggressively censor content, they inadvertently create barriers between businesses and their potential clients. New users or clients, when encountering such heavy-handed moderation, may feel discouraged or unwelcome, causing them to retreat from the platform altogether.
Moreover, for businesses, this form of censorship can be devastating. Word-of-mouth, discussions, and organic community engagements are invaluable. When these channels are hampered, businesses lose out on potential clientele, and communities lose the chance to thrive and evolve naturally.
Censorship, in its overzealous form, breaks the very essence of digital communities: open dialogue. As platforms become more censorious, they risk creating sterile environments devoid of genuine interaction and rich discourse. Such an atmosphere is not conducive for businesses to foster relations, nor for communities to flourish. The ultimate price of overcensorship isn't just the loss of a few voices—it's the fragmentation of digital society as we know it.
Freedom to Choose
I strongly advocate for the adoption of Free and Open Source Software (aka FOSS) products. In these platforms, you aren't treated as the product. However, supporting them through donations/contributions is always an option. Platforms like Nostr and SimpleX Chat are excellent starting points.
My Nostr account:
npub1andyx2xqhwffeg595snk9a8ll43j6dvw5jzpljm5yjm3qync7peqzl8jd4
Disclaimer
This article reflects my personal experiences and opinions. It is not intended to criticize or demean the Keystone hardware wallet product or its quality. Furthermore, the actions taken by Telegram are not a direct representation of the views or policies of the Keystone Telegram group admins. Any reference to specific events or entities is made in the context of broader concerns about platform censorship.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28New and familiar songs
When listening to music in the background, choose songs you know well. To listen to new songs, set aside some time and listen to them with full attention.
Something similar happens with driving along familiar routes versus driving in new places: the first lets you do other things while driving "in the background"; the second requires full attention.
With music, I have constantly made the mistake of thinking I can get to know new songs while devoting myself to other tasks.
See also:
-
@ 4ef93712:97fe79c0
2023-12-11 21:17:25Introduction
Resolvr is building Open Source Justice by empowering sovereign communities to peacefully and voluntarily resolve their own disputes with open-source tools.
Our first product is designed for a community close to home: the Free and Open Source Software (FOSS) development ecosystem. We've built a peer-to-peer bounty marketplace that:
- gives developers assurances of payment for solving bounties,
- decentralizes and grows FOSS funding sources,
- unlocks access to the global talent pool,
- provides a frictionless on-ramp to earn Bitcoin (₿).
Resolvr does this by:
- limiting discretion of bounty grantors through reputational stakes and a Bitcoin (₿) escrow powered by Discreet Log Contracts🔮,
- resolving disputes through crowdsourced review of bounty solutions,
- using nostr🦩 for interoperable and censorship-resistant bounty discovery,
- using Lightning⚡ zaps for instant bounty payouts.
The alpha webclient is now live at resolvr.io.
The Problem
The root problem with conventional bounty marketplaces is all the trust that's required to make them work.
Satoshi Nakamoto (probably)
1. Centralized Custodians are Security Holes 🕳️
❌ Centralized bounty marketplaces that hold bounty funds can exit scam or go bankrupt.
- This year, BountySource (5,445 listed bounties worth $406,425 in 2019) stopped paying out bounties to developers.
❌ Existing bounty marketplaces, like Replit, don't have instant or free payouts:
- Devs are rewarded in Replit's 💩-coin and must request USD conversion by email.
- Replit charges a 25% withdrawal fee and requires a minimum withdrawal of $350.
❌ Knowledge workers in developing countries, many of whom rely on mobile wallets, have lost their funds via bank and telco attack vectors ($3M USD via 2,000 SIM cards, lost).
❌ Centralized bounty sites can censor posts for projects the site-owners disagree with or find unsavory.
2. Bounty Grantors are Judge 👩⚖️ and Jury 🏛️
❌ Bounty grantors have unlimited discretion over whether a solution meets their bounty criteria. And they can change the criteria after the dev has satisfied the original bounty.
❌ The Human Rights Foundation (HRF) recently changed its criteria on a nostr bounty after a dev provided a solution to the originally posted bounty. HRF decided to pay the dev only half the bounty.
❌ This is inherently unfair and inefficient!
Resolvr's Solution ✔️
- 📡 Distributed, interoperable, censorship-resistant communication protocol for bounty posting and discovery,
- (₿) Peer-to-peer lightning payouts and non-custodial escrow system,
- ⚖️ Decentralized Dispute Resolution
How it Works
Check out the Walkthrough here.
Zap Payouts and Crowdsourced Dispute Resolution
- Login to resolvr.io with your existing nostr keys (or let resolvr.io generate keys for you - remember to back them up!)
- Set up your profile by
- Linking your GitHub identity to your nostr profile (through resolvr.io or Amethyst on Android) following NIP-39 instructions (a sketch of the resulting profile tag appears after this walkthrough)
- Getting a lightning address for zap payments (getalby.com)
- Maker (entrepreneur, foundation, FOSS project) - post a bounty with detailed criteria and an amount.
- Taker (freelance developer) - find a bounty to solve, apply to the bounty.
- Maker - assign a Taker to solve your bounty.
Ideal Path
- Taker - solve the assigned bounty and provide a link to the work product in the comments (e.g., github repo or PR)
- Maker - click "pay" to zap bounty reward
Happy Path
- If either party disputes the bounty: Taker or Maker clicks "poll" to initiate a nostr zap poll,
- Community members (other Makers/Takers) review bounty/solution and vote to resolve the dispute,
- Maker - if community votes in favor of Taker, zap payout.
Sad Path
- Maker does not comply with community decision, burns reputation and cannot find developers for future projects.
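For reference, here is a rough sketch of what the NIP-39 GitHub claim from the profile-setup step looks like on a kind-0 profile event; the username and gist id below are made-up placeholders:

```python
# Per NIP-39, external identities are "i" tags on the kind-0 metadata event:
# ["i", "<platform>:<identity>", "<proof>"], where for GitHub the proof is
# the id of a gist (created by that account) containing your npub.
profile_event = {
    "kind": 0,
    "content": '{"name": "bounty-hunter"}',
    "tags": [
        ["i", "github:your-github-username", "f25ab1c9example0gistid"],
    ],
}
```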
Innovations
Resolvr's success is aligned with the success of nostr🦩 and bitcoin. We're advancing the tech with novel applications:
- Authored NIP-43: bounties over nostr
- Discovered new use-case for NIP-69 zap polls: dispute resolution
- First p2p bounty marketplace with instant payouts over lightning
- First p2p bounty marketplace with integrated dispute resolution
- New frictionless onramp to earn bitcoin, grow circular economy
What's Next for the Resolvr Bounty Marketplace?
The next dispute resolution feature on the roadmap for the Resolvr bounty marketplace is on-chain DLC🔮 escrow. We're putting DLCs🔮 on nostr🦩 and building a desktop client to do it!
The desktop client will allow Makers and Takers to post, apply and assign bounties just like the web client. But it will also create and broadcast DLC🔮 escrow contracts. In the event of a dispute, the Resolvr oracle will attest to the results of the crowdsourced zap poll to release the funds to the winning party.
In the future, Resolvr will allow Makers and Takers to select their own oracles, and communities can be listed as "review association" oracles, earning bitcoin for resolving disputes over bounties (think: foundations, hackerspaces, bitdev meetup groups).
For more details about our DLCs🔮 on nostr, check out the Resolvr's escrow repo on GitHub!
In addition to DLCs🔮, Resolvr's roadmap includes:
- Decentralizing the default Resolvr oracle through FediMint🔅 protocol and FROST🥶.
- Scaling escrow through Lightning⚡ DLCs🔮
- Bounty review and code testing with AI🤖 (DVMs)
Beyond Bounties...
The Resolvr Project is building an open-source dispute resolution system on bitcoin and adjacent protocols.
Resolvr will revolutionize dispute resolution on a global scale, offering secure, open-source, customizable, decentralized, and radically cost-efficient mechanisms for infinite use cases.
Dispute Resolution is a big opportunity, with increasing demand driven by AI services and microtransactions.
Thanks for joining us on this journey to make FOSS funding more fair and efficient, expand access to Bitcoin, and provide communities with the tools to peacefully resolve their own disputes!
Visit resolvr.io to post and claim bounties today!
The Team
- Dave Schwab (🦩 | 𝕏): Chief Product Officer for a legal tech SaaS.
- Aaron Daniel (🦩 | 𝕏): Appellate attorney and dispute systems designer. Author of the Bitcoin Brief newsletter and regular contributing author to Bitcoin Magazine.
- Chris@Machine (🦩 | 𝕏): Nostr developer and creator of Blogstack.io, a longform nostr platform. Streaming nostr programming workshops on Zap.Stream and Youtube.
- Utibe Essien (🦩): Product designer and web adventurer.
- Ras (𝕏): Bitcoin hacker. Founder of Bitcoin Grove, a community accelerator and physical hacker space in Miami.
- Brian: Front end web developer and crypto enthusiast.
- Tommy (🦩): Google Software Engineer who enjoys making the State obsolete in his free time.
- Randy (𝕏): Full-stack developer currently working on Greenlight at Blockstream.
- Derek Hinkle: Backend Developer for a legal tech SaaS; machine learning and AI powered automation.
- Justin Moeller (🦩 | 𝕏): Spiral and HRF grantee working on Fedimint.
Connect with Resolvr!
- Post and Claim Bounties on the Resolvr Bounty Marketplace TODAY: https://www.resolvr.io
- Questions/Suggestions? Join our Discord: https://discord.gg/DsqRw8My4m
- Stream the team's weekly All-Hands Call, live every Monday @ 1:00pm ET on Zap.Stream
- Contribute to the project on our GitHub!!
- Witness Resolvr's evolution through our previous #buildinpublic weekly updates.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The Cause
Menger's Principles of Political Economy is the only book that emphasizes CAUSE the whole time. scientists all seem not to know, or to always forget, that things have causes, and that true knowledge is knowledge of the cause of things.
cause is a metaphysical category far superior to any correlation or hypothesis-test result; it cannot be discovered by any econometric artifice nor reduced to mere statistical temporal antecedence. the cause of phenomena cannot be proven scientifically, but it can be known.
Menger's book tells the reader the causes of various economic phenomena and interlinks them in such a way that the chaotic world of the economy seems to acquire an order the moment you read it. it is a magical, indescribable sensation.
when I recommended it to you, what I wanted was to imbue you with the spirit of the search for the cause of things. after reading it, you are able to perceive causal continuity in the most complex phenomena of today's economy, to see the causes running between all government action and its various consequences in human life. I do this every day, and it is the best feeling in the world when the chaos of the news in the newspaper's Economy section -- which to the very journalist who wrote it makes no sense (so much so that he writes everything wrong) -- falls into an ordered system of causes and consequences.
I probably always err on some or several points, but even so it is marvelous. or rather, it is even more marvelous when I discover the error and reinsert the correction into that beautiful rationalization of the order of the economic world, which is the order of God.
in a scrap note for T.P.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28IPFS problems: Community
I was an avid IPFS user until yesterday. Many, many times I asked simple questions, for which I couldn't find an answer on the internet, in the #ipfs IRC channel on Freenode. Most of the time I didn't get an answer, and even when I did it was rarely from someone who knew IPFS deeply. I've had issues go unanswered on js-ipfs repositories for years – one of these was raising awareness of a problem that then got fixed some months later by a complete rewrite; I closed my own issue after realizing that by myself some couple of months later. I don't think the people responsible for the rewrite ever acknowledged that they had fixed my issue.
Some days ago I asked some questions about how the IPFS protocol worked internally, sincerely trying to understand the inefficiencies in finding and fetching content over IPFS. I pointed out it would be a good idea to have a drawing showing that, so people would understand the difficulties (which I didn't) and wouldn't be pissed off by the slowness. I was told to read the whitepaper. I had already read the whitepaper, but I read the relevant parts again. The whitepaper doesn't explain anything about the DHT and how IPFS finds content. I said that in the room, and was told to read it again.
Before anyone misreads this section, I want to say I understand it's a pain to keep answering people on IRC if you're busy developing stuff of interplanetary importance, and that I'm not paying anyone, nor do I have the right to be answered. On the other hand, if you're developing a super-important protocol, financed by many millions of dollars, and a lot of people are hitting their heads against your software and there's no one to help them; if you're always busy but never deliver anything that brings joy to your users, something is very wrong. I sincerely don't know what IPFS developers are working on, and I wouldn't doubt they're working on important things if they said so, but what I see – and what many other users see (take a look at the IPFS Discourse forum) – is bugs, bugs all over the place, confusing UX, and almost no help.
-
@ 2edbcea6:40558884
2023-12-10 16:57:25Happy Sunday #Nostr !
Here’s your #NostrTechWeekly newsletter brought to you by nostr:npub19mduaf5569jx9xz555jcx3v06mvktvtpu0zgk47n4lcpjsz43zzqhj6vzk written by nostr:npub1r3fwhjpx2njy87f9qxmapjn9neutwh7aeww95e03drkfg45cey4qgl7ex2
The #NostrTechWeekly is a weekly newsletter focused on the more technical happenings in the Nostr-verse.
Let’s dive in!
Recent Upgrades to Nostr (AKA NIPs)
1) (Proposed) NIP 71: Video Events
If Nostr wants to disrupt Netflix or Youtube, we’ll need support for video content.
So far Nostr devs have mostly shied away from storing full files (videos, PDFs, images, etc) on relays in favor of storing files on dedicated file storage/CDN solutions (e.g. nostr.build). That has meant that to bring file-based content into the nostr-verse, it either has to be referenced within a note or published (via NIP 94) as a Nostr event that is just a reference to a file.
NIP 94 is powerful because we can wrap something that’s not on Nostr in a Nostr event which makes it referenceable and make that reference shareable natively via relays.
This proposal is to create a wrapper around NIP 94 to help facilitate more than just hosting the video content. It allows video-based clients to do two things: 1) create a replaceable nostr event for the video (so if the creator uploads a new version or switches hosts, people can still reference the same Nostr event), and 2) track video view counts.
It’s still under development but could be helpful as we try to bring more content creators over to Nostr 💪
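As a sketch of the shape this could take (the kind number and tag names are my guesses at the proposal, not a settled spec):

```typescript
// Hypothetical NIP-71-style video event wrapping NIP-94 file metadata.
// Kind number and tag names are guesses for illustration only.
const videoEvent = {
  kind: 34235,                                   // guessed addressable kind
  tags: [
    ["d", "my-first-video"],                     // replaceable identifier:
                                                 // re-publishing with the same
                                                 // "d" updates the event
    ["title", "My first video"],
    ["url", "https://cdn.example.com/video.mp4"],
    ["m", "video/mp4"],                          // MIME type, as in NIP-94
    ["size", "123456789"],                       // bytes
  ],
  content: "A short description of the video.",
};
```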
Author: zmeyer44
2) (Proposed) NIP 44: Places 🌏
Much physical commerce is discovered by consumers via map apps. You go on Google or Apple Maps to find an art supply store near your apartment, or coffee shops that are open right now when you’re in a new city. These centralized solutions have dominated, but Yondar seeks to change that.
Yondar is a Nostr client that allows users to publish places on a map. This could be places of business or events, really anything that has a location. It could unlock crowdsourcing the place-based apps we’re used to using, so we can wrest control back from the tech giants.
This NIP is a proposal to make “place” events a protocol that’s available to any Nostr client. So that the idea can spread beyond just Yondar. Can’t wait!
Author: arkin0x
Notable Projects
Updates to negentropy and strfry 🛜
nostr:npub1yxprsscnjw2e6myxz73mmzvnqw5kvzd5ffjya9ecjypc5l0gvgksh8qud4 is the primary author of the StrFry relay. It’s one of the most popular relay implementations used by relay operators because it’s performant and extensible.
Doug announced a slew of improvements this week to a library underpinning strfry, called negentropy. Negentropy (from my limited understanding) is a protocol for set reconciliation, which I would summarize as optimizing the process of syncing two sets of data quickly and with minimal compute resources and data transfer.
I don't know the innards of strfry, but I do know that when scaling systems like relays, much of the difficulty comes from distributing data onto many servers so that the relay can hold increasing amounts of data but still serve users' requests quickly. You usually can't have your cake and eat it too.
Improvements to projects like negentropy allow us to have relays that are both larger and more performant. They're not the sexiest advances and usually go uncelebrated, but Nostr will be faster for users and relay operation should get cheaper because of this project.
Thanks Doug!
StrFrui ✔️
Relays are inundated with new events constantly, and sometimes relay operators want to "sift" through new Nostr events pushed to the relay to ensure that unwanted content can be rejected. This has been done for rate-limiting, spam filtering, and sometimes outright npub censorship, which, as relay operators, is their right.
Problem is, as Nostr usage scales, relay operators will need more sophisticated tools to manage content on their relays. nostr:npub168ghgug469n4r2tuyw05dmqhqv5jcwm7nxytn67afmz8qkc4a4zqsu2dlc built StrFrui which is a framework for building sifters for relays running strfry (common, performant relay). Thanks for helping Nostr scale 🫡.
Latest conversations: nsecBunker
The landscape for security of Nostr keys is rapidly evolving. Here are the current options:
- Paste your nsec into the client - yikes, but sometimes there's no alternative, like on iOS
- Nostr Browser Extension (nos2x, Nostr Connect, etc) - good if you're on a browser, not encrypted at rest.
- Android Signing App - great if you’re on Android and the client supports it.
- nsecBunker - great security if you want to be hands on managing a bunker
We’re early so we’re still learning the architectures that work the best and those that will be compatible with future, more user-friendly experiences. I’m starting to think nsecBunker has the most potential, because it’s cross platform, encrypted at rest, and can be built so that users never have to manage Nostr keys if they don’t want to.
How does nsecBunker work?
Short answer: it's complicated. Slightly longer answer:
1. Give the bunker your nsec (or let it generate one).
2. You authorize a client with certain permissions (can publish kind 1 notes for 10 hours, etc).
3. That generates an npub+token that you can use to log in to clients that support nsecBunker.
4. While that authorization lasts, that client can take authorized actions on behalf of the user whose key is in the bunker.
There’s also a flow in reverse (starts with the client wanting to take an action), but I’m not going to get into it right now. It’s much harder to explain.
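Under the hood, this coordination itself rides on Nostr events (NIP-46, "Nostr Connect"). A minimal sketch of the client-to-bunker request, assuming kind 24133 and the JSON-RPC-like payload as I understand the spec; nip04Encrypt stands in for the actual encryption step:

```typescript
// Sketch of a NIP-46-style remote signing request. Assumptions: kind
// 24133 and the {id, method, params} payload follow NIP-46; nip04Encrypt
// is a stand-in for NIP-04 encryption; relay plumbing is omitted.
type UnsignedEvent = {
  kind: number; created_at: number; tags: string[][]; content: string;
};

declare function nip04Encrypt(
  senderPrivkey: string, receiverPubkey: string, plaintext: string,
): string;

function buildSignRequest(
  clientPrivkey: string, clientPubkey: string,
  bunkerPubkey: string, unsigned: UnsignedEvent,
) {
  const payload = JSON.stringify({
    id: "req-1",
    method: "sign_event",
    params: [JSON.stringify(unsigned)],
  });
  return {
    kind: 24133,                    // ephemeral "Nostr Connect" event
    pubkey: clientPubkey,           // client-side key, not the user's
    tags: [["p", bunkerPubkey]],    // addressed to the bunker
    content: nip04Encrypt(clientPrivkey, bunkerPubkey, payload),
  };
}
// The bunker decrypts, checks the client's authorization, signs, and
// replies with another kind 24133 event carrying the signed event.
```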
How is this better?
NsecBunker has a few advantages over other solutions:
1. Users never need to copy/paste the nsec 1. Nsec is encrypted in the bunker with a passphrase so even if bunker is hacked you’re ok.
1. Bunkers can work in any Nostr context (browser, mobile app, hardware device, whatever).The biggest downside to nsecBunker is that you need a Nostr account to be the admin of the keys stored in the bunker. Whereas the Nostr Browser Extensions you can have one account and self-manage it.
The pot of gold at the end of the rainbow
There are always going to be people who want to custody their own Nostr keys, but most humans likely won't have the knowledge or inclination to do so. What they want is something that's more like the "Login with Twitter" button.
In this future, a user would sign up with a Bunker provider (these could be paid services). The user would use a email/password to manage their account with the provider. The user could generate as many Nostr accounts as they like, but the Bunker custodies the keys.
Clients would then have a “Login with Bunker” button that allows the user to put in their Nostr address (NIP 05) and that will be de-referenced to a bunker and a few relays to help the client and the bunker do all the coordination to get the user logged in and ready to use the client. It will basically be OAuth but using Nostr events as the transfer protocol instead of just HTTP requests.
This architecture allows the least technical users to have a familiar experience while benefiting from all the security and control tools that sophisticated companies still struggle with.
Best of all, if users can export their keys in the future, then they can always migrate out of a custodial solution and self-custody once they learn the advantages. More freedom for users and freedom tech still gets more accessible to everyday folks. nostr:npub1l2vyh47mk2p0qlsku7hg0vn29faehy9hy34ygaclpn66ukqp3afqutajft created a proof of concept of just such an onboarding flow, check it out here.
What needs to be built
While bunker providers would need to build some management software around nsecBunker (user/subscription management, passing along permission requests to users, etc) most of the work to make this possible is client-side.
More clients will need to support the authorization scheme that nsecbunker requires. This would require new login capabilities, as well as supporting nsecBunker style remote signing capabilities (to delegate authorization to the nsecbunker instead of relying on the user’s nsec being on the same device as the client).
It’s not insurmountable, but it is not trivial; and if history is any guide, client adoption will take time. We’ve seen that Nostr browser extensions still haven’t become universal, even though most developers will tell you they should be mandatory at this point.
My prediction is that iOS clients will lead the charge, because iOS doesn't have the same ability as Android for apps to interact with each other (via Android Signing Apps), so nsecBunker may be the only way for iOS clients to stop requiring users to paste their nsec directly into the client 😬.
I think we’re well on our way to this future, and it’s one that will be much easier for normies to utilize while still allowing for people to opt-out and self custody if they wish.
Until next time 🫡
If you want to see something highlighted, if we missed anything, or if you’re building something we didn’t post about, let us know. DMs welcome at nostr:npub19mduaf5569jx9xz555jcx3v06mvktvtpu0zgk47n4lcpjsz43zzqhj6vzk
Stay Classy, Nostr.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28A crappy course on torrents
In 8 points[^twitterlink]:
- You start seeding a file -- that means you split the file in a certain way, hash the pieces and wait.
- If anyone connects to you (either by TCP or UDP -- and now there's the webRTC transport) and ask for a piece you'll send it.
- Before downloading anything, leechers must understand how many pieces exist and what they are -- among other things. For that there exists the .torrent file; it contains the final hash of the file, metadata about all files, the list of pieces, and the hash of each.
- To know where you are so people can connect to you[^nathole], there exists an HTTP (or UDP) server called "tracker". A list of trackers is also contained in the .torrent file.
- When you add a torrent to your client, it gets a list of peers from the trackers. Then you try to connect to them (and you keep getting peers from the trackers while simultaneously sending data to the tracker like "I'm downloading, I have x bytes already" or "I'm seeding").
- Magnet links contain a tracker URL and a hash of the metadata contained in the .torrent file -- with that you can safely download the same data that should be inside a .torrent file -- but now you ask it from a peer before requesting any actual file piece.
- DHTs are an afterthought and I don't know how important they are for the torrent ecosystem (trackers work just fine). They intend to replace the centralized trackers with message passing between DHT peers (DHT peers are different and independent from file-download peers).
- All these things (.torrent files, tracker messages, messages passed between peers) are done in a peculiar encoding format called "bencode" that is just a slightly less verbose, less readable JSON.
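To give a feel for the format, here's a minimal bencode encoder sketch (decoder omitted):

```typescript
// Tiny bencode encoder sketch: integers are i<n>e, strings are
// <len>:<bytes>, lists are l...e, and dicts are d...e with sorted keys.
type Bencodable = number | string | Bencodable[] | { [k: string]: Bencodable };

function bencode(v: Bencodable): string {
  if (typeof v === "number") return `i${v}e`;
  if (typeof v === "string") return `${Buffer.byteLength(v)}:${v}`;
  if (Array.isArray(v)) return `l${v.map(bencode).join("")}e`;
  const keys = Object.keys(v).sort();
  return `d${keys.map((k) => bencode(k) + bencode(v[k])).join("")}e`;
}

// bencode({ announce: "http://tracker.example/announce" })
// => 'd8:announce31:http://tracker.example/announcee'
```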
[^twitterlink]: Posted first as this Twitter thread. [^nathole]: Also your torrent client must be accessible from the external internet; NAT hole-punching is almost a myth.
-
@ 55b17d41:d6976606
2023-12-10 01:25:43Details
- ⏲️ Prep time: 15 min
- 🍳 Cook time: 1 hour
- 🍽️ Servings: 12
Ingredients
- 2/3 cup creamy peanut butter
- 1/2 cup powdered sugar
- 1/4 cup graham cracker crumbs
- 1 pinch of salt
- 1 12-oz bag of chocolate chips
Directions
- In a large bowl, mix peanut butter, powdered sugar, graham crackers, and salt until well combined.
- Melt the chocolate either with a chocolate melting pot or in the microwave in a medium size bowl. If using the microwave, cook for 30 seconds and stir. Add more time if needed until the chocolate is melted.
- Line a 12-cup muffin tray with cupcake liners and fill each one with ½ tablespoon of melted chocolate. Spread the chocolate around to cover the bottom of the liner. Place the tray in the fridge for a few minutes to let them set. Doing this prevents the peanut butter filling from sinking to the bottom of the chocolate.
- Place a tablespoon of the peanut butter filling on top of the chocolate, spreading it around to form a circle and then slightly pushing it down into the chocolate. Top the peanut butter with another ½ tablespoon of melted chocolate. Smooth out the top and make sure it completely covers the peanut butter underneath.
- Refrigerate for an hour or until the chocolate has hardened. You can enjoy them cold or at room temperature. Enjoy!
-
@ 21b41910:91f41a5e
2023-12-03 19:36:27Summary
A user lost 180,000 sats due to a series of unfortunate events involving nostr clients, user error, lack of validation checks, and lack of key rotation or mitigation approaches.
What happened
On Saturday 2023 December 2, a user had their profile set to a zeuspay address. Zeuspay addresses are not readily zappable as the receiver needs to acknowledge each zap sent to them. A request was made that they change their lightning address.
The user chose to change to the getalby address that they had previously set up. After entering an older, incorrect value, they went to update their profile, but then managed to accidentally post their LND wallet connection string instead.
A fellow nostrich realized their mistake and informed them. The user began setting up a stacker.news account, while the helper connected via BlueWallet using the connection string and moved the funds out. The helper noted to the user that an alby account can't create a separate wallet per account or rotate to a new wallet, and that in order to continue using getalby, they would need to establish an entirely new alby account with a different email. The funds were sent from the helper to the user's new stacker.news account.
After this, the user linked their stacker.news account with the alby account that had the compromised wallet. The user moved some funds to a different account. Later, after the user went to sleep, the remaining funds were pulled from their stacker.news account. It's believed that Zeus wallet or BlueWallet was used with the wallet connection information to log in to stacker.news via LNURL-Auth.
The user believes their nsec was compromised as well, and has set up a new Nostr account, marked the old one as compromised, and is establishing new wallets for zaps.
Realizations
Not only do Nostr keys have no key rotation, LNURL-Auth suffers from this as well. While ad-hoc implementation approaches exist, we need more work in this area to establish common, consistent patterns for how to rotate to new keys, wallets, etc.
A compromised lightning wallet is quite severe, because different services have no way to know the wallet is compromised: by design there is no central server to query against.
Recommendations
Users
- Take extra caution when setting up wallet connect information and lightning address in your preferred clients.
- Make note of where your lightning address and wallet connection information is being used. E.g., track in a password manager.
- Be prepared to immediately take steps to update if ever compromised.
- Where possible, use a dedicated lightning wallet (without funds) for LNURL-Auth logins, separate from your extensions for logging in and signing events for Nostr.
General Nostr Clients
- Where user profiles can be edited, add appropriate validation checks on the LUD06 and LUD16 fields. There are plenty of profiles with junk data that doesn't make sense.
- LUD06 should be deprecated, but should be a LNURL string if provided.
- LUD16 should be preferred, and should only allow values in username@domain format. Here's a sample regex:
[a-z0-9-_]*@(([a-zA-Z]{1})|([a-zA-Z]{1}[a-zA-Z]{1})|([a-zA-Z]{1}[0-9]{1})|([0-9]{1}[a-zA-Z]{1})|([a-zA-Z0-9][a-zA-Z0-9-_]{1,61}[a-zA-Z0-9]))\.([a-zA-Z]{2,6}|[a-zA-Z0-9-]{2,30}\.[a-zA-Z]{2,3})
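As a sketch, a client could anchor that regex and run it before saving a profile (the anchoring and the helper function are my additions):

```typescript
// Sketch: validating a LUD16 lightning address client-side using the
// sample regex above, anchored so the whole string must match.
const LUD16_RE = new RegExp(
  "^[a-z0-9-_]*@(([a-zA-Z]{1})|([a-zA-Z]{1}[a-zA-Z]{1})|([a-zA-Z]{1}[0-9]{1})|" +
  "([0-9]{1}[a-zA-Z]{1})|([a-zA-Z0-9][a-zA-Z0-9-_]{1,61}[a-zA-Z0-9]))\\." +
  "([a-zA-Z]{2,6}|[a-zA-Z0-9-]{2,30}\\.[a-zA-Z]{2,3})$"
);

function isValidLud16(address: string): boolean {
  return LUD16_RE.test(address);
}

// isValidLud16("satoshi@getalby.com")      -> true
// isValidLud16("not a lightning address")  -> false
```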
LNDHub, LNBits, Accounts type systems
- Add support for rotating from a compromised wallet account to a new one.
- Users and Admins should be able to mark a wallet as compromised.
- No spends should be able to be performed from the account.
- Received funds should instead be diverted to replacement wallet.
- Can continue allowing LNURL-Auth to function
- Enhance API calls to ascertain if a wallet is marked as compromised
BlueWallet, Zeus, and any wallet using LNDHub-like accounts
- Add support for any new API calls that can indicate a wallet as compromised
- Recommend separation of wallets used for receiving value vs for LNURL-Auth
Alby Recommendations
- Where the Wallet Connection warning is, make it more prominent. Use bold red colors. Inform the user that this gives full access to the wallet and is non-revocable.
- Add support for multiple wallets on an alby account. Preferably one per subaccount.
- See notes for LNDHub about handling compromised wallets.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28IPFS problems: Inefficiency
Imagine you have two IPFS nodes and unique content, created by you, on the first one. From the second, you can connect to the first and everything looks right. You then try to fetch that content. After some seconds it starts coming, the progress bar begins to move. It's slow, very slow; doing an rsync would have been 20 times faster.
The progress bar halts. You investigate: the second node is not connected to the first anymore. Why, if that was the only source for the file we're trying to fetch? It remains a mystery to this day. You reconnect manually, the progress bar moves again, halts, you're disconnected again. Instead of reconnecting you decide to add the second node to the first node's "Bootstrap" list.
I once tried to run an IPFS node on a VPS and store content on S3. There are two S3 datastore plugins available. After fixing some issues in one of them, recompiling go-ipfs, figuring out how to read settings from the IPFS config file, creating an init profile and recompiling again I got the node running. It worked. My idea was to host a bunch of data on that node. Data would be fetched from S3 on demand so there would be cheap and fast access to it from any IPFS node or gateway.
IPFS started doing hundreds of calls to S3 per minute – something I wouldn't have known about if I hadn't inserted some log statements in the plugin code, I mean, before the huge AWS bill arrived. Apparently that was part of participating in the DHT. Adjusting some settings turned my node into the listen-only thing I intended, but I'm not 100% sure it would work as an efficient content provider, and I'll never know, as the memory and CPU usage got too high for my humble VPS and I had to turn it down.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28ijq
An interactive REPL for jq with smart helpers (for example, it automatically assigns each line of input to a variable so you can reference it later, and it always references the previous line automatically).
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Trelew
A CLI tool for navigating Trello boards. It used vorpal for an "immersive" experience and was pretty good.
-
@ 2edbcea6:40558884
2023-12-03 18:55:51Happy Sunday #Nostr !
Here’s your #NostrTechWeekly newsletter brought to you by nostr:npub19mduaf5569jx9xz555jcx3v06mvktvtpu0zgk47n4lcpjsz43zzqhj6vzk written by nostr:npub1r3fwhjpx2njy87f9qxmapjn9neutwh7aeww95e03drkfg45cey4qgl7ex2
The #NostrTechWeekly is a weekly newsletter focused on the more technical happenings in the nostr-verse.
Let’s dive in!
Recent Upgrades to Nostr (AKA NIPs)
1) Deleted NIP 22: created_at limits
By its very nature Nostr has trouble reliably knowing when a piece of content was published. NIP 22 was an attempt to allow relays to be the authorities on when an event was created/published by only accepting events with a “created_at” that was recent according to the relay.
The challenge with this is rebroadcasting. The “created_at” can’t be modified because then the signature on the event that confirms the user authorized the event would be invalid (the signature is created by taking into account the created_at field). So if relays reject events that weren’t created recently, then events can’t be moved around (if a user wants to move relays or back up their content).
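For context, this is a consequence of how event ids are computed under NIP-01: the id is a hash over a serialization that includes created_at, and the signature signs that id. A minimal sketch (signing omitted):

```typescript
// Why created_at is tamper-evident (per NIP-01): the event id is the
// sha256 of a serialization that includes created_at, and the signature
// covers that id. Uses Node's built-in crypto module.
import { createHash } from "crypto";

function eventId(ev: {
  pubkey: string; created_at: number; kind: number;
  tags: string[][]; content: string;
}): string {
  const serialized = JSON.stringify([
    0, ev.pubkey, ev.created_at, ev.kind, ev.tags, ev.content,
  ]);
  return createHash("sha256").update(serialized).digest("hex");
}
// Changing created_at changes the id, which invalidates the signature.
```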
Overall, not many relays implemented NIP 22 because it caused too many problems. On top of that, whenever the "created_at" has to be reliable, devs tend to use OpenTimestamps, which is a non-Nostr but much more reliable system.
So, nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6 deleted NIP 22.
2) (Proposed) NIP 88: Relay Notifications
The impetus for this NIP was the difficulty paid relay providers have getting their subscribers to pay their subscriptions. Currently it's difficult to manage a recurring subscription with lightning, and most of the time you have to manually authorize the payment every month.
This NIP outlines a way for relays to ask clients to pass along a message to users. The first use case is for relays to remind users to pay their subscription and include a link to pay it easily.
What’s nice about this is: 1) The protocol is more general than that specific use case, as the relay can send any kind of message to the user. Lots of possibilities! 2) This doesn’t require clients to enforce subscription agreements between the user and the relay, preserving decentralization and the opt-in nature of Nostr development.
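A sketch of what such a notification might look like; the kind number, tag names, and invoice string here are invented for illustration, since the proposal is still in flux:

```typescript
// Hypothetical relay notification under the NIP-88 proposal. The kind
// number, tag names, and invoice are made-up placeholders.
const notification = {
  kind: 13000,                                // placeholder kind
  content: "Your subscription expires in 3 days.",
  tags: [
    ["relay", "wss://paid-relay.example.com"],
    ["invoice", "lnbc..."],                   // hypothetical renewal invoice
  ],
};
```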
Author: nostr:npub1gcxzte5zlkncx26j68ez60fzkvtkm9e0vrwdcvsjakxf9mu9qewqlfnj5z
3) (Proposed) NIP 29: Image Metadata
There’s often metadata on images that is useful for clients displaying images (alternative text, dimensions, blurhashes, etc). There isn’t a standard way to provide that content to clients; this proposal changes all that.
It proposes an “imeta” tag to Nostr events so that when a user publishes an event with an image in it, their client can include this metadata to help future clients render it well. 💪
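A sketch of how that could look on a kind 1 note (field names follow my reading of the proposal and may differ from the final spec):

```typescript
// Hypothetical kind-1 note carrying image metadata via the proposed
// "imeta" tag. Field names are my reading of the proposal.
const noteWithImage = {
  kind: 1,
  content: "sunset tonight https://example.com/sunset.jpg",
  tags: [
    [
      "imeta",
      "url https://example.com/sunset.jpg",
      "alt A sunset over the ocean",              // alternative text
      "dim 1920x1080",                            // dimensions
      "blurhash LKO2?U%2Tw=w]~RBVZRi};RPxuwH",    // placeholder blurhash
    ],
  ],
};
```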
Author: staab
Notable Projects
Search and NostrDB 🔍
NostrDB is essentially a relay that lives within a Nostr client. This has many advantages for clients because it simplifies the client logic around managing events from many relays.
From what I can tell in my research, Damus uses NostrDB as the main relay that the app reads from when rendering the user experience in the app. Then it’s NostrDB’s job to sync events between NostrDB and the user’s desired relays.
Without NostrDB, clients have to connect to all the users' relays simultaneously and sync events. As syncing is going on, the client then needs to update the user's experience. Doing both at once makes the client's code more complicated and usually less performant. By delegating the concern of syncing with the user's relays to NostrDB, the client can focus on the user experience.
The new news here is that nostr:npub1xtscya34g58tk0z605fvr788k263gsu6cy9x0mhnm87echrgufzsevkk5s has improved NostrDB to support very performant search. That means a client can sync events from the user’s relays, and then search that content in milliseconds without making a network call.
Without this, clients will need to reach out to each relay individually and make a filter request and wait for responses to trickle in. Then the client has to aggregate all the responses and show the user a result.
An architecture where there’s a NostrDB on the client, and/or a NostrDB powered relay that specializes in search, could make the work of discoverability much easier for clients. LFG!
Onboarding Normies 🛝
Rabble's Talk on Nostr for Normies has motivated many thoughtful discussions (and some less-kind discussions) around how to attract and retain more regular folks on Nostr. But it seems no one is as prolific, in general but especially on this problem, as nostr:npub1l2vyh47mk2p0qlsku7hg0vn29faehy9hy34ygaclpn66ukqp3afqutajft.
This project is a demo of an experience for Nostr users to sign up for Nostr without ever touching the public and private keys. They create a username (NIP 05 address) and use an email to authorize any future recoveries of access to the Nostr account.
The experience is familiar to anyone that’s signed up for a service by logging in with Google or Twitter (which is most internet connected westerners). The experience still preserves choice for the user (where are they going to host their NIP 05 address, which nsecBunker will custody the keys, which email they will use to do recoveries, etc).
This could be the foundation of an onboarding experience that’s great for regular folks. Great work Pablo!
Better Backups 📦
Without the ability for users to move relays, Nostr isn’t truly censorship resistant. So, backup services will always be a core tool for the freedom-loving user of Nostr. nostr:npub1cmmswlckn82se7f2jeftl6ll4szlc6zzh8hrjyyfm9vm3t2afr7svqlr6f just released an update to NostrSync.
It feels more performant and simpler for the user. On top of that, users can now manage their backups, which means we can start to take snapshots of our Nostr data. All stored locally, so you continue to own your data.
Latest conversations: Onboarding Normies
The people in our community with the most experience building social products that people have adopted at scale have been telling us that Nostr is not ready for normies yet (Rabble's Talk on Nostr for Normies and Jack's latest assessment).
But we are seeing a ton of development that is making Nostr better for normies, and it seems like we’re close to all the lego pieces being ready. Here’s what I’ve seen lately.
Abstracting keypairs
Regular folks don’t understand public/private keypairs. It took decades for regular folks to accept usernames and passwords, and best case it’ll still be a while before regular folks understand public/private keypairs.
To help people join Nostr today, it’s sensible to offer services that wrap Nostr keypairs with a username/password experience that’s recoverable by email. People will understand it, and as long as they can still download their keys, they can self-custody whenever they want. If service providers end-to-end encrypt the actual Nostr keys, it could even be said to be non-custodial.
Pablo’s recent demo of an easy signup flow without managing Nostr keys is orders of magnitude easier to understand and educate new Nostr users on. Clients that adopt workflows like this could see retention increase quite a bit.
Finding and Filtering content
Decentralized systems are hard to search for specific content. It took a while for Google to solve the issue for web content. Twitter did incredibly well because of their search functionality. Nostr currently struggles to help people of all interests find the content and content creators they want.
Will’s work on NostrDB (as mentioned above) seems like it could be the foundation of performant search of Nostr content. Purpose-built search relays would be a valuable tool for clients helping users solve the discoverability, and we may be close to a truly great technological solution.
Approaches like Coracle’s introduction of “web of trust” based content discovery could also tackle the discoverability from a different direction. As is, it makes the global feed a bit more enjoyable. In the future, this pattern could power everything from discoverability, to semi-algorithmic feeds, to opt-in content moderation based on your social graph.
Part of making content enjoyable for regular folks is allowing and aiding people in filtering their feed how they like. Content moderation should not be forced on anyone, as that’s not in the spirit of Nostr, but rather allowing users to opt-in to filtering algorithms.
Rabble's Nos.lol is taking the first steps towards using web of trust to help filter content in an opt-in way. Labeling tools like Ontolo could soon aid users in generating content warnings or lists of pubkeys that are sources of content people may want filtered.
Web of payments
Another challenge for normies is paying for all the services that power a great Nostr experience.
- Quality relays
- Hosting images/videos
- Usernames (NIP 05)
- Translation
- Zaps
Currently there are free or custodial solutions but those can’t be run as charities forever, especially if there’s a surge of adoption.
Users are already not used to paying directly for social media, and independently paying subscriptions for all these services, without even being able to use a credit card, is jarring.
The problem of users paying with fiat but service providers receiving sats or a different fiat currency may soon be solved by Strike, but this is an area of underdevelopment.
Super-clients
For the freedom tech literate among us, it makes sense to sign up for services independently and manage your own self-custody lightning node/wallet to pay for services with sats. When you add up all the services this may be more expensive, and it’s definitely more work to manage, but those costs are worth it to many of us.
To service normies, we may see the development of super-clients. Where larger, well-funded clients (Primal, Nostr.Band, Damus, Amethyst, or even new players) may start to build services into the client to give an all-in-one solution for normies.
These super-clients may provide a NIP-05 address host, a hosted nsecBunker with a username/password interface, image/video hosting, translation, a quality relay, discoverability relays, a few optional managed content moderation schemes, etc. All paid for via a hosted custodial lightning wallet the user can load with fiat but spend in sats.
Despite its trade-offs, this will make more sense to normies than the current paradigm. From what I can tell, we’re already seeing Primal working towards this level of capability. I imagine it’s very tempting for any team who wants their client to achieve scale. On balance this may not be a bad thing, but there are trade-offs to manage to minimize centralization.
Maintaining decentralization and censorship resistance
In a world that has super clients, they would be centralizing forces. They may even start to break from the protocol to build features they feel are necessary or give them a competitive advantage. This would have the side effect of decreasing interoperability and therefore decentralization.
To maintain the heart of Nostr, censorship resistance and freedom, while still offering an experience that can handle the needs of normies, it’s important to keep pressuring centralizing players to at least maintain perfect interoperability.
Users on super-clients should always be able to opt-out of aspects of the all-in-one solutions offered by these super-clients. If someone joins a super-client and wants to swap out the wallet, or the provider of content moderation, they need to be able to do that.
Users on super-clients should always be able to use other Nostr clients. If a user has their keys in a Primal nsecBunker they should be able to use that to log in to other clients and use them without difficulty.
Symbiosis with Normies
There is an exciting future equilibrium where everyone in the world is more free because of the work being done today on Nostr. A future that moves the needle for a good portion of humans requires us to find ways to onboard normies.
Normies bring more usage, and the possibility of more monetization. Through monetization we can attract more development and fund even more ambitious projects. Can’t wait to see it. 🙂
Until next time 🫡
If you want to see something highlighted, if we missed anything, or if you’re building something we didn’t post about, let us know. DMs welcome at nostr:npub19mduaf5569jx9xz555jcx3v06mvktvtpu0zgk47n4lcpjsz43zzqhj6vzk
Stay Classy, Nostr.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28OP_CHECKTEMPLATEVERIFY and the "covenants" drama
There are many ideas for "covenants" (I don't think this concept helps in the specific case of examining proposals, but fine). Some people think "we" (it's not obvious who is included in this group) should somehow examine them and come up with the perfect synthesis.
It is not clear what form this magic gathering of ideas will take and who (or which ideas) will be allowed to speak, but suppose it happens and there is intense research and conversations and people (ideas) really enjoy themselves in the process.
What are we left with at the end? Someone has to actually commit the time, put in the effort, and come up with a concrete proposal to be implemented on Bitcoin, and whatever the result is it will have trade-offs. Some great features will not make it into this proposal, others will make it in a worsened form, and some will be contemplated very nicely; there will be some extra costs related to maintenance or code complexity that will have to be accepted. Someone, a concrete person, will decide upon these things using their own personal preferences and biases, and many people will not be pleased with their choices.
That has already happened. Jeremy Rubin has already conjured all the covenant ideas in a magic gathering that lasted more than 3 years and came up with a synthesis that has the best trade-offs he could find. CTV is the result of that operation.
The fate of CTV in popular opinion, illustrated by the thoughtless responses it has evoked such as "can we do better?" and "we need more review and research and more consideration of other ideas for covenants", is a preview of what would probably happen if these suggestions were followed again and someone spent the next 3 years considering ideas, talking to other researchers, and coming up with a new synthesis. Again, that person would be faced with "can we do better?" responses from people who were not happy enough with the choices.
And unless some famous Bitcoin Core or retired Bitcoin Core developers were personally attracted by this synthesis then they would take some time to review and give their blessing to this new synthesis.
To summarize the argument of this article: the actual issue in the current CTV drama is that there exist hidden criteria for proposals to be accepted by the general community into Bitcoin, and no one has these criteria clear in their minds. It is not as simple nor as straightforward as "do research", nor is it as humanly impossible as "get consensus"; it has a much bigger social element to it, but I also do not know the exact form of these hidden criteria.
This is said not to blame anyone -- except the ignorant people who are not aware of the existence of these things and just keep repeating completely false and unhelpful advice to Jeremy Rubin, and who are not self-conscious enough to ever realize what they're doing.
-
@ 00000000:3acedba7
The dictator of Venezuela, Nicolás Maduro, is threatening to start a war on our continent. Trying not to go on too long, I'm going to touch on certain recent historical facts that, as I see things, have brought us to where we are today.
In the 2000s the late dictator Hugo Chávez began to show himself as the dictator he was, and facing the growing wave of criticism around the world he wanted to clean up his reputation with money, so he oriented his foreign policy toward obtaining the support of as many countries of the American continent as possible. To this end he made donations to left-wing governments and created Petrocaribe [1] in order to sell oil on credit, interest-free, to member states, which was a smart move since there are many islands in the Caribbean, each of them is a country, and each country has the right to vote in international bodies. We can say that, with oil at historic prices, Chávez ran a checkbook operation in exchange for their support in the OAS and the UN. As for Guyana, Chávez, at Fidel Castro's request, allowed it to exploit the oil deposits located in the region that is today in dispute between Venezuela and Guyana; this region is known as the Esequibo.
Meanwhile, oil production was in decline due to Chavismo's mismanagement, but since the price per barrel was above 100 USD, it didn't matter; Chavismo kept squandering money as if there were no tomorrow.
Venezuela went from producing 3 million barrels of oil per day in 1999 to producing 703,658 (2022) [2], while Guyana in 2022 produced 250,000 barrels per day.
I sincerely believe Maduro has two reasons to invade Guyana:
- To gain control of those 250,000 barrels that Guyana exports, and the reserves
- To change his place in history from "the dictator of Venezuela" to the president who recovered the Esequibo
I sincerely believe that if a poll were held in Venezuela today, the vast majority would say the Esequibo should be recovered, but probably nobody has stopped to think about what recovering it implies.
Currently 120 thousand Guyanese live in that territory. What would happen to these people?
We Venezuelans have seen how the armed forces have acted when confronting dissident students; we have witnessed accounts in which the victims have described torture [3], rape [4], murders [5], disappearances, and other crimes against humanity. The military has done this both to 13-year-old children who went out to protest and to adults. What can we expect of their conduct in the middle of the jungle, where they will be able to carry out killings without leaving witnesses, where there will be no people filming from buildings as happened during the protests in Caracas?
The miserable Venezuelan military will have a green light to commit unimaginable atrocities. I sincerely hope a war does not break out, because what is coming would be a genocide. That would be the cost of recovering the Esequibo: the destruction of thousands of people's lives. That is why I do not agree with another stupid "patriotic" war; whether or not we are right, people's lives are worth more than any dispute.
[1] https://es.wikipedia.org/wiki/Petrocaribe
[2] https://en.wikipedia.org/wiki/List_of_countries_by_oil_production
[3] https://www.youtube.com/watch?v=Vn2vphuy35U
[4] https://www.youtube.com/watch?v=rAP2sYk2AsY
[5] https://www.youtube.com/watch?v=jt0INSx4CEQ
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28idea: clarity.fm on Lightning
Getting money from clients very easily and dispatching that money to "world class experts" (what a silly way to market things, but I guess it works) very easily: these are jobs for Bitcoin and the Lightning Network.
EDIT 2020-09-04
My idea was that people would advertise themselves, so you would book an hour with people you knew already, but it seems that clarity.fm has gone the route of offering a "catalog of experts" to potential clients, probably full of verification processes and marketing. So I guess this is not a thing I can do.
Actually I did https://s4a.etleneum.com/ (on Etleneum), which is somewhat similar, but of course it doesn't have the glamour and network effect and marketing -- also it's just text, while Clarity has fancy calls.
Thinking about it, this is just a simple and obvious idea: just copy things from the fiat world and make them on Lightning. But maybe it is still worth pointing these out, as there are hundreds of developers out there trying to make yet another lottery game with Lightning.
It may also be a good idea to not just copy fiat-businesses models, but also change them experimenting with new paradigms, like idea: Patreon, but simple, and without subscription.
-
@ d8a2c33f:76611e0c
2023-11-27 16:52:20This is part of a series of articles that I am writing to better understand and explain how AI agents will transform how we create and consume content in the near future. If you want to start from the first article, please start here.
In the previous article, 2: What makes an AI agent unique?, I discussed how AI agents become unique through their specific instructions, domain knowledge, and actions.
Let's delve into the concept of knowledge. Simply put, you visit a dentist for teeth cleaning, an accountant for financial management, and a primary care physician for health check-ups. In each scenario, you seek their services because they possess the domain expertise that you require.
Consider another instance. Suppose you use a Software as a Service (SaaS) platform for payments and encounter a question. Your first instinct might be to initiate a Google search, which often results in a barrage of ads and irrelevant site links. After some scrolling, you find the correct website, click on it, and are presented with FAQs and support articles. Now, you must sift through this information to determine if it answers your question.
Imagine, however, if there was an AI agent equipped with FAQs, blogs, articles, and other useful information. All you would need to do is pose your question to this AI agent, and it would provide the appropriate information. This approach not only saves time and effort for the user but also enhances the overall user experience.
The importance of domain-specific knowledge in AI agents cannot be overstated. These agents, armed with specialized knowledge, can offer more accurate and insightful responses, thereby revolutionizing the way users seek information and interact with platforms. This is the future of AI - niche applications that are tailored, efficient, and truly transformative.
To see how this works, we created this functionality on PlebAI, where anyone can create AI agents. When creating them, you can add knowledge by attaching a PDF document, text file, or even a URL from a website. Behind the scenes, these data are retrieved, transformed into vectors, and stored in a vector store. This data can be both public and private, as it is stored securely and only shared with the LLM (Large Language Model) in the form of embeddings.
With this knowledge, the LLM can easily answer the user's questions correctly.
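Roughly, the retrieval step looks like this (the function names and the vector store are placeholders, not PlebAI's actual code):

```typescript
// Placeholder sketch of retrieval-augmented answering. embed(), search(),
// and complete() stand in for whatever embedding model, vector store,
// and LLM the platform actually uses.
declare function embed(text: string): Promise<number[]>;
declare function search(vector: number[], topK: number): Promise<string[]>;
declare function complete(prompt: string): Promise<string>;

async function answer(question: string): Promise<string> {
  const queryVector = await embed(question);       // question -> vector
  const passages = await search(queryVector, 3);   // nearest stored chunks
  const prompt =
    `Answer using only this context:\n${passages.join("\n---\n")}\n\n` +
    `Question: ${question}`;
  return complete(prompt);                         // grounded answer
}
```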
Here's how to add knowledge:
- Go to PlebAI
- Click 'Create Text AI agents'
- Fill in all the necessary information
- Add knowledge in the form of documents or website URLs
- Start using them either privately or publicly
Once you add all the necessary information, the AI agent is now equipped to answer any user question related to the stored knowledge. You can also enable user web browsing so that it can retrieve any additional knowledge available on their website. Here's the response from the Zaprite Help AI agent answering a question from its FAQ.
Marc Andreessen, a renowned entrepreneur and investor, has been quoted as saying, "The content of each new medium is the old medium." This quote encapsulates the idea that each new form of media tends to repurpose the content of its predecessor.
With this view, we can start to push website and app content into AI agents and make them available inside any chat interface.
Let me know what you think and if you have any feedback or comments.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28superform.xyz
This was an app that allowed people to create micro apps powered by forms. Actually just one form, I believe. The idea was for the micro apps to be really micro.
For example, you want a list of people, but you can have at most 10 people in the list. Your app could keep a state with the list of people already added and reject any further submissions above the specified limit. This would be done with 3 lines of code and provide an automatic form for people to fill in with the expected data.
Another example: you wanted to create a list of people that would go to an event, and each would have to bring one item from a list. You created an initial state with the list of items that should be brought, then specified a form where people could write their names and select the item they would bring, then code that, for each submitted form, added the name of the person plus the item they would bring to the state while also removing the selected item from the available items. Also 3 or 4 lines of code.
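A sketch of what that second micro app might have looked like (the state and handler shapes are my guesses at the platform's interface, not its real API):

```typescript
// Imagined superform-style micro app: an event signup list where each
// guest claims one still-available item. State and handler shapes are
// guesses at what the platform's interface might have looked like.
type State = { items: string[]; going: { name: string; item: string }[] };

const initial: State = {
  items: ["ice", "cups", "charcoal", "cake"],
  going: [],
};

function onSubmit(state: State, form: { name: string; item: string }): State {
  if (!state.items.includes(form.item)) throw new Error("item taken");
  return {
    items: state.items.filter((i) => i !== form.item),  // remove claimed item
    going: [...state.going, form],                      // add the guest
  };
}
```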
Something like this can't be done anywhere else. But of course it would also be arcane and frighten normal people and so on (although I do believe some "normal" people would be able to use such a thing if they needed it, just like they learn to write complex Excel formulas and still don't call themselves programmers).
See also
- Etleneum, as it is basically the same core idea of a mutable state that is affected by calls, but Etleneum introduces (and actually forces the usage of) money, both in the sense that it acts as an escrow for contract results and that it mandates the payment of a small amount with each call, so it ends up not serving the same purposes.
-
@ d1e60465:c5dee193
2023-11-24 14:01:31Everyone who has ever used Bitcoin knows the basics about a wallet and the importance of the recovery phrase (the 12 or 24 words). That recovery phrase is unique for the whole wallet. We are also aware that addresses should not be reused, to protect our privacy, so our wallet generates virtually infinite addresses. Now, how is that possible with only a single recovery phrase? Is there something else, besides that phrase, that determines where our bitcoins are?
This is where HD wallets (hierarchical deterministic) and "derivation paths" come in. That said, there is no better time than now to grab our shovel and go bury some riches.
Burying our wealth
Suppose you have a treasure you want to safeguard, and you come up with the brilliant idea of burying it (pirates did it, why not you?). The first thing you do is find a very, VERY large plot of land, choose a location as random as possible, take the shovel, dig the hole, and bury the treasure. Finally, and very importantly, you write down the coordinates of that location (private key) so you can come back for your riches when you need them.
Now suppose time passes and you keep generating riches you want to protect. Following your original idea, you return to the same plot to dig more holes, bury more treasure, and write down more coordinates. If this is repeated several more times, there will come a point when keeping so many coordinates becomes inconvenient for proper record-keeping, and risky, because if you lose one of them you lose a treasure.
Shovel in hand, sitting over a hole and staring at the horizon, a brilliant idea occurs to you. You could write down the coordinates of only a single arbitrary point within that plot, different from all the others (master private key), where you will bury nothing. But you make all the holes where you will bury treasure be located relative to that initial point in a specific way (derivation path). That way you could reach any hole starting from the original location. For example: "the treasures will be located at intervals of 10 steps to the north from the origin point". You could even define certain fixed, complex routes depending on the treasure. For example: "walking north, every 10 steps, holes with gold; walking east, every 5 steps, jewels; and for miscellaneous gifts you will go in intervals of 4 steps to the west, but each time you dig you will walk 2 more to the south (that is, as if tracing a letter L)".
This whole mechanism seems complex at first glance, but it has a fundamental advantage: the routes do not have to be kept secret, because without the origin coordinate nobody can find the treasure. Thanks to this peculiarity, we could define route patterns that are public and shared by everyone, and even turn them into a global standard (BIP44 or BIP86). In other words, the paths to reach each hole are public knowledge and can be written down in many places, while each individual's starting coordinates remain secret.
To recap, keep in mind that to find the treasure we need the original coordinate and the route of steps. Physically and visually, we could think of it as a transparent sheet on which the routes are drawn, and another sheet with a (secret) map marking the origin point. Overlaid, the two provide the information needed (HD wallet) to find all the buried treasures. Both are necessary, but only one of them needs to be secret.
Trading the shovel for cryptography
In Bitcoin, this idea of having a master private key (the origin point) and then different derivation paths was introduced in BIP 32. Explained in (very) simple terms, we can start from the fact that our master private key is nothing more than a number, and what this mechanism does is perform mathematical operations on that number to obtain new numbers: the child keys.
In practice, a common situation that causes panic among people starting out in Bitcoin occurs when they restore a wallet using the recovery phrase and the wallet does not show their balance. The terror of seeing "0 BTC" instead of their savings stops time for a few seconds. However, paying a little more attention to the situation would reveal that not only is the balance shown as 0, but none of the previously made transactions are shown either. That is, it is as if we were looking at another wallet. What happens in these cases is that, although the recovery phrase, and therefore the master private key (original coordinate), was restored correctly, the wallet in question uses another derivation path, another route. So even though we start walking from the same point, we are walking in the wrong direction to find the treasure. This is because, although derivation path standards exist, not all wallets respect them 100%. This site details the different paths used by each of the known wallets on the market, to help avoid unpleasant situations: https://walletsrecovery.org
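As a minimal sketch of the mechanics, assuming the commonly used bip39 and bip32 npm packages (exact import shapes vary by version; the mnemonic below is a well-known test phrase, not a real wallet):

```typescript
// Minimal HD derivation sketch, assuming the bip39 and bip32 npm
// packages. The mnemonic is a placeholder test phrase.
import * as bip39 from "bip39";
import { BIP32Factory } from "bip32";
import * as ecc from "tiny-secp256k1";

const bip32 = BIP32Factory(ecc);

// The recovery phrase is the secret "origin coordinate"...
const mnemonic =
  "abandon abandon abandon abandon abandon abandon " +
  "abandon abandon abandon abandon abandon about";
const seed = bip39.mnemonicToSeedSync(mnemonic);
const master = bip32.fromSeed(seed);

// ...and the derivation path is the public "walking route" (BIP86 here).
// A wallet using a different path lands on different addresses.
const child = master.derivePath("m/86'/0'/0'/0/0");
console.log(child.publicKey.toString("hex"));
```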
There are more particularities not covered by the analogy, such as the fact that this scheme also includes a master public key that can have derivations and thus generate multiple addresses (and monitor them) but without access to each one's private keys to move the associated funds. There is also the concept of "hardened" keys, which refers to sub-keys derived from the main ones with an adjusted algorithm to avoid leaking sensitive data that would let an attacker reconstruct our master private key. All these details are explained in the BIP for anyone interested.
Epilogue
In Bitcoin, a master private key would be the equivalent of choosing an origin coordinate anywhere on the whole planet (including water)… out of more than 900000000000000000000000000000000000000000000000000000000000000 planet Earths. We can rest assured that the coordinate we choose will not be chosen by anyone else.
Useful links
- https://walletsrecovery.org Wallets Recovery — Derivation paths by wallet
- https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki BIP32 — “Hierarchical Deterministic Wallets”
- https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki BIP44 — “Multi-Account Hierarchy for Deterministic Wallets”
- https://github.com/bitcoin/bips/blob/master/bip-0086.mediawiki BIP86 — “Key Derivation for Single Key P2TR Outputs”
Este artículo está inspirado en un hilo de Twitter que hice en 2021: https://twitter.com/diegogurpegui/status/1408931266616041475
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The scientific method
The scientific method cannot be applied except in half a dozen cases, and yet here we are, thinking about it for everything.
"Formulate hypotheses and test them independently," "obtain a statistically significant amount of data": test, collect data, measure.
It's not that everyone suddenly decided to calculate standard deviations; rather, it has become common for the more cultured, Freakonomics-level people to think they must test and collect data, and never, ever trust their "intuition" or, worse, a line of reasoning that may seem right but is in fact enormously misleading.
Yes, it is true that reasonings with apparently sensible explanations are presented to us every day -- for an easy example just picture a newspaper columnist, or even an innocent newspaper story; better yet, think of a GloboNews commentator -- and yes, it is true that most of those explanations are false.
What is wrong is to think that the only thing that counts is testing hypotheses. You cannot test the apparently sensible explanation the taxi driver gives you about the Brazilian crisis; should you then write it down to test later? Keep it forever in the stock of theories yet to be tested?
And the explanation about the little mesh screens that save water when installed on the faucet? That one can be tested. So are you going to buy a water meter and leave the faucet running for 5 hours with the screen, then 5 hours without it? Obviously that won't work if you open the tap the same amount; you will need a better criterion: the satisfaction of the person washing their hands with the final result versus the amount of water spent. Then you would need many people, but satisfaction is an immeasurable thing; it's no use even trying to interview people before and after. So what is the right thing to do? Look for a scientific study published in a quality journal (because there are journals that accept computer-generated studies, so you'd better be careful) that talks about the little screens? How do you know the screen studied is the same one you bought? And now that you've already bought it, does the result of the experiment even matter? (Of course it does: the screen might make you spend more water; you will never know until you run the experiment.)
Why not, instead of condemning all reasoning as misleading and ordering people to run scientific experiments, teach them to reason correctly?
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28idea: An open log-based HTTP database for any use case
A single, read/write open database for everything in the world.
- A hosted database that accepts anything you put on it and stores it in order.
- Anyone can update it by adding new stuff.
- To make sense of the data you can read only the records that interest you, in order, and reconstruct a local state.
- Each updater pays a fee (anonymously, in satoshis) to store their piece of data.
- It's a single store for everything in the world.
Cost and price estimates
Prices for guaranteed storage for 3 years: 20 satoshis = 1 KB; 20 000 000 satoshis = 1 GB.
https://www.elephantsql.com/ charges $10/mo for 1GB of data, 3 600 000 satoshis for 3 years
If 3 years is not enough, people can move their stuff elsewhere when it's time, or pay to keep specific log entries for more time.
Other considerations
- People provide a unique id when adding a log so entries can be prefix-matched by it, like `myapp.something.random`
- When fetching, instead of just fetching raw data, add (paid?) option to fetch and apply a `jq` map-reduce transformation to the matched entries
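A hypothetical sketch of what such an API could look like. Nothing here is a real service: the host, endpoint names, and fields are all invented for illustration.

```shell
# Append an entry under a prefixed id (illustrative endpoint and payload)
curl -X POST 'https://db.example.com/append' \
  -d '{"id": "myapp.something.random", "data": "some app-specific record"}'

# Read back, in order, only the records matching a prefix,
# optionally applying a jq transformation on the server side
curl 'https://db.example.com/read?prefix=myapp.something.random&jq=.data'
```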
-
@ 03742c20:2df9aa5d
2023-11-19 04:19:18New nostr users who aren't familiar with bitcoin may not know about the Lightning Network yet. That's not a big problem; you can go study it later if you're interested. This article just wants to get you started receiving and sending zaps and having fun with them, which is the V4V spirit of Nostr.
When we set up our profile we see a field for a lightning address. Where do we get one? Is it an email or something? It is simply the address of our lightning wallet; to be able to receive zaps, we have to add it here.
So where do we get one? From a lightning wallet, of course.
There are actually two main types of lightning wallet: custodial wallets and non-custodial wallets. In this article I will recommend custodial wallets, for the easiest possible start.
Let's get started.
Wallet of Satoshi
This is the simplest wallet: just install the app and start using it. I do recommend linking your email as a backup for when you move to a new phone.
Then tap "receive" and your lightning address will be displayed; just copy it into your profile and you can receive zaps right away.
Blink wallet
Install it, open the app, and create a new account; it will ask for your phone number to back up your wallet data.
Then tap the three-line menu at the top right and tap "your Blink address"; it will let you choose a name for your lightning address.
Then copy that lightning address into your nostr account to start receiving zaps.
This is a short introductory article; I hope it is useful to new users.
Have fun using nostr, and don't forget to come back and share your experience with us.
Thanks everyone for stopping by to read; see you next time.
#lightning #Zap #Nostr #Siamstr
-
@ 21b41910:91f41a5e
2023-11-17 22:25:34An Intro to Creating Reports for Invoices and Payments for the LND tool
by @vicariousdrama 714130–714326
Summary
The purpose of this article is to guide users through use of some of the command line operations to query their lightning node, and produce summary reports using basic formatting and JSON processing tools. This article assumes that you already have a lightning node running LND, and access to the command prompt. If you don’t yet have a lightning node, consider checking out assorted node projects, or even setting up a node using voltage.cloud. Commands are presented in a way that builds up the overall result gradually so that we can better understand each part that goes into the result. You can jump to the end of major sections if you just want to copy-pasta the end result.
Change History
| Date | Description |
| --- | --- |
| 2022-03-11 | Initial Document |
| 2023-02-26 | Conversion to Markdown |
| 2023-11-17 | Crosspost to Nostr via Yakihonne |
Invoices
Invoices are payment requests initiated from a lightning node that another party is expected to pay. How to create them is out of scope of this article; it is often done using applications or interfaces like ThunderHub, Ride The Lightning, or a Lightning capable wallet.
Listing Invoices
From the command line, we can use the `lncli` tool, the control plane for your lightning network daemon (lnd), with the `listinvoices` command. This will output some invoices in JSON format. We can use the `jq` command to easily apply some formatting and syntax coloring to improve readability. Later we will use this tool for filtering as well.
If there are invoices, they will be listed in JSON format with several fields for each. While all these fields have their purpose, the ones we are most concerned about for reporting are `memo`, `value`, `settled`, `creation_date`, `settle_date`, `amt_paid_sat`, and `state`. In follow-up sections, these fields will be called out in the commands where they are used.
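A minimal sketch of the listing command, followed by an illustrative excerpt of the JSON it returns. The field names match the ones above; the values are made up:

```shell
lncli listinvoices | jq '.'
```

```json
{
  "invoices": [
    {
      "memo": "coffee",
      "value": "1500",
      "settled": true,
      "creation_date": "1646992800",
      "settle_date": "1646993100",
      "amt_paid_sat": "1500",
      "state": "SETTLED"
    }
  ]
}
```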
Only List Paid Invoices
This is the command to filter the results to only show those which were paid:

```shell
lncli listinvoices | jq '.invoices[] | select(.state=="SETTLED")'
```
Only List Failed Invoices
This command alters the filter to show those which were eventually cancelled due to timeout before being paid:

```shell
lncli listinvoices | jq '.invoices[] | select(.state=="CANCELLED")'
```
List Invoices Paid for a Period of Time
To limit the invoices to a period of time, we will establish some variables and update the filter of those invoices that are being selected. The date command will return the seconds since epoch, and start on the first second of that day. Each day is comprised of 86400 seconds. For the end date, we will want to advance the result by one day, minus one second to ensure the entire day is included in the period.
Similarly, we can limit the invoices selected to a single month. A sketch of both period setups follows; it assumes GNU date, and the variable names are illustrative:
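```shell
# One week: `date +%s` yields epoch seconds at the first second of the given
# day; a day is 86400 seconds, so the end boundary is advanced by one day
# minus one second to include the whole final day.
REPORT_DATE_BEGIN=$(date -d "2022-03-01" +%s)
REPORT_DATE_END=$(( $(date -d "2022-03-07" +%s) + 86400 - 1 ))

# A single month works the same way, ending one second before the next month.
REPORT_DATE_BEGIN=$(date -d "2022-03-01" +%s)
REPORT_DATE_END=$(( $(date -d "2022-04-01" +%s) - 1 ))

# Apply the period to the selection filter
lncli listinvoices | jq --argjson b "$REPORT_DATE_BEGIN" --argjson e "$REPORT_DATE_END" \
  '.invoices[] | select(.state=="SETTLED" and (.settle_date|tonumber) >= $b and (.settle_date|tonumber) <= $e)'
```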
Controlling the Output for Invoices
Up until now, our output has just been JSON. For reporting purposes, we will begin cleaning up the output to report information on a line-by-line basis. With the selected objects, we can further use the jq command line JSON processor to output values concatenated into a string. A sketch of such a command, with made-up result lines:
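```shell
lncli listinvoices | jq -r '.invoices[] | select(.state=="SETTLED") | .memo + " " + .value'
# illustrative output:
#   coffee 1500
#   domain renewal 9800
```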
Let’s use some trivial formatting to right align the value field by creating spaces padded on the left side. This makes it easier to read numbers when there are multiple rows. In this example, we take the value field, convert to a string using a builtin tostring function. This will then be written out with spaces before it up to 8 minus the length of the value using the built in length function. For example, if the value is 123, it has a length of 3, and to give it a total overall length of 8, 5 spaces will be written in front.
A sketch of the revised command, showing the additional padding of spaces on the left side of the value:
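```shell
# jq repeats a string when multiplied by a number, so this writes out
# (8 - length) spaces in front of the value, right-aligning it.
lncli listinvoices | jq -r '.invoices[]
  | select(.state=="SETTLED")
  | .memo + (" " * (8 - (.value|tostring|length))) + (.value|tostring)'
# illustrative output:
#   coffee     1500
#   books      9800
```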
To show our values lined up, let's run the same report with all invoices by changing the begin date to an earlier time. Your results will vary depending on how many invoices you've had in the past, but the rows will follow the same right-aligned pattern as above.
Capturing Filtered Results for Invoices
Going forward, we’ll be performing multiple queries against the dataset. To avoid putting unnecessary load on the service itself, and ensure that we are always working with the same data between queries, we can capture the results to a variable and use that in our formatting.
Here, we capture the data to a variable named `REPORT_DATA` (a sketch, reusing the period variables from before):
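```shell
# Capture the filtered invoice objects once, then reuse them for formatting
REPORT_DATA=$(lncli listinvoices | jq --argjson b "$REPORT_DATE_BEGIN" --argjson e "$REPORT_DATE_END" \
  '.invoices[] | select(.state=="SETTLED" and (.settle_date|tonumber) >= $b and (.settle_date|tonumber) <= $e)')
```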
Now let's use that, and change our output to display the date in a readable format, followed by the memo field, and finally the amount right-aligned.
To convert the date, we take the string, convert it to a number using the builtin `tonumber` function, and then pass that through `todateiso8601`, another builtin function, before parsing the year, month, and day portions from it. The memo field will be left-aligned: whereas right-aligned fields get extra spaces written before the value, here we write the extra spaces after it.
Finally, for the value field, let's extend the width, which will better account for a header later on. A sketch of the revised formatting, with illustrative column widths and output:
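```shell
echo "$REPORT_DATA" | jq -r '
  (.settle_date|tonumber|todateiso8601|.[0:10]) + "  "
  + .memo + (" " * (20 - (.memo|length)))
  + (" " * (10 - (.value|tostring|length))) + (.value|tostring)'
# illustrative output:
#   2022-03-11  coffee                      1500
#   2022-03-14  books                       9800
```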
Creating a Header for Invoices
We can add a header to the report to put context to the data presented. Lets add both a report header for the period of time, as well as column names.
For the period of time, we’ll convert the numeric start and end date in seconds back to a readable time stamp. The column headers will be spaced out to match that of the data, and finally, we’ll create a line between the header and the data.
A sketch of the header commands, under the same assumptions as above:
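```shell
echo "Invoices for period $(date -d @"$REPORT_DATE_BEGIN" +%Y-%m-%d) through $(date -d @"$REPORT_DATE_END" +%Y-%m-%d)"
echo ""
echo "Date        Memo                     Value"
echo "------------------------------------------"
```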
Let’s combine them so the command output runs together without interspersing the command line prompt.
The header looks much cleaner now.
Calculating the Total for a Footer for Invoices
It would be helpful if we summarized the total sats received for the period. We can do this by taking the inputs, converting the value to a number, and reducing the array: initializing a temporary variable and adding each item's value to it. That may sound more complicated than it is, but don't worry, it becomes clear with the command and results. We store the result in another variable and echo it out to see the value (a sketch, with a made-up result):
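```shell
# Slurp the object stream into an array, map values to numbers, then reduce
REPORT_TOTAL=$(echo "$REPORT_DATA" | jq -s 'map(.value|tonumber) | reduce .[] as $v (0; . + $v)')
echo $REPORT_TOTAL
# e.g. 131300
```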
With the total, we can now put it into a footer. We'll draw another line closing out the data and report the total. A sketch of the footer:
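```shell
echo "------------------------------------------"
echo "Total                               $REPORT_TOTAL"
```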
Overall Report of Invoices
Now let's combine all the portions of the above into one simple set of commands. We can take this and save it to a file for reuse later. A sketch of the combined script, under the same assumptions as the pieces above:
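```shell
#!/bin/sh
# Invoice report sketch: period boundaries, data capture, header, rows, footer.
# Variable names, widths, and dates are illustrative.
REPORT_DATE_BEGIN=$(date -d "2022-03-01" +%s)
REPORT_DATE_END=$(( $(date -d "2022-04-01" +%s) - 1 ))

REPORT_DATA=$(lncli listinvoices | jq --argjson b "$REPORT_DATE_BEGIN" --argjson e "$REPORT_DATE_END" \
  '.invoices[] | select(.state=="SETTLED" and (.settle_date|tonumber) >= $b and (.settle_date|tonumber) <= $e)')
REPORT_TOTAL=$(echo "$REPORT_DATA" | jq -s 'map(.value|tonumber) | reduce .[] as $v (0; . + $v)')

(
  echo "Invoices for period $(date -d @"$REPORT_DATE_BEGIN" +%Y-%m-%d) through $(date -d @"$REPORT_DATE_END" +%Y-%m-%d)"
  echo ""
  echo "Date        Memo                     Value"
  echo "------------------------------------------"
  echo "$REPORT_DATA" | jq -r '
    (.settle_date|tonumber|todateiso8601|.[0:10]) + "  "
    + .memo + (" " * (20 - (.memo|length)))
    + (" " * (10 - (.value|tostring|length))) + (.value|tostring)'
  echo "------------------------------------------"
  echo "Total                               $REPORT_TOTAL"
)
```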
Payments
For reporting with payments, we’ll assume some of the same concepts that were outlined in the invoices section.
Listing Payments
To list successful payments, we can use the `lncli` command with the `listpayments` operation, and then follow up with filtering through the `jq` JSON processor.
If there are payments, they will be listed in JSON format with several fields for each. While all of these fields have their purpose, the ones we are most concerned about for reporting are `value`, `creation_date`, `fee`, and `status`. In follow-up sections, these fields will be called out in the commands where they are used.
Within the `htlcs` field (hashed timelock contracts), there are more details about the routing path the payment took, and fees down to the millisat that were paid for each hop. For this basic report, we'll stick to the basic rollup of fees rounded to the next sat.
Capture the Filtered Payment Results
Let’s setup our reporting begin and end dates, and capture the matched payments to a variable.
Capture the Totals for Payments
Next, for reporting purposes, let's sum the total value paid, the fees, and the overall total. A sketch:
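```shell
# Sum value and fee fields separately, then combine for the overall total.
# This assumes whole-sat string values as in older lnd output.
PAY_TOTAL_VALUE=$(echo "$PAY_DATA" | jq -s 'map(.value|tonumber) | add')
PAY_TOTAL_FEES=$(echo "$PAY_DATA" | jq -s 'map(.fee|tonumber) | add')
PAY_TOTAL=$(( PAY_TOTAL_VALUE + PAY_TOTAL_FEES ))
```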
Overall Report of Payments
Similar to the invoices report, we prepare a payments report with a header block followed by data lines, and then a footer with the calculated sums.
Here’s a sample report output
Conclusion
Using the command line, we built up a report for invoices created with a lightning service. Basic filtering by date periods allows for flexibility in our reports. Textual alignment of data, formatting dates, and creation of headers and footers help make for a useful summary report. We then applied the same concepts to creation of a report for payments made. You may consider using this as a stepping stone to more reports and automation. If you think I should create more guides like this, or expand on it, please leave a comment on the article.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28A response to Achim Warner's "Drivechain brings politics to miners" article
I mean this article: https://achimwarner.medium.com/thoughts-on-drivechain-i-miners-can-do-things-about-which-we-will-argue-whether-it-is-actually-a5c3c022dbd2
There are basically two claims here:
1. Some corporate interests might want to secure sidechains for themselves and thus they will bribe miners to have these activated
First, it's hard to imagine why they would want such a thing. Are they going to make a proprietary KYC chain only for their users? They could do that in a corporate way, or with a federation, like Facebook tried to do, and that would provide more value to their users than a cumbersome pseudo-decentralized system in which they don't even have powers to issue currency. Also, if Facebook couldn't get away with their federated shitcoin because the government was mad, what says the government won't be mad with a sidechain? And finally, why would Facebook want to give custody of their proprietary closed-garden Bitcoin-backed ecosystem coins to a random, open and always-changing set of miners?
But even if they do succeed in making their sidechain and it is very popular such that it pays miners fees and people love it. Well, then why not? Let them have it. It's not going to hurt anyone more than a proprietary shitcoin would anyway. If Facebook really wants a closed ecosystem backed by Bitcoin that probably means we are winning big.
2. Miners will be required to vote on the validity of debatable things
He cites the example of a PoS sidechain, an assassination market, a sidechain full of Nazis, a sidechain deemed illegal by the US government and so on.
There is a simple solution to all of this: just kill these sidechains. Either miners can take the money from these to themselves, or they can just refuse to engage and freeze the coins there forever, or they can even give the coins to governments, if they want. It is an entirely good thing that evil sidechains or sidechains that use horrible technology that doesn't even let us know who owns each coin get annihilated. And it was the responsibility of people who put money in there to evaluate beforehand and know that PoS is not deterministic, for example.
About government censoring and wanting to steal money, or criminals using sidechains, I think the argument is very weak because these same things can happen today and may even be happening already: i.e., governments ordering mining pools to not mine such and such transactions from such and such people, or forcing them to reorg to steal money from criminals and whatnot. All this is expected to happen in normal Bitcoin. But both in normal Bitcoin and in Drivechain decentralization fixes that problem by making it so governments cannot catch all miners required to control the chain like that -- and in fact fixing that problem is the only reason we need decentralization.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28idea: "numbeo" with satoshis
This site has a crowdsourced database of cost-of-living in many countries and cities: https://www.numbeo.com/cost-of-living/ and it sells the data people write there freely. It's wrong!
It could be a fruitful idea to pay people satoshis to provide data.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The flaw of "just use paypal/coinbase" arguments
For the millionth time I read somewhere that "custodial bitcoin is not bitcoin" and that "if you're going to use custodial, better use Paypal". No, actually it was "better use Coinbase", but I had heard the "PayPal" version in the past.
There are many reasons why using PayPal is not the same as using a custodial Bitcoin service or wallet that are obvious and not relevant here, such as the fact that you can't have Bitcoin balances on PayPal (or maybe now you can? but you can't send them around); plus all the reasons that are also valid for Coinbase, such as you having to give all your data and selfies of yourself and your government documents and so on -- but let's ignore these reasons for now.
The most important reason why it isn't the same thing is that when you're using Coinbase you are stuck in Coinbase. Your Coinbase coins cannot be used to pay anyone that isn't in Coinbase. So Coinbase-style custodianship doesn't help Bitcoin. If you want to move out of Coinbase you have to withdraw from Coinbase.
Custodianship on Lightning is of a very different nature. You can pay people from other custodial platforms and people that are hosting their own Lightning nodes and so on.
That kind of custodianship doesn't do any harm to anyone, doesn't fracture the network, doesn't reduce the network effect of Lightning, in fact it increases it.
-
@ fab018ad:dd9af89d
2023-11-14 05:09:0018. The Light of Hope
"The root problem with conventional currency is all the trust that's required to make it work. The central bank must be trusted not to debase the currency, but the history of fiat currencies is full of breaches of that trust."
–Satoshi Nakamoto
Whatever the answer to Third World poverty may be, we can be certain it is not more debt. "The poor of the world," Payer concluded, "do not need more banks like the World Bank, however benevolent. What they need is jobs that pay fair wages, governments that are accountable, the restoration of their human rights, and the independence to run their countries without foreign interference."
Yet for more than seven decades, the World Bank and the IMF have set themselves against all four of these things.
Looking ahead, Payer said that "the most important task for citizens of rich countries concerned with solidarity with other nations is to work in earnest to halt the flow of foreign aid money." The problem is that the current system is designed, and its incentives arranged, to keep this kind of money flowing. The only way to create change is to shift the entire paradigm. We already know that bitcoin can help ordinary people in developing countries gain financial freedom, helping them escape collapsing monetary systems that corrupt leaders and international financial institutions force them to use. These themes gained momentum at the meeting in Accra at the end of 2022.
Bitcoin is the opposite of the system designed by the World Bank and the IMF. But can bitcoin really change the power and resource dynamics between the core countries and the periphery?
Nabourema is full of hope, and does not understand why the left in general denounces or ignores bitcoin.
"A tool that allows people to create and access wealth independently of any controlling authority could even be seen as a tool of the left," she said. "As an activist who believes that people deserve money that honors the value of their lives and their sacrifices,"
"bitcoin is a people's revolution."
"I find it painful," she said, "that farmers in sub-Saharan Africa receive just 1 percent of the world market price of their coffee. If we could get to the point where those farmers can sell their coffee directly to buyers, without any intermediary institutions stepping in, and receive payment in bitcoin, imagine how much that would change their lives."
"Today," she said, "developing countries like ours still borrow in US dollars, but over time our currencies keep depreciating, and we end up having to come up with two or three times the original debt to repay it."
"Now imagine," she said, "if we reach a point, in the next 10 or 20 years, where bitcoin is the world's main currency and is used in business everywhere: nations borrow in bitcoin and spend in bitcoin, and every nation repays its debts in bitcoin. In that world, foreign governments cannot demand that we repay our debts in their own currency, a currency we must work hard to earn while they can easily print more of it out of thin air. And if they want to raise interest rates in their own country, it no longer instantly endangers the lives of millions of people in ours, as it did before."
"Of course," Nabourema said, "bitcoin will come with problems, like every innovation before it. But its beauty is that these problems can be fixed and improved upon through peaceful international collaboration. Twenty years ago, no one knew the internet would let us do the marvelous things we do today, and no one can say what marvelous things bitcoin will let us do 20 years from now."
"The path ahead," she said, "is a mass awakening. People will come to understand thoroughly how today's financial system works, and they will understand that an alternative exists. We have to reach the point where people can reclaim their freedom for themselves, where their lives are not controlled by state powers that can seize their liberty at any moment without consequence. With bitcoin, we are getting there, step by step."
"Since money is at the center of everything in our world," Nabourema said, "the fact that we can now have financial freedom is essential for people, and we are working to reclaim our rights in every space and every industry."
In an interview for this article, deflation proponent Jeff Booth explained that as the world moves onto a bitcoin standard, the World Bank and the IMF will be less and less willing to act as lenders, and will more likely want to become co-investors, partners, or perhaps mere trustees. As prices begin to fall, debts grow heavier in real terms, and it becomes harder for borrowers to find the money to repay them. And once the US money printer is switched off, there will be no more freshly created loans to bail anyone out. He points out that at first the World Bank and the IMF will still try to lend as before, but for the first time they will actually lose those large sums, as country after country defaults while the world shifts onto a bitcoin standard.
So they may begin to consider becoming co-investors rather than lenders, meaning they will have to care about the genuine sustainability and success of the projects they back, since they bear the risk of the investment.
Bitcoin mining is another thing that can create change.
If poor nations can turn their natural resources into money themselves, without involving foreign powers, perhaps their sovereignty will be strengthened rather than undermined as before. Bitcoin mining can run on energy from rivers, hydrocarbon fuels, sunshine, wind, geothermal heat, even the temperature differential of offshore seawater (OTEC), all abundant in emerging markets. This energy can be converted directly into the world's reserve currency, no permission required. That was never possible before, while for poor countries the debt trap seemed inescapable, growing larger every year.
Perhaps investing in accumulating bitcoin as a reserve, and in using bitcoin's infrastructure (which is, at its core, anti-fiat), is the way out, and a path to striking back at the system that has long oppressed them.
Booth says bitcoin can short-circuit the old system, which benefits rich countries by plundering the wages of poor ones, until it breaks. In the old system, the peripheral countries had to be sacrificed to protect the core. In the new system, periphery and core can work together. Right now, he says, the US dollar system impoverishes people by suppressing wages in the periphery, but when money is made equal through a neutral, standard system, a new dynamic is created: a single monetary standard inevitably pulls wages toward one another instead of pushing them apart as before. Booth says we have no word for this dynamic, because it has never happened before. He suggests "forced cooperation."
Booth describes America's ability to create as much debt as it wants as "theft at the monetary base layer." Readers may be familiar with the Cantillon effect, in which those closest to the source of new money benefit from it while those farther away suffer. The global Cantillon effect is that the United States benefits from printing the world's reserve currency while poor countries all over the world suffer.
"But a bitcoin standard," Booth said, "stops this."
How much of the world's debt is odious debt? Trillions of dollars in loans were created at the whim of dictators together with supranational financial institutions nobody elected. Odious debts were incurred without ever asking the consent of the people on the borrowing side. The moral thing to do would be to cancel them all. Of course that will never happen, because those loans are, in the end, assets on the balance sheets of the creditors of the World Bank and the IMF, who want those assets to remain, asking only that the debtors take out new loans to service the old ones.
The IMF's "put option" on sovereign debt has created the biggest bubble there has ever been: bigger than the dot-com bubble, bigger than the subprime mortgage bubble, bigger than the bubble blown by COVID-19 stimulus money. Tearing down this old system will be painful work, but it is the right thing to do. If debt is the drug, the World Bank and the IMF are the dealers and the governments of developing countries are the addicts; neither side is likely to want the other to stop. But to cure an addict, you have to get them into rehab. The fiat system does not allow this; a bitcoin monetary standard will leave the patient no choice but to quit.
Translator's note: - The dot-com bubble was a speculative bubble in the US technology stock market between 1997 and 2000. - The subprime mortgage bubble was a bubble driven by lending in the US real-estate sector that became a financial crisis around 2008.
As Saifedean Ammous put it in an interview for this article, today if Brazil's rulers want to borrow 30 billion US dollars and the US Congress approves the loan, the United States can snap its fingers, conjure the money and deliver it through the IMF. Today's monetary system, in other words, runs chiefly on political decisions. But if we take away the money printer, these decisions become less political and begin to be made prudently, the way a bank "that understands no one will ever bail it out again" would make them.
Across the past 60 years of World Bank and IMF domination, countless tyrants and the networks that embezzled their countries' wealth received aid loans (contrary to all financial common sense) so that the core countries could keep seizing those countries' natural resources and labor. All of this was possible because the governments at the heart of the fiat system could print the reserve currency. Under a bitcoin standard, Ammous asks, who would extend billions of dollars of high-risk loans merely in exchange for the borrower showing up later to restructure, rolling old debt into new?
"Would you make that loan?" he asked. "And whose bitcoin would you lend?"
⚡️ Zap this link to support the team that prepared this translation.
(Every zap is automatically split between the wallets of the author of the original English article, the first-draft translator, the editors of drafts 2-3, and the editorial and proofreading team at Right Shift, with a portion reserved for transaction fees.)
-
@ 52b4a076:e7fad8bd
2023-11-10 23:08:02Nostr has a funding problem. Developers and infrastructure are severely underfunded and reliant on flawed economic models, and this could pose a risk to the future of Nostr as a protocol.
To understand it, we need to understand what resources are needed to make Nostr work, how they aren't funded properly and what could happen next.
The costs of what makes Nostr work
Nostr works because:
1. Developers build clients on it
2. There is infrastructure to support clients
If there are no developers, there are no clients. If there is no infrastructure, clients have no purpose.
The costs of developing a client, and the reliance on developers
Clients require time to develop, and money to run infrastructure for.
Without developers, there would be no clients, and no Nostr.
Developers have lives and need to make money somehow in exchange for the time they spend. Developing a client is a significant cost, even for small ones (assuming 1 hr/day, no infrastructure costs and the average salary of a software developer, $1500) and needs to be covered somehow.
We have multiple models, all of which have large downsides:

1. Donations/V4V/Bounties: This model suffers from the problem that a minority pays for the majority, which will lead to the majority demanding exclusive benefits for their money or otherwise cutting off funding since they have no reason to pay. This also suffers from the fact that donations are unreliable.
2. Grants from OpenSats and similar non-profits: These suffer from the same problems as the donations model, but also suffer from the following problems:
   - the managers of these non-profits may have views not aligned with their donors, leading to misfunding.
   - such projects are mostly stopgaps that add additional complexity to a direct donation model.
3. Paywalled features: It is very hard to find the balance between paywalling enough features to make money, and discouraging too many users from using the client. There may not even be such a sweet spot.
4. Cuts: It is extremely hard to balance these so that people don't complain, and it is likely that there will be forks of FOSS clients that remove these features by some users.
5. Paid clients: People do not want to spend a lot on services that they expect to be free or cheap, and spending $5/month on this client, $10/month on that client, and so on won't scale, even though that is way less than the actual value they are getting.
6. Ads: Ads are usually underpaying and mostly make money for the ad companies instead of the client developers. Ads are also an invasion of privacy and may not be well received by some users.
7. VC funding: VCs put profit above protocol health, which may accelerate some issues that I will discuss later. They also may disincentivize the development of some apps (uncensored social media for example) for pushing their own agenda.

Even if we find some good way to fund clients, it doesn't end there...
The cost of infrastructure
There are multiple types of infrastructure for Nostr, such as relays, services like Noswhere's search relay, push notifications, etc. All of these cost money to operate, and are the other half of what make Nostr work.
These have even more limited funding options, which have even bigger downsides:

1. Client funding: Clients already struggle on funding as I discussed in the previous section. This would mean infrastructure is even more underfunded.
2. User payments: Users do not understand the details and importance of infrastructure, and have no reason to fund it. Making this problem worse, infrastructure providers can falsely advertise their services, diverting money away from infrastructure that is higher quality and should be funded. This is already happening.
3. Grants from OpenSats and similar non-profits (relays only): Again, these suffer from problems specified in the last section about grants. These entities will likely want the highest value from their donations, therefore leading them to encourage a few big relays rather than many medium-sized ones. Since relays are more important infrastructure, and they could have more control, these entities can also exert more control over the network.
4. Data harvesting and selling: This would discourage people from using their providers, but this is likely going to happen to some extent. The issue is that it would not generate sufficient revenue for the amount of users it will drive away.

Both infrastructure and developers being underfunded can lead to issues that may kill the protocol, which I'll discuss in the next section.
The risks of improper funding
1. The protocol fizzles out and dies without reaching critical mass
This is one of the less likely options, since there will probably be people developing for the sake of it, but it remains possible. With client developers being underfunded and infrastructure shutting down, Nostr would become smaller and worse to use until it completely fizzled out, except maybe for a few people.
2. The enshittification of Nostr
This is the most likely outcome, and the worst one. As Nostr continues developing, developers and infrastructure developers will want to maximize revenue, so they will begin by making good products to attract users at a loss.
After they have a sufficiently large user base, they would slowly erode bridges to their competitors, only leaving what is required so that their users won't complain.
After this stage, it is likely that clients will start merging with other ones to make larger "everything" apps and kill the last bridges, turning them into proprietary walled gardens, returning us to where we are today.
How do we fix this?
I have no idea. Please share your opinions if you do :)
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Flávia Tavares's interview with Olavo de Carvalho
I haven't read all of Olavo's complaints, but I read some of them. I also haven't read the whole story that ran in Época, because I didn't have the patience, but I watched both videos of the interview that Olavo published.
Having read Olavo's many complaints first, I expected to find in the video a fake person, someone who feigned friendliness to extract information she would later use to destroy Olavo's image, but I saw nothing of the sort.
Of course she could have fooled me too, if she fooled Olavo. But in the story itself I likewise saw nothing beyond sincerity -- perhaps not journalistic excellence, but nothing I wouldn't expect from any piece in any magazine. Flávia Tavares didn't understand many things, but she didn't pretend to have understood nothing; she was simply and honestly Flávia Tavares, as she herself declared at the end of the interview video: "look, I didn't fake anything here, okay?".
The most important part of all this, though, are the portions of the story that present ideas hard to conceive, like Olavo's ideas about world government or the spread of pedophilia. In every public or private discussion, these ideas are forbidden. Many people may agree that the left is no good, but nobody in their right mind will admit the possibility of any significant intention to establish a world government or to spread pedophilia. The same mocking little face your leftist friend would make at the mere mention of these subjects is the one Flávia Tavares uses in her text when she wants to show that Olavo is a bit nutty. The mocking face comes unaccompanied by any serious reflection or attempted refutation, always.
Link to the story: http://epoca.globo.com/sociedade/noticia/2017/10/olavo-de-carvalho-o-guru-da-direita-que-rejeita-o-que-dizem-seus-fas.html?utm_source=twitter&utm_medium=social&utm_campaign=post Videos: https://www.youtube.com/watch?v=C0TUsKluhok, https://www.youtube.com/watch?v=yR0F1haQ07Y&t=5s
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Lightning and its fake HTLCs
Lightning is terrible but can be very good with two tweaks.
How Lightning would work without HTLCs
In a world in which HTLCs didn't exist, Lightning channels would consist only of balances. Each commitment transaction would have two outputs: one for peer `A`, the other for peer `B`, according to the current state of the channel.
When a payment was being attempted to go through the channel, peers would just trust each other to update the state when necessary. For example:
- Channel `AB`'s balances are `A[10:10]B` (in sats);
- `A` sends a 3sat payment through `B` to `C`;
- `A` asks `B` to route the payment. Channel `AB` doesn't change at all;
- `B` sends the payment to `C`, `C` accepts it;
- Channel `BC` changes from `B[20:5]C` to `B[17:8]C`;
- `B` notifies `A` the payment was successful, `A` acknowledges that;
- Channel `AB` changes from `A[10:10]B` to `A[7:13]B`.
This is the case of success: everything is fine, no glitches, no dishonesty.
But notice that `A` could have refused to acknowledge that the payment went through, either because of a bug, or because it went offline forever, or because it is malicious. Then the channel `AB` would stay as `A[10:10]B` and `B` would have lost 3 satoshis.
How Lightning would work with HTLCs
HTLCs are introduced to remedy that situation. Now instead of commitment transactions always having only two outputs, one to each peer, they can have HTLC outputs too. These HTLC outputs could go to either side depending on the circumstance.
Specifically, the peer that is sending the payment can redeem the HTLC after a number of blocks have passed. The peer that is receiving the payment can redeem the HTLC if they are able to provide the preimage to the hash specified in the HTLC.
Now the flow is something like this:
- Channel `AB`'s balances are `A[10:10]B`;
- `A` sends a 3sat payment through `B` to `C`:
- `A` asks `B` to route the payment. Their channel changes to `A[7:3:10]B` (the middle number is the HTLC);
- `B` offers a payment to `C`. Their channel changes from `B[20:5]C` to `B[17:3:5]C`;
- `C` tells `B` the preimage for that HTLC. Their channel changes from `B[17:3:5]C` to `B[17:8]C`;
- `B` tells `A` the preimage for that HTLC. Their channel changes from `A[7:3:10]B` to `A[7:13]B`.
Now if `A` wants to trick `B` and stops responding, `B` doesn't lose money: since `B` knows the preimage, `B` just needs to publish the commitment transaction `A[7:3:10]B`, which gives him 10sat, and then redeem the HTLC using the preimage he got from `C`, which gives him 3 sats more. `B` is fine now.
In the same way, if `B` stops responding for any reason, `A` won't lose the money it put in that HTLC: it can publish the commitment transaction, get 7 back, then redeem the HTLC after the certain number of blocks has passed and get the other 3 sats back.
How Lightning doesn't really work
The example above about how the HTLCs work is very elegant but has a fatal flaw in it: transaction fees. Each new HTLC added increases the size of the commitment transaction, and it requires yet another transaction to be redeemed. If we consider fees of 10000 satoshis, that means any HTLC below that amount is as if it didn't exist, because we can't ever redeem it anyway. In fact the Lightning protocol explicitly dictates that if HTLC output amounts are below the fee necessary to redeem them they shouldn't be created at all.
What happens in these cases then? Nothing: the amounts that should be in HTLCs are moved to the commitment transaction miner fee instead.
So, considering a transaction fee of 10000sat for these HTLCs, anyone sending Lightning payments below 10000sat is operating according to the unsafe protocol described in the first section above.
It is actually worse, because consider what happens when a channel in the middle of a route has a glitch or one of the peers is unresponsive. The other node, thinking they are operating in the trustless protocol, will proceed to publish the commitment transaction, i.e. close the channel, so they can redeem the HTLC -- only then do they find out they are actually in the unsafe-protocol realm and there is no HTLC to be redeemed at all, and they lose not only the money but also the channel (which cost a lot of money to open and close, in overall transaction fees).
One of the biggest features of the trustless protocol is payment proofs. Every payment is identified by a hash, and whenever the payee releases the preimage relative to that hash it means the payment was complete. The incentives are in place so all nodes in the path pass the preimage back until it reaches the payer, who can then use it as the proof that he has sent the payment and the payee has received it. This feature is also lost in the unsafe protocol: if a glitch happens or someone goes offline on the preimage's way back, there is no way the preimage will reach the payer, because no HTLCs are published and redeemed on the chain. The payee may have received the money, but the payer will not know -- and the payee will lose the money sent anyway.
The end of HTLCs
So considering the points above you may be sad, because in some cases Lightning doesn't use these magic HTLCs that give meaning to it all. But the fact is that, no matter what anyone thinks, HTLCs are destined to be used less and less as time passes.
The fact that over time Bitcoin transaction fees tend to rise, and also the fact that multipart payments (MPP) are increasingly being used on Lightning for good, means we can expect that soon no HTLC will ever be big enough to be actually worth redeeming, and we will be at a point in which not a single HTLC is real and they're all fake.
Another thing to note is that the current unsafe protocol kicks in whenever the HTLC amount is below what the Bitcoin transaction fee to redeem it would be, but this is not a reasonable algorithm. It is not reasonable to lose a channel and then pay 10000sat in fees to redeem a 10001sat HTLC. At which point does it become reasonable to do it? Probably at an amount many times above that, so it would be reasonable to even increase the threshold above which real HTLCs are made -- thus making their existence rarer and rarer.
These are good things, because we don't actually need HTLCs to make a functional Lightning Network.
We must embrace the unsafe protocol and make it better
So the unsafe protocol is not necessarily very bad, but the way it is being done now is, because it suffers from two big problems:
- Channels are lost all the time for no reason;
- No guarantees exist that the proof-of-payment will ever reach the payer.
The first problem we fix by simply stopping the current practice of closing channels when there are no real HTLCs in them.
That, however, creates a new problem -- or actually it exacerbates the second: now that we're not closing channels, what do we do with the expired payments in them? These payments should have been either canceled or fulfilled before some block x; now we're in block x+1, our peer has returned from its offline period, and one of us will have to eat the loss from that payment.
That's fine, because it's only 3sat and it's better to just lose 3sat than to lose both the 3sat and the channel anyway, so either one would be happy to eat the loss. Maybe we'll even split it 50/50! No, that doesn't work, because it creates an attack vector, with peers becoming unresponsive on purpose on one side of the route while actually failing/fulfilling the payment on the other side and making a profit from that.
So we actually need to know who is to blame for these payments, even if we are not going to act on it immediately: we need some kind of arbiter that both peers can trust, such that if one peer is trying to send the preimage or the cancellation to the other and the other is unresponsive, when the unresponsive peer comes back the arbiter can tell them they are to blame, so they can willfully eat the loss and the channel can continue. Both peers are happy this way.
If the unresponsive peer doesn't accept what the arbiter says, the peer that was operating correctly can assume the unresponsive peer is malicious and close the channel, then blacklist it and never again open a channel with a peer they know is malicious.
Again, the differences between this scheme and the current Lightning Network are that:
a. In the current Lightning we always close channels; in this scheme we only close channels in case someone is malicious or in other worst-case scenarios (the arbiter being unresponsive, for example).
b. In the current Lightning we close the channels without having any clue about who is to blame, then we just proceed to reopen a channel with that same peer even when they were actively trying to harm us before.
What is missing? An arbiter.
The Bitcoin blockchain is the ideal arbiter. It works in the best possible way if we follow the trustless protocol, but as we've seen we can't use the Bitcoin blockchain because it is expensive.
Therefore we need a new arbiter. That is the hard part, but not unsolvable. Notice that we don't need an absolutely perfect arbiter: anything is better than nothing. Even an unreliable arbiter that is offline half of the day is better than what we have today; so is an arbiter that lies, or an arbiter that charges some satoshis for each resolution.
Here are some suggestions:
- random nodes from the network, selected by an algorithm that both peers agree to, so they can't cheat by selecting themselves. The only thing these nodes have to do is store data from one peer, try to retransmit it to the other peer and record the results for some time;
- a set of nodes preselected by the two peers when the channel is being opened -- same as above, but with more handpicked trust involved;
- some third-party cloud storage or notification provider with guarantees of having open data in it and some public log-keeping, like Twitter, GitHub or a Nostr relay;
- peers that get paid to do the job, selected by the fact that they own some token (I know this is stepping too close to shitcoin territory, but it could be an idea) issued in a Spacechain;
- a Spacechain itself, serving only as the storage for a bunch of `OP_RETURN`s that are published and tracked by these Lightning peers whenever there is an issue (this looks wrong, but could work).
Key points
- Lightning with HTLC-based routing was a cool idea, but it wasn't ever really feasible.
- HTLCs are going to be abandoned, and that's the natural course of things.
- It is actually good that HTLCs are being abandoned, but
- We must change the protocol to account for the existence of fake HTLCs and thus make the bulk of Lightning Network usage viable again.
See also
- Channel
-
@ 3bf0c63f:aefa459d
2023-11-08 15:06:55I think it's undeniable at this point that the creation of Lightning Address has made the usage of custodial Lightning solutions increase -- or at least be more prominent. We see these stats everywhere of how many Nostr Lightning Addresses are custodial, and how services like Nodeless and Geyser only work if you give them a Lightning Address, and that probably sounds like a lament to many people who thought Lightning would be this amazing network of self-hosted nodes that everybody runs, and is indeed sad.
Well, on the other hand the Lightning Address flow is clearly an improvement -- to most people -- over the cumbersome invoice flow. Since the early days of Lightning people had been complaining about the fact that there is no way to "just send money to an address". Even though I personally liked the invoice flow much more (especially if they had structured, signatures, clear descriptions and amounts, payment proofs and so on) the fact is that most people didn't and slowly invoices were losing their meaning entirely anyway.
This improvement in the payment flow, along with the open, easy and interoperable way that Lightning Addresses work, has opened space for new use cases that maybe wouldn't have existed otherwise -- like the services mentioned above, and others. So we can't really say this was all a bad thing.
We also shouldn't say it was a bad thing to have Lightning Addresses being invented at all, because if they hadn't been invented like they were, there is a high probability they would have been invented in other forms, probably much worse, that would involve private deals and proprietary integrations with ad-hoc APIs, SDKs and JavaScript widget buttons with iframes. I even remember some of these things starting to happen at the time Lightning Address was created, and we have more evidence of these things even after Lightning Address, like some "partnerships" here and there and the "UMA" protocol. So maybe Lightning Address was just the best possible protocol at the best time, and all its problems are not really its problems, but symptoms of the problems of the Lightning Network (or maybe you wouldn't call these "problems", just natural properties, but it doesn't matter).
The saddest realization of all this process, for me, was that Lightning payments are mostly used for tipping and not for commerce, as I thought they would be in the beginning (hence my love for the invoice flow). Specifically, LNURL-pay and its cool hidden features, which went mostly unsupported, were all designed with the goal of enabling new use cases for commerce in real life, outside the web, and Lightning Addresses would have tied nicely into that vision too -- but that was all definitely an irredeemable failure.
I thought I had more things to say about this, but either I didn't or I forgot. The end.
-
@ 2edbcea6:40558884
2023-11-05 18:23:21Happy Sunday #Nostr !
Here’s your #NostrTechWeekly newsletter brought to you by nostr:npub19mduaf5569jx9xz555jcx3v06mvktvtpu0zgk47n4lcpjsz43zzqhj6vzk written by nostr:npub1r3fwhjpx2njy87f9qxmapjn9neutwh7aeww95e03drkfg45cey4qgl7ex2
The #NostrTechWeekly is a weekly newsletter focused on the more technical happenings in the nostr-verse.
Before we dive in I want to congratulate the winners of the NostrAsia hackathon: eNuts, Shopstr, Zappdit, and the runner up Nosskey. Thanks for hacking away and building for the community. 😊
Let’s dive in!
Recent Upgrades to Nostr (AKA NIPs)
1) (Proposed) Updates to NIP 72: Moderated Communities
As mentioned in previous weeks, moderated communities as currently outlined in NIP 72 publish kind 1 events (tweet-like text notes), and they show up without context in clients like Damus, Amethyst, Snort, etc.
One solution to the lack of context is to make activities in moderated communities use event kinds other than 1 so that you have to be in a client specialized for moderated communities to see those activities.
There have been a few attempts to update NIP 72 to accomplish that goal, and this is the latest iteration. Unique to this proposal is the idea of community scoped user data, not just posts. For example you could follow people within the community within the scope of the community but not add them to your general following list for your whole Nostr account. They’d show up in your feed in the moderated community but not in your general feed.
There’s a lot of potential in moderated communities to become a core pillar of Nostr usage as the network scales, especially since the censorship and API management regimes of Reddit have gotten more restrictive. People want better and Nostr can be that solution.
Author: nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6
2) (Proposed) NIP 49: Nostr Wallet Auth
Having a Lightning Wallet connected to Nostr is a powerful tool for content monetization and nostr-based marketplaces. There’s a lot of great work done on Nostr Wallet Connect, but it all “hinges on having the user copy-paste a wallet connection URI into the app they wish to connect with. This can be a UX hurdle and often has the user handling sensitive information that they may not understand.”
“This NIP proposes a new protocol that solves these problems by having the wallet and app generate a NWC connection URI together. This URI is then used to connect the wallet and app.”
Making it easier for new users (especially those that are less technical) to participate in the new economy is fundamental to making Nostr the best place for content creators to make a living without middlemen and heavy handed platforms. 💪
Author: nostr:npub1u8lnhlw5usp3t9vmpz60ejpyt649z33hu82wc2hpv6m5xdqmuxhs46turz
3) (Proposed) Updates to NIP 15: Nostr-based Marketplaces
This proposal expands the capabilities of Nostr-based Marketplaces to support auctions. Think about a Nostr-based eBay where users can post an item for sale by auction using a marketplace Nostr client.
The auction going live is a Nostr event, and users can publish bids as Nostr events, and when the auction is closed it’ll be determined by the bid event with the highest offering. Which anyone can verify. This is a very interesting way to make marketplaces more censorship resistant.
Author: ibz
Notable Projects
Yondar (social maps) 🌐
Much physical commerce is discovered by consumers via map apps. You go on Google or Apple Maps to find an art supply store near you, or coffee shops that are open at 2pm when you’re in a new city. These centralized solutions have dominated, but no more! Enter Yondar.
Yondar is a Nostr client that allows users to publish places on a map. This could be places of business or events, really anything that has a location.
True to Nostr form, people can also socially interact with these places: do a kind 1 comment in response to an upcoming event published on Yondar. Or someone could create a Nostr client for restaurant reviews and publish one of those in response to a place published via Yondar. The possibilities are endless 🤯.
Author(s): npub1arkn0xxxll4llgy9qxkrncn3vc4l69s0dz8ef3zadykcwe7ax3dqrrh43w
WavLake 🎵
WavLake is a music player for web and mobile. It’s part Spotify music player and music discovery tool, part music monetization platform. They host the actual music files in their own cloud hosted system, but they publish Nostr events to represent the content that’s stored by WavLake, so other clients could help people discover music hosted and published via WavLake.
If musicians can get paid more for their music via solutions like WavLake, they will start using Nostr purely for selfish monetary reasons. This could be the start of purple-pilling that particular artistic community.
P.S. I listened to WavLake Radio while writing this, it was nice writing music.
Authors: based on the linked Github repo it’s nostr:npub1j0shgumvguvlsp38s49v4zm8algtt92cerkwyeagan9m6tnu256s2eg9a7 and blastshielddown
Opt-in content moderation on nos.social 👍👎
nostr:npub1wmr34t36fy03m8hvgl96zl3znndyzyaqhwmwdtshwmtkg03fetaqhjg240 gave a presentation at NostrAsia about making Nostr more accessible for normies. One of the items that was announced was the implementation of opt-in content moderation in his Nostr client nos.social.
They now support content warnings set by users publishing content, reporting content with various tags, as well as the ability for users to opt-in to moderation that changes their feed based on what they want to filter out.
People that don’t want moderation don’t need to use it, but I applaud nos.social’s work to develop these tools and patterns so that people have the choice to moderate their own feed.
Latest conversations: Interoperability
NostrAsia really highlighted the superpowers of Nostr and one of the foremost is interoperability. nostr:npub1l2vyh47mk2p0qlsku7hg0vn29faehy9hy34ygaclpn66ukqp3afqutajft gave a great talk about this topic and explained NIP 31 and NIP 89.
In summary, imagine there’s a Nostr client for movie reviews and they publish the review in a specific structure with Kind 12345678. Most Nostr clients will see that event but won’t know how to display that content or help users interact with it.
If clients adopt NIPs 31 and 89 then Nostr events will have an “alt” tag and a way to find a suggested app to handle that type of Nostr content. The alt tag helps clients display something to the user and the suggested app handler helps the client indicate to the user how to interact with that content more natively.
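As a rough illustration, reusing the hypothetical movie-review kind from above (the kind number and content are made up; only the idea of a human-readable `alt` hint comes from NIP 31), such an event might look like:

```json
{
  "kind": 12345678,
  "content": "...structured movie review data...",
  "tags": [
    ["alt", "A movie review published via a specialized Nostr client"]
  ]
}
```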
Maximum interoperability is useful in allowing Nostr to grow without centralizing. We can all experiment and build and all efforts still support the usage of the Nostr ecosystem in general.
Kind 1 clients as gateways to the Nostr app ecosystem
This interoperable future points to a framework where Kind 1 clients (Damus, Amethyst, Snort, Primal, etc) are the entrypoint or gateway to the Nostr ecosystem. Kind 1 clients are the core social use case and the foundation of most social interaction and could be amazing standalone apps and businesses, but Nostr makes them something more.
People will post Kind 1 notes referencing all kinds of Nostr events: music published via WavLake or Stemstr, places shared via Yondar, recipes created on Nostr.Cooking, etc. Clients will display some context on what the comment is on (music, place, video, recipe, etc) and then help the user find the right plugin/app/website to use to interact with that content natively.
This makes the Kind 1 clients the purple pill. This is only possible because of the unique interoperability capabilities of Nostr.
Interoperability with non-Nostr services
Even more amazing is that it's fairly easy to make the Nostr network interoperable with services outside the Nostr-verse. We already have pretty good bridges to ActivityPub-based systems like Mastodon, as well as Twitter (with Exit). That reduces the switching cost for people who want to explore Nostr and eventually make the switch.
Even within the Nostr-verse we have services like nostr.build that provide services in a way that doesn't lock users in. Nostr.build provides file hosting, and because it follows NIP 96 there's a standard way for clients to interact with nostr.build as a user's preferred file storage provider for pictures and videos. It is easy for users to switch providers, and clients can easily pick up the change. Efforts like NIP 96 make file storage much more interoperable across the Nostr-verse without centralization.
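As I read NIP 96, clients discover a server's upload endpoint through a well-known JSON document, which is what makes switching providers a matter of pointing at a different domain. A minimal sketch (the path and the `api_url` field reflect my reading of the NIP, so treat the details as assumptions):

```python
import json
import urllib.request

# NIP 96 discovery sketch: fetch the server's well-known document to find
# the API endpoint used for file uploads. nostr.build is just an example
# domain; any NIP 96 server should expose the same document.
url = "https://nostr.build/.well-known/nostr/nip96.json"
with urllib.request.urlopen(url) as resp:
    info = json.load(resp)

print(info["api_url"])  # the endpoint clients send uploads to
```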
Continuing to build solutions that allow users to have freedom AND a great experience will make Nostr the best game in town.
Don’t reinvent the wheel: use ISO when possible
I'm not sure if every developer knows this, but there's an International Organization for Standardization (ISO) that has done a ton of work to promote standards of all kinds. The most common one developers have all used is the ISO 8601 datetime format. No matter what language or database you're using, you're likely representing timezone-aware datetimes in the ISO format so that you can pass that data around safely.
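For example, in Python (every mainstream language has an equivalent in its standard library):

```python
from datetime import datetime, timezone

# A timezone-aware ISO 8601 timestamp: unambiguous, sortable as a string,
# and safe to pass between languages, databases and serialization formats.
now = datetime.now(timezone.utc)
print(now.isoformat())  # e.g. 2024-04-17T22:52:57.123456+00:00
```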
ISO has codified standards for everything: how to store medical history, how to interact with USB, even stuff as obscure as how to represent data around holograms.
Nostr is doing a lot of work to define structure to data that will be stored as Nostr events. We’re essentially inventing standards for social-related data of all kinds.
There are some things we're building support for in Nostr where I'd encourage folks to see if there's an ISO standard to follow first. It may save you a ton of time and brain power. On top of that, adopting ISO standards makes Nostr more compatible with future development that we can't even imagine yet.
Invest in interoperability to support the cause
The Nostr-verse is under construction and the best time to plan for interoperability is before the building is done. Devs have done a great job retaining interoperability as a Cambrian explosion of Nostr development has taken place. I hope we can continue to invest in interoperability because it may just be the thing that makes Nostr able to take on the giants of Big Tech when the time is right.
Until next time 🫡
If you want to see something highlighted, if we missed anything, or if you’re building something we didn’t post about, let us know. DMs welcome at nostr:npub19mduaf5569jx9xz555jcx3v06mvktvtpu0zgk47n4lcpjsz43zzqhj6vzk
Stay Classy, Nostr.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28On HTLCs and arbiters
This is another attempt at conveying the same information that should be in Lightning and its fake HTLCs. It assumes you know everything about Lightning and will just highlight a point. This is also valid for PTLCs.
The protocol says HTLCs are trimmed (i.e., not actually added to the commitment transaction) when the cost of redeeming them in fees would be greater than their actual value.
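For intuition, here is a back-of-the-envelope sketch of that threshold for a received HTLC on a non-anchor channel. The 703 weight figure is my recollection of the BOLT 3 HTLC-success transaction weight and the dust limit is just a typical value, so treat the exact constants as assumptions:

```python
# Rough sketch of the BOLT 3 trim rule for a received HTLC (non-anchor):
# an HTLC is trimmed when its value is below the channel's dust limit plus
# the fee needed to claim it with an HTLC-success transaction.
HTLC_SUCCESS_WEIGHT = 703  # weight units, per BOLT 3 (as I recall)
DUST_LIMIT_SAT = 546       # a typical dust limit

def trim_threshold_sat(feerate_per_kw: int) -> int:
    htlc_fee = feerate_per_kw * HTLC_SUCCESS_WEIGHT // 1000
    return DUST_LIMIT_SAT + htlc_fee

# At 2500 sat/kw (roughly 10 sat/vbyte), HTLCs below ~2300 sat are trimmed:
print(trim_threshold_sat(2500))  # 2303
```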
This is often dismissed as an unimportant fact (people will say "it's trusted for small payments, no big deal"), but I think it is indeed very important, for 3 reasons:
- Lightning absolutely relies on HTLCs actually existing because the payment proof requires them. The entire security of each payment comes from the fact that the payer has a preimage that comes from the payee. Without that, the state of the payment becomes an unsolvable mystery. The nonexistence of an HTLC breaks the atomicity between the payment going through and the payer receiving a proof.
- Bitcoin fees are expected to grow with time (arguably the reason Lightning exists in the first place).
- MPP makes payment sizes shrink, therefore more and more Lightning payments are to be trimmed. As I write this, the mempool is clear and still payments smaller than about 5000 sat are being trimmed. Two weeks ago the limit was at 18000 sat, which is already below the minimum most MPP splitting algorithms will allow.
Therefore I think it is important that we come up with a different way of ensuring payment proofs are being passed around in the case HTLCs are trimmed.
Channel closures
Worse than not having HTLCs that can be redeemed is the fact that in the current Lightning implementations channels will be closed by a peer once an HTLC timeout is reached, either to fulfill an HTLC for which that peer has a preimage or to redeem back expired HTLCs the other party hasn't fulfilled.
To everybody's surprise, nodes will do this even when the HTLCs in question were trimmed and therefore cannot be redeemed at all. It's very important that nodes stop doing that, because it makes no economic sense at all.
However, that is not so simple, because once you decide you're not going to close the channel, what is the next step? Do you wait until the other peer tries to fulfill an expired HTLC and tell them you won't agree and that you must cancel that instead? That could work sometimes if they're honest (and they have no incentive to not be, in this case). What if they say they tried to fulfill it before but you were offline? Now you're confused, you don't know if you were offline or they were offline, or if they are trying to trick you. Then unsolvable issues start to emerge.
Arbiters
One simple idea is to use trusted arbiters for all trimmed HTLC issues.
This idea solves both the protocol issue of getting the preimage to the payer once it is released by the payee, and the question of what to do with the channels once a trimmed HTLC expires.
A simple design would be to have each node hardcode a set of trusted other nodes that can serve as arbiters. Once a channel is opened between two nodes they choose one node from both lists to serve as their mutual arbiter for that channel.
Then whenever one node tries to fulfill an HTLC but the other peer is unresponsive, they can send the preimage to the arbiter instead. The arbiter will then try to contact the unresponsive peer. If it succeeds, then done, the HTLC was fulfilled offchain. If it fails then it can keep trying until the HTLC timeout. And then if the other node comes back later they can eat the loss. The arbiter will ensure they know they are the ones who must eat the loss in this case. If they don't agree to eat the loss, the first peer may then close the channel and blacklist the other peer. If the other peer believes that both the first peer and the arbiter are dishonest they can remove that arbiter from their list of trusted arbiters.
The same happens in the opposite case: if a peer doesn't get a preimage they can notify the arbiter they haven't received anything. The arbiter may try to ask the other peer for the preimage and, if that fails, settle the dispute in favor of the first peer, which can then proceed to fail the HTLC it has with someone else on that route.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28jq-finder
Made with jq-web, a tool to explore JSON using `jq` queries that build intermediate results so you can inspect each step of the process.

-
@ df67f9a7:2d4fc200
The following article is a repost from my recent issue #864 posted in the NIPs repository.
Nostr is a protocol for events. Sending user generated content to a relay should be one kind of event (a `kind 1` event) regardless of the structure of the content being sent. Different “types” of user generated content (EG: `microblog-post`, `microblog-poll`, `encrypted-dm`, `blog-article`, `photoblog-post`, `calendar-event`, `exchange-question`, `exchange-answer`, etc…) should not require different event kinds. Content type definitions (describing structured content with supported and required tags) should be maintained freely by implementers, not controlled by protocol. By opening up (decentralizing) the architecture for typed content, a potential source of NIP bloat will be “nipped” and the creativity of nostr designers and developers will be released to organically evolve novel solutions for real world content needs.

The protocol for clients and relays to define and communicate about typed content (including supported and required tags for each type) should be a new NIP (see below for a draft). New content types may also be defined by individual NIPs, but these (both existing and proposed NIPs) should only prescribe the “minimal” structure and processing (of supported and required tags) needed for compliance. In the wild, content type implementations should be defined and updated “ad hoc” between clients and relays.
==Free the “other stuff”!==
Use Case Example
- Client “NotX” implements three content types: `microblog-post`, `encrypted-dm`, and `microblog-poll`.
- The first is a stock NIP implementation. The second is a NIP content type with an added `location` tag. The last one is a custom content type being tested by the client.
- These content types are defined locally by the client. Their `kind` and `description` provide the basis for easily identifying the client’s IRL use case.
- Each content type definition also includes a list of supported and required `tags`. EG: the `microblog-poll` content type has `poll-topic`, `poll-options`, and `close-date` as required tags, and `publish-date` as a supported tag.
- These content type definitions also describe the data expected and other properties for each tag.
- Upon WSS connection (during the “handshake” phase) the client sends its list of supported content types to the relay for comparison to the relay’s “local master” list of supported types. Any discrepancies in the list of `tags` for a given content type are either accepted (as updates) or rejected by the relay.
- Relays dictate for themselves which content types (and which tags for each type) they will accept from a given client.
- The connected relay A has been updated to work with the `location` tag for the `encrypted-dm` content type (it’s a common NIP mod these days) and will accept kind 1 events for this type.
- Relay A rejects the novel `microblog-poll` content type, but logs the client request for future updates.
- The client connects to Relay B (whose operator is working closely with the client developer) to support the novel content type. Relay B has been updated already, and will accept kind 1 events for the `microblog-poll` content type.
NIP Draft: Kind 1 User Generated Content Type
A NIP proposal (PR) will be forthcoming, depending on the discussions in this issue. Here are some possible highlights of what such a NIP would specify:
User Generated Content
User generated content refers to personally authored text or media content (sent as event data) which human users wish to “publish” to relays for consumption by other human users.
User generated content does NOT include:

- events describing an action (EG: boost, like, vote, edit, or delete) that a user may take on already published content
- event data for user profile content
- event data for reward or payment systems (EG: badges created or awarded, or zaps made to users or events)
- event data that defines or configures a “grouping” of other users or events (EG: “lists”, or “communities”)
- event data intended for consumption by an external API (EG: DVMs, ai bots, IOT, etc…)
- event data that simply “adds function” to existing event kinds (EG: event encryption or replaceability)
NIPs Prescribing User Generated Content Types
NIPs should define new user generated content types as a `kind 1` event (rather than a new event kind). Each content type will have a `type` and `description` that clearly illustrate its use case or intended “style” of app. NIPs will also suggest the minimal `tags` (content structure) and processing rules required for implementations to be NIP compliant.

In addition, NIPs may propose a new `tag` (and rules for processing it) that may be added to any (or a specific) content type. (For example: “gift wrap” encryption and “replaceable event” may become tags for events, rather than event kinds themselves.)

Supported Content Types via Handshake
Client implementers will send a list of supported content types as part of a “handshake” document when web socket is first connected.
Format for the content type list:

```
{ ..., "content": <array[object]>, ... }
```

Format for defining a supported content type:

```
{ "kind": 1, "type": <string>, "description": <string>, "tags": <array[array]>, "legacy_kind": <number> }
```

Format for supported tags in a content type definition:

```
[<tag_id>, <default_value>, <data_type>, <is_required>]
```
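Pulling those formats together, a handshake document for the “NotX” client from the earlier example might look like this, sketched as a Python dict. Every name and value here is illustrative, not prescribed:

```python
# Hypothetical handshake content list for the "NotX" client, following the
# formats above. All tag names, defaults and the legacy_kind value are
# illustrative only.
handshake = {
    "content": [
        {
            "kind": 1,
            "type": "microblog-poll",
            "description": "A short poll attached to a microblog feed",
            "tags": [
                # [tag_id, default_value, data_type, is_required]
                ["poll-topic",   "",  "string",   True],
                ["poll-options", [],  "array",    True],
                ["close-date",   "",  "datetime", True],
                ["publish-date", "",  "datetime", False],
            ],
            "legacy_kind": 11111,  # whatever kind legacy relays expect
        },
    ],
}
```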
Kind 1 Content Typed Events
All user generated content should be a `kind 1` “content typed” event. The content type will be communicated via the `c` tag (as opposed to a new event `kind` number). Structured content will be conveyed using standard tags in the event object, as defined in the handshake document for this content type.

```
{
  "id": <event_id>,
  "pubkey": <user_pubkey>,
  "created_at": <event_timestamp>,
  "kind": 1,
  "tags": [
    ["c", <content_type>],
    [<tag>, <tag_value>],
    ...
  ],
  "content": <note_content>,
  "sig": <event_sig>
}
```
Update Path for Implementers
For backwards compatibility with legacy clients and relays, implementers should deprecate but not remove legacy “event kind content” event handling. As the implementation code is updated to support “content typed events”, the “supported types” list in the “handshake” document will reference the `legacy_kind` kind number that the legacy client or relay will be expecting. Using this info, implementers can send the event in the legacy “event kind content” format as needed. (If nostr supports log output via event tags, this can be used as well to inform of possible updates.)

Related NIPs & Issues
These NIPs will Need Updating
(to redefine prescribed event kinds as Kind 1 content types, freeing clients and relays to update them as needed “in the wild”)
- NIP-04: Encrypted Direct Message
- NIP-18: Reposts
- NIP-23: Long-form Content
- NIP-28 (kind 42): Channel Message
- NIP-52: Calendar Events
- NIP-72: Community Post
- NIP-99: Classified Listing
These NIPs play well with content typed events
(by defining tags for events rather than new event kinds.)
- NIP-14: Subject Tags in Text Events
- NIP-32: Labeling
- NIP-36: Sensitive Content
- NIP-48: Proxy Tags
- NIP-72: Community Post
These PRs will need tweaking:
- #848: using multiple independent kinds for community scoped events
- #814: Internet of Things on nostr (nip 107)
- #817: IOT sensors and intent
- #811: decentralized web hosting
- #787: draft of nip-34: decentralized wikis
These Issues will be closer to resolving:
- #162: Decentralizing the NIPs registry
- #394: add content-type tag for encrypted direct messages
- #520: Different nostr: schemes for different types of apps/nips
- #583: nip-23 used for private journals
Conclusion
This architecture of “customizable content types” is not new. The best CMSs for over a decade have relied on this to allow website administrators to create new types of content (“pages”) with custom elements (“fields”) to suit their business needs. Indeed, some go even further and abstract other “entities” (not limited to user generated content) to use this modular “field-able” architecture.
Nostr can take this modular content architecture to the next level, by embedding it into a protocol for ANY developer, rather than a single code base for some. By defining a “means” of communicating content between clients and relays, rather than the structure of what “should be” communicated, nostr NIPs can become the open and long lasting protocol needed for decentralized social media.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Timeu
The four elements, the sphere as the most perfect shape, the five senses, pain as disturbance and pleasure as return, the demiurge who creates the best he can with the matter he has, the concept of hard and soft, all these things that they teach in schools and in cartoons, or that somehow enter our consciousness as if they were a truth, but always a provisional, childish truth -- like the childish names for the fingers (louse-killer, cake-poker, etc.) -- which even children know isn't really true.

It seems that all these things are in this book. Maybe even the classification of the five fingers as louse-killer and so on, but maybe I slept through that part.

I wonder whether these things weren't traditionally taught in the Middle Ages as absolute truth (after all, there was Plato saying so, in his only work) and have persisted to this day in a tradition that carries on by fits and starts, against everything and everyone, without anyone knowing how, a body of knowledge that nobody believes in but finds beautiful anyway, harmonious, and which comes, stripped of its origins and primary sources and of all its context, to disturb children's understanding of the world.
-
@ 124b23f2:02455947
In my previous post, I explained how to use your getalby ln address to receive zaps directly to your LND node. Similarly, there is an additional option one can utilize to receive zaps directly to your lightning node: lnaddress.com.
Lnaddress.com is a federated lightning address server that you can use to create a custom ln address. Unlike using getalby, lnaddress.com can be used with any lightning implementation (not just LND). For the purposes of this write-up, I am going to use LNBits to connect an lnaddress.com lightning address with my node. And as will be the case with most of my write-ups, I am going to be using @npub126ntw5mnermmj0znhjhgdk8lh2af72sm8qfzq48umdlnhaj9kuns3le9ll OS, so users of that OS will likely find this write-up most useful, but I'm sure people using other node interfaces can infer how to complete this set up as well.
With that said, let's dive into the step-by-step on how to create your own custom ln address with lnaddress.com and set it up to receive zaps directly to your lightning node:
*Users should have lnbits set up with their lightning node before proceeding.
1. Go to lnaddress.com. Input your desired username, select 'Node Backend Type' = LNBits, and if necessary check the box 'This is a new lightning address'. Keep this page open in one tab as we will be returning to it to input info.
2. From your Start9 OS services page, go to your LNBits service. Open the 'Properties' page, and in a new tab, open the (Tor) Superuser Account. The page will look like this:
http://usri6wfilambri77iwnapyyme43la4v2hgecoismowluriekrhuvxnid.onion/content/images/2023/10/pic1-1.png
From this LNbits page, you can choose to 'add a new wallet' and use that wallet instead of your superuser account. That is up to you but the steps will be the same.
3. Now we need to grab the info for the 'Host Protocol + IP or Domain + Port' field on the lnaddress.com page. On the LNBits page, expand the 'Api Docs' section, then the 'Get Wallet Details' section on the right-hand side menu. From the 'curl example' there, copy 'http://xxxxxx.onion' (don't copy any more!) and paste it into the 'Host Protocol + IP or Domain + Port' field on the lnaddress.com page.
http://usri6wfilambri77iwnapyyme43la4v2hgecoismowluriekrhuvxnid.onion/content/images/2023/10/pic2-1.png
4. Next, we need to grab the key for your LNBits wallet. From the LNBits page, expand the 'Api Docs' section found on the right-hand side menu. Copy the 'Invoice/read key' (make sure to use the invoice/read key and not your Admin key) and paste it into the key field found on the lnaddress.com page. Upon pasting in that last piece of info, click 'submit' at the bottom of the page.
5. If all info was input correctly, your connection will be successful. If successful, you will be brought to a page that looks like this:
http://usri6wfilambri77iwnapyyme43la4v2hgecoismowluriekrhuvxnid.onion/content/images/2023/10/pic3-1.png
You will want to save this secret PIN in case you need to update info in your ln address. You'll also find a test lightning invoice of 1 sat. Using a wallet that is not connected to the node behind your new ln address, you can test the address by paying the 1 sat test invoice.
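As an extra sanity check, any lightning address can be resolved by hand through its LNURL-pay well-known URL (this is the standard lightning address lookup; 'yourname' is a placeholder for the username you registered):

```python
import json
import urllib.request

# Resolve a lightning address by hand via the LNURL-pay well-known path.
# Replace "yourname" with the username you registered on lnaddress.com.
url = "https://lnaddress.com/.well-known/lnurlp/yourname"
with urllib.request.urlopen(url) as resp:
    params = json.load(resp)

print(params["callback"])                            # invoice request URL
print(params["minSendable"], params["maxSendable"])  # bounds in millisats
```

If that returns a JSON document with a callback URL, wallets will be able to pay the address.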
Users of Start9 OS might find the following info particularly useful. This ln address via lnaddress.com comes with a couple of advantages aside from self-custodial zap receiving:
- One, you can have a custom ln address username to go with your nym or nostr username. Users of Start9 may be familiar with the ln address one can generate in the btcpay server service. This ln address is not customizable.
- Two, if you are running a tor only lightning node, you will be able to receive zaps from both tor and clear net lightning nodes. Users of Start9 may be familiar with the ln address one can generate in the btcpay server service. This ln address can only receive zaps from other tor nodes and can't receive zaps from clear net nodes.
That is it! You should now be all set up with your new ln address hosted on lnaddress.com, ready to receive zaps or lightning payments of any kind :)
-