-
@ 42342239:1d80db24
2024-04-05 08:21:50
Trust is a topic increasingly being discussed. Whether it is trust in each other, in the media, or in our authorities, trust is generally seen as a cornerstone of a strong and well-functioning society. The topic was also the theme of the World Economic Forum at its annual meeting in Davos earlier this year. Even among central bank economists, the subject is becoming more prevalent. Last year, Agustín Carstens, head of the BIS ("the central bank of central banks"), said that "[w]ith trust, the public will be more willing to accept actions that involve short-term costs in exchange for long-term benefits" and that "trust is vital for policy effectiveness".
It is therefore interesting when central banks or others pretend as if nothing has happened even when trust has been shattered.
Just as in Sweden and in over a hundred other countries, Canada is planning to introduce a central bank digital currency (CBDC), a new form of money where the central bank or its intermediaries (the banks) will have complete insight into citizens' transactions. Payments or money could also be made programmable: everything from transferring ownership of a car automatically after a successful payment to the seller, to payments being denied if you have traveled too far from home.
"If Canadians decide a digital dollar is necessary, our obligation is to be ready" says Carolyn Rogers, Deputy Head of Bank of Canada, in a statement shared in an article.
So, what do the citizens want? According to a report from the Bank of Canada, a whopping 88% of those surveyed believe that the central bank should refrain from developing such a currency. About the same number (87%) believe that authorities should guarantee the opportunity to pay with cash instead. And nearly four out of five people (78%) do not believe that the central bank will care about people's opinions. What about trust again?
Canadians likely remember the Trudeau government's actions against the "Freedom Convoy". The Freedom Convoy consisted of, among others, truck drivers protesting the country's strict pandemic policies by blocking roads in the capital Ottawa at the beginning of 2022. The government invoked never-before-used emergency measures to, among other things, "freeze" people's bank accounts. Suddenly, truck drivers, and anyone with a mere "connection" to the protests, were unable to pay their electricity bills or insurance, for instance. Superficially, this may not sound so serious, but ultimately it could mean families ending up in cold houses (due to electricity being cut off) and people losing the ability to work (driving uninsured vehicles is not taken lightly). No court rulings were required.
Without the freedom to pay for goods and services, i.e. the freedom to transact, one has no real freedom at all, as several participants in the protests experienced.
In January of this year, a federal judge concluded that the government's actions two years ago were unlawful when it invoked the emergency measures. The use did not display "features of rationality - motivation, transparency, and intelligibility - and was not justified in relation to the relevant factual and legal limitations that had to be considered". He also argued that the use was not in line with the constitution. There are also reports alleging that the government fabricated evidence to go after the demonstrators. The case is set to continue to the highest court. Prime Minister Justin Trudeau and Finance Minister Chrystia Freeland have also recently been sued for the government's actions.
The Trudeau government's use of emergency measures two years ago sadly only provides a glimpse of what the future may hold if CBDCs or similar systems replace the current monetary system with commercial bank money and cash. In Canada, citizens do not want the central bank to proceed with the development of a CBDC. In Canada, citizens want to strengthen the role of cash. In Canada, citizens suspect that the central bank will not listen to them. All while the central bank feverishly continues working on the new system...
"Trust is vital", said Agustín Carstens. But if policy-makers do not pause for a thoughtful reflection even when trust has been utterly shattered as is the case in Canada, are we then not merely dealing with lip service?
And how much trust do these policy-makers then deserve?
-
@ 42342239:1d80db24
2024-03-31 11:23:36
Biologist Stuart Kauffman introduced the concept of the "adjacent possible" in evolutionary biology in 1996. A bacterium cannot suddenly transform into a flamingo; rather, it must rely on small exploratory changes (of the "adjacent possible") if it is ever to become a beautiful pink flying creature. The same principle applies to human societies, all of which exemplify complex systems. It is indeed challenging to transform shivering cave-dwellers into space travelers without numerous intermediate steps.
Imagine a water wheel – in itself, perhaps not such a remarkable invention. Yet the water wheel transformed the hard-to-use energy of water into easily exploitable rotational energy. A little of the "adjacent possible" had now been explored: water mills, hammer forges, sawmills, and textile factories soon emerged. People who had previously ground by hand or threshed with the help of oxen could now spend their time on other things. The principles of the water wheel also formed the basis for wind power. Yes, a multitude of possibilities arose – reminiscent of the rapid development during the Cambrian explosion. When the inventors of bygone times constructed humanity's first water wheel, they thus expanded the "adjacent possible". The experts of old likely sought swift prohibitions. Not long ago, our expert class claimed that the internet was going to be a passing fad, or that it would only have the same modest impact on the economy as the fax machine. For what it's worth, there were even attempts to ban the number zero back in the day.
The pseudonymous creator of Bitcoin, Satoshi Nakamoto, wrote in Bitcoin's whitepaper that "[w]e have proposed a system for electronic transactions without relying on trust." The Bitcoin system enables participants to agree on what is true without needing to trust each other, something that has never been possible before. In light of this, it is worth noting that trust in the federal government in the USA is among the lowest levels measured in almost 70 years. Trust in media is at record lows. Moreover, in countries like the USA, the proportion of people who believe that one can trust "most people" has decreased significantly. "Rebuilding trust" was even the theme of the World Economic Forum at its annual meeting. It is evident, even in the international context, that trust between countries is not at its peak.
Over a fifteen-year period, Bitcoin has enabled electronic transactions without its participants needing to rely on a central authority, or even on each other. This may not sound like a particularly remarkable invention in itself. But like the water wheel, one must acknowledge that new potential seems to have been put in place, potential that is just beginning to be explored. Kauffman's "adjacent possible" has expanded. And despite dogmatic statements to the contrary, no one can know for sure where this might lead.
The discussion of Bitcoin or cryptocurrencies would benefit from greater humility and openness, not only from employees or CEOs of money-laundering banks but also from forecast-failing central bank officials. When Chinese Premier Zhou Enlai was asked in the 1970s about the effects of the French Revolution, he responded that it was "too early to say" - a far wiser answer than the categorical responses of the bureaucratic class. Isn't exploring systems not based on trust exactly what we need at this juncture?
-
@ b12b632c:d9e1ff79
2024-03-23 16:42:49
CASHU AND ECASH ARE EXPERIMENTAL PROJECTS. BY THE VERY NATURE OF CASHU ECASH, IT'S REALLY EASY TO LOSE YOUR SATS THROUGH A LACK OF KNOWLEDGE OF THE SYSTEM MECHANICS. PLEASE, FOR YOUR OWN GOOD, ALWAYS USE SMALL AMOUNTS OF SATS IN THE BEGINNING TO FULLY UNDERSTAND HOW THE SYSTEM WORKS. ECASH IS BASED ON A TRUST RELATIONSHIP BETWEEN YOU AND THE MINT OWNER; PLEASE DON'T TRUST AN ECASH MINT YOU DON'T KNOW. IT IS POSSIBLE TO GENERATE UNLIMITED ECASH TOKENS FROM A MINT; THE ONLY WAY TO VALIDATE THE REAL EXISTENCE OF ECASH TOKENS IS TO DO A MULTIMINT SWAP (BETWEEN MINTS). PLEASE, ALWAYS DO A MULTIMINT SWAP IF YOU RECEIVE SOME ECASH FROM SOMEONE YOU DON'T KNOW/TRUST. NEVER TRUST A MINT YOU DON'T KNOW!
IF YOU WANT TO RUN AN ECASH MINT WITH A BTC LIGHTNING NODE AS BACK-END, PLEASE DEDICATE THAT LN NODE TO YOUR ECASH MINT. BAD MANAGEMENT OF YOUR LN NODE COULD CAUSE PEOPLE TO LOSE THEIR SATS BECAUSE THEY ONCE TRUSTED YOUR MINT AND YOU DID NOT MANAGE THINGS PROPERLY.
What's ecash/Cashu ?
I recently listened to a fascinating interview with calle 👁️⚡👁, invited by the podcast channel What Bitcoin Did, about the (not so new anymore) Cashu protocol.
Cashu is a free and open-source Chaumian ecash project built on Bitcoin, created to let users send and receive ecash over the BTC Lightning network. The main goal of Cashu ecash is to finally give you a "by-design" privacy mechanism that allows anonymous Bitcoin transactions.
Ecash for your privacy.
A Cashu mint does not know who you are, what your balance is, or who you're transacting with. Users of a mint can exchange ecash privately without anyone being able to know who the involved parties are. Bitcoin payments are executed without anyone able to censor specific users.
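To build an intuition for how a mint can sign tokens it never sees, here is a toy Chaumian blind signature in Python. To be clear, this is an RSA-style illustration of the blinding idea only: Cashu itself uses a different scheme (BDHKE over secp256k1), and the numbers below are insecurely small so the example runs instantly.

```python
# Toy RSA-style Chaumian blind signature (illustration only; Cashu uses
# BDHKE over secp256k1 and real keys are thousands of bits, not tens).
import math
import random

# Mint's keypair (toy-sized).
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

m = 42  # token the user wants signed (in reality, a hash of a secret)

# 1) User blinds the token with a random factor r; the mint never sees m.
while True:
    r = random.randrange(2, n)
    if math.gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# 2) Mint signs the blinded value.
blind_sig = pow(blinded, d, n)

# 3) User unblinds; the result is a valid signature on m itself.
sig = (blind_sig * pow(r, -1, n)) % n
assert sig == pow(m, d, n)      # identical to signing m directly
assert pow(sig, e, n) == m      # anyone can verify with the public key
print("mint signature on", m, "is valid")
```

The mint can later recognize its own signature when a token is redeemed, yet it cannot link the redeemed token back to the blinded value it signed; that unlinkability is what the privacy quote above describes.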
Here are some useful links to begin with Cashu ecash :
Github repo: https://github.com/cashubtc
Documentation: https://docs.cashu.space
To support the project: https://docs.cashu.space/contribute
A Proof of Liabilities Scheme for Ecash Mints: https://gist.github.com/callebtc/ed5228d1d8cbaade0104db5d1cf63939
Like NOSTR and its own NIPS, here is the list of the Cashu ecash NUTs (Notation, Usage, and Terminology): https://github.com/cashubtc/nuts?tab=readme-ov-file
I won't explain much more about what Cashu ecash is; you need to figure it out by yourself. That is really important in order to avoid any mistakes you could make with your sats (mistakes you would probably regret).
If you don't have much time, you can check their FAQ right here: https://docs.cashu.space/faq
I strongly advise you to listen to Calle's interviews at @whatbitcoindid to "fully" understand the concept and the Cashu ecash mechanism before using it:
Scaling Bitcoin Privacy with Calle
While I was writing this article, Calle did another really interesting interview with ODELL from CitadelDispatch:
CD120: BITCOIN POWERED CHAUMIAN ECASH WITH CALLE
Which ecash apps?
There are several ways to send/receive ecash tokens: mobile applications like eNuts or Minibits, or web applications like Cashu.me, Nutstash or even npub.cash. On these topics, the BTC Sessions YouTube channel offers high-quality content and easy-to-understand walkthroughs of these applications:
Minibits BTC Wallet: Near Perfect Privacy and Low Fees - FULL TUTORIAL
Cashu Tutorial - Chaumian Ecash On Bitcoin
Unlock Perfect Privacy with eNuts: Instant, Free Bitcoin Transactions Tutorial
Cashu ecash is a very large and complex topic for beginners. I'm still learning every day how it works, and the project moves really fast thanks to its committed developer community. Don't forget to follow their updates on Nostr to know more about the project, but also to get a better understanding of the technical and political implications of Cashu ecash.
There is also a Matrix chat available if you want to participate in the project:
https://matrix.to/#/#cashu:matrix.org
How to self-host your ecash mint with Nutshell
Cashu Nutshell is a Chaumian Ecash wallet and mint for Bitcoin Lightning. Cashu Nutshell is the reference implementation in Python.
Github repo:
https://github.com/cashubtc/nutshell
Today, Nutshell is the most advanced mint in town to self-host your ecash mint. The installation is relatively straightforward with Docker because a docker-compose file is available from the github repo.
Nutshell is not the only Cashu ecash mint server available; you can check other mint servers here:
https://docs.cashu.space/mints
The only "external" requirement is to have a funding source. One back-end funding source where ecash will mint your ecash from your Sats and initiate BTC Lightning Netwok transactions between ecash mints and BTC Ligtning nodes during a multimint swap. Current backend sources supported are: FakeWallet*, LndRestWallet, CoreLightningRestWallet, BlinkWallet, LNbitsWallet, StrikeUSDWallet.
*FakeWallet is able to generate unlimited ecash tokens. Please use it carefully, ecash tokens issued by the FakeWallet can be sent and accepted as legit ecash tokens to other people ecash wallets if they trust your mint. In the other way, if someone send you 2,3M ecash tokens, please don't trust the mint in the first place. You need to force a multimint swap with a BTC LN transaction. If that fails, someone has maybe tried to fool you.
I used a Voltage.cloud BTC LN node instance to back-end my Nutshell ecash mint:
SPOILER: my Nutshell mint is working, but I get an "insufficient balance" error message when I request a multimint swap from wallet.cashu.me or the eNuts application. To make it work, I need to add some sats of liquidity to the node (I can't right now) and open a few channels with good balance capacity. If you don't have an ecash mint capable of doing multimint swaps, you'll only be able to mint ecash on your own mint and send ecash tokens to people who trust your mint. It works, yes, but you need to be able to do multimint swaps if you or anyone else wants to fully profit from the ecash system.
Once you have created your account and got your node, you need to git clone the Nutshell github repo:
git clone https://github.com/cashubtc/nutshell.git
Next, you need to update the docker compose file with your own settings. You can comment out the wallet container if you don't need it.
To generate a private key for your mint, you can use this openssl command:
openssl rand -hex 32
054de2a00a1d8e3038b30e96d26979761315cf48395aa45d866aeef358c91dd1
The CLI Cashu wallet is not needed right now, but I'll show you how to use it at the end of this article. Feel free to comment it out or not.
``` version: "3" services: mint: build: context: . dockerfile: Dockerfile container_name: mint
ports:
- "3338:3338"
environment:
- DEBUG=TRUE
- LOG_LEVEL=DEBUG
- MINT_URL=https://YourMintURL - MINT_HOST=YourMintDomain.tld - MINT_LISTEN_HOST=0.0.0.0 - MINT_LISTEN_PORT=3338 - MINT_PRIVATE_KEY=YourPrivateKeyFromOpenSSL - MINT_INFO_NAME=YourMintInfoName - MINT_INFO_DESCRIPTION=YourShortInfoDesc - MINT_INFO_DESCRIPTION_LONG=YourLongInfoDesc - MINT_LIGHTNING_BACKEND=LndRestWallet #- MINT_LIGHTNING_BACKEND=FakeWallet - MINT_INFO_CONTACT=[["email","YourConctact@email"], ["twitter","@YourTwitter"], ["nostr", "YourNPUB"]] - MINT_INFO_MOTD=Thanks for using my mint! - MINT_LND_REST_ENDPOINT=https://YourVoltageNodeDomain:8080 - MINT_LND_REST_MACAROON=YourDefaultAdminMacaroonBase64 - MINT_MAX_PEG_IN=100000 - MINT_MAX_PEG_OUT=100000 - MINT_PEG_OUT_ONLY=FALSE command: ["poetry", "run", "mint"]
wallet-voltage: build: context: . dockerfile: Dockerfile container_name: wallet-voltage
ports:
- "4448:4448"
depends_on: - nutshell-voltage environment:
- DEBUG=TRUE
- MINT_URL=http://nutshell-voltage:3338
- API_HOST=0.0.0.0 command: ["poetry", "run", "cashu", "-d"]
```
To build, run and see the container logs:
docker compose up -d && docker logs -f mint
```
0.15.1
2024-03-22 14:45:45.490 | WARNING | cashu.lightning.lndrest:__init__:49 - no certificate for lndrest provided, this only works if you have a publicly issued certificate
2024-03-22 14:45:45.557 | INFO | cashu.core.db:__init__:135 - Creating database directory: data/mint
2024-03-22 14:45:45.68 | INFO | Started server process [1]
2024-03-22 14:45:45.69 | INFO | Waiting for application startup.
2024-03-22 14:45:46.12 | INFO | Loaded 0 keysets from database.
2024-03-22 14:45:46.37 | INFO | Current keyset: 003dba9e589023f1
2024-03-22 14:45:46.37 | INFO | Using LndRestWallet backend for method: 'bolt11' and unit: 'sat'
2024-03-22 14:45:46.97 | INFO | Backend balance: 1825000 sat
2024-03-22 14:45:46.97 | INFO | Data dir: /root/.cashu
2024-03-22 14:45:46.97 | INFO | Mint started.
2024-03-22 14:45:46.97 | INFO | Application startup complete.
2024-03-22 14:45:46.98 | INFO | Uvicorn running on http://0.0.0.0:3338 (Press CTRL+C to quit)
2024-03-22 14:45:47.27 | INFO | 172.19.0.22:48528 - "GET /v1/keys HTTP/1.1" 200
2024-03-22 14:45:47.34 | INFO | 172.19.0.22:48544 - "GET /v1/keysets HTTP/1.1" 200
2024-03-22 14:45:47.38 | INFO | 172.19.0.22:48552 - "GET /v1/info HTTP/1.1" 200
```
If you see the line :
Uvicorn running on http://0.0.0.0:3338 (Press CTRL+C to quit)
Nutshell has started correctly.
I won't explain here how to create a reverse proxy for Nutshell; you can find out how in my previous article. Here is the reverse proxy config in Nginx Proxy Manager:
If everything is well configured, when you go to your mint URL (https://yourminturl) you should see this:
At first glance it isn't very helpful, because it looks like nothing is working, but it is. You can also check these URL paths to confirm:
- https://yourminturl/keys and https://yourminturl/keysets
or
- https://yourminturl/v1/keys and https://yourminturl/v1/keysets
Depending on when you read this article, the first URL paths might have been migrated to v1. Here is why:
https://github.com/cashubtc/nuts/pull/55
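If you prefer testing from a terminal rather than a browser, a small Python snippet (assuming the requests package and the current v1 paths) performs the same checks:

```python
# Quick sanity check of a freshly deployed Cashu mint (v1 API paths).
import requests

base = "https://yourminturl"  # replace with your own mint URL

for path in ("/v1/info", "/v1/keys", "/v1/keysets"):
    resp = requests.get(base + path, timeout=10)
    print(path, "->", resp.status_code)
    resp.raise_for_status()   # fail loudly if the mint is misconfigured
    print(resp.json())
```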
The final test is to add your mint to your preferred ecash wallets.
SPOILER: AT THIS POINT, YOU SHOULD KNOW THAT IF YOU CLEAR YOUR BROWSER'S LOCAL CACHE, YOU'LL LOSE YOUR MINTED ECASH TOKENS. IF NOT, PLEASE READ THE DOCUMENTATION AGAIN.
For instance, if we use wallet.cashu.me:
You can go into the "Settings" tab and add your mint :
If everything went fine, you should see this:
You can now mint some ecash from your mint by creating a sats invoice:
You can then scan the displayed QR code with your preferred BTC LN wallet. If everything is OK, you should receive the funds:
Errors may pop up from time to time. If you are curious and want to know what happened, the Cashu wallet has a debug console you can activate: go to the "Settings" page and click "OPEN DEBUG TERMINAL". A little gear icon will be displayed at the bottom of the screen. Click on it, go to settings and enable "Auto Display If Error Occurs" and "Display Extra Information". After enabling these settings, you can close the popup window and leave the gear icon enabled. If any error occurs, the window will open again and show you the error:
Now that you have some sats in your balance, you can try to send some ecash. Open another ecash wallet, Nutstash for instance, in a new window.
Add your mint again :
Return to the Cashu wallet. The ecash token amount you see on the Cashu wallet home page is the total of all the ecash tokens you hold across all connected mints.
Next, click on "Send ecash". Insert the amount of ecash you want to transfer to your other wallet. You can select the mint from which to take the funds by clicking on the little arrow near the currently selected sats balance:
Now click on "SEND TOKENS". That will open a popup with a QR code and a code CONTAINING YOUR ECASH TOKENS (really).
You can now return to Nutstash, click on the "Receive" button and paste the code you got from the Cashu wallet:
Click on "RECEIVE" again:
Congrats, you transferred your first ecash tokens to yourself! 🥜⚡
Sooner or later you may need to transfer ecash tokens between your wallets and mints; there is a feature for exactly that, called "multimint swap".
Before that, if you need new mints, you can check the brand-new website Bitcoinmints.com, which lets you browse existing ecash mints and their ratings:
Don't forget: choose your mint carefully, because you don't know who's behind it.
Let's take a mint and add it to our Cashu wallet:
If you want to transfer, let's say, 20 sats from the Minibits mint to the Bitcointxoko mint, scroll down to the "Multimint swap" section. Select the source mint in "Swap from mint", the target mint in "Swap to mint" and click on "SWAP":
A popup window will appear and request the ecash tokens from the source mint. It will automatically transfer the ecash amount via a Lightning Network transaction and add the funds to your wallet on the target mint. As it's a Lightning Network transaction, you can expect some small fees.
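Under the hood, the swap is roughly three steps: the target mint issues a Lightning invoice (a mint quote), the source mint melts your ecash to pay that invoice, and once it settles, the target mint issues fresh tokens. A sketch of this flow, where all the helper names are hypothetical rather than a real wallet API:

```python
# Hypothetical sketch of a multimint swap; the helper methods stand in
# for real wallet calls and are not an actual Cashu library API.
def multimint_swap(source_wallet, target_wallet, amount_sat):
    # 1) The target mint quotes a Lightning invoice for the amount.
    quote = target_wallet.request_mint_quote(amount_sat)

    # 2) The source mint melts ecash to pay that invoice (plus LN fees).
    source_wallet.melt(quote.invoice)

    # 3) Once the invoice settles, the target mint issues fresh ecash.
    return target_wallet.mint(quote)
```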
If everything is OK with the mints, the swap will be successful and the ecash received.
You can now see that the sats have been transferred (minus 2 sats in fees).
Well done, you did your first multimint swap ! 🥜⚡
One last interesting thing: you can also use the CLI ecash wallet. If you kept the wallet container in the docker compose file, it should already be running.
Here are some commands you can do.
To verify which mint is currently connected :
```
docker exec -it wallet-voltage poetry run cashu info

2024-03-22 21:57:24.91 | DEBUG | cashu.wallet.wallet:init:738 | Wallet initialized
2024-03-22 21:57:24.91 | DEBUG | cashu.wallet.wallet:init:739 | Mint URL: https://nutshell-voltage.fractalized.net
2024-03-22 21:57:24.91 | DEBUG | cashu.wallet.wallet:init:740 | Database: /root/.cashu/wallet
2024-03-22 21:57:24.91 | DEBUG | cashu.wallet.wallet:init:741 | Unit: sat
2024-03-22 21:57:24.92 | DEBUG | cashu.wallet.wallet:init:738 | Wallet initialized
2024-03-22 21:57:24.92 | DEBUG | cashu.wallet.wallet:init:739 | Mint URL: https://nutshell-voltage.fractalized.net
2024-03-22 21:57:24.92 | DEBUG | cashu.wallet.wallet:init:740 | Database: /root/.cashu/wallet
2024-03-22 21:57:24.92 | DEBUG | cashu.wallet.wallet:init:741 | Unit: sat
Version: 0.15.1
Wallet: wallet
Debug: True
Cashu dir: /root/.cashu
Mints:
    - https://nutshell-voltage.fractalized.net
```
To verify your balance :
```
docker exec -it wallet-voltage poetry run cashu balance

2024-03-22 21:59:26.67 | DEBUG | cashu.wallet.wallet:init:738 | Wallet initialized
2024-03-22 21:59:26.67 | DEBUG | cashu.wallet.wallet:init:739 | Mint URL: https://nutshell-voltage.fractalized.net
2024-03-22 21:59:26.67 | DEBUG | cashu.wallet.wallet:init:740 | Database: /root/.cashu/wallet
2024-03-22 21:59:26.67 | DEBUG | cashu.wallet.wallet:init:741 | Unit: sat
2024-03-22 21:59:26.68 | DEBUG | cashu.wallet.wallet:init:738 | Wallet initialized
2024-03-22 21:59:26.68 | DEBUG | cashu.wallet.wallet:init:739 | Mint URL: https://nutshell-voltage.fractalized.net
2024-03-22 21:59:26.68 | DEBUG | cashu.wallet.wallet:init:740 | Database: /root/.cashu/wallet
2024-03-22 21:59:26.68 | DEBUG | cashu.wallet.wallet:init:741 | Unit: sat
Balance: 0 sat
```
To create a sats invoice and get ecash:
```
docker exec -it wallet-voltage poetry run cashu invoice 20

2024-03-22 22:00:59.12 | DEBUG | cashu.wallet.wallet:_load_mint_info:275 | Mint info: name='nutshell.fractalized.net' pubkey='02008469922e985cbc5368ce16adb6ed1aaea0f9ecb21639db4ded2e2ae014a326' version='Nutshell/0.15.1' description='Official Fractalized Mint' description_long='TRUST THE MINT' contact=[['email', 'pastagringo@fractalized.net'], ['twitter', '@pastagringo'], ['nostr', 'npub1ky4kxtyg0uxgw8g5p5mmedh8c8s6sqny6zmaaqj44gv4rk0plaus3m4fd2']] motd='Thanks for using official ecash fractalized mint!' nuts={4: {'methods': [['bolt11', 'sat']], 'disabled': False}, 5: {'methods': [['bolt11', 'sat']], 'disabled': False}, 7: {'supported': True}, 8: {'supported': True}, 9: {'supported': True}, 10: {'supported': True}, 11: {'supported': True}, 12: {'supported': True}}
Balance: 0 sat

Pay invoice to mint 20 sat:

Invoice: lnbc200n1pjlmlumpp5qh68cqlr2afukv9z2zpna3cwa3a0nvla7yuakq7jjqyu7g6y69uqdqqcqzzsxqyz5vqsp5zymmllsqwd40xhmpu76v4r9qq3wcdth93xthrrvt4z5ct3cf69vs9qyyssqcqppurrt5uqap4nggu5tvmrlmqs5guzpy7jgzz8szckx9tug4kr58t4avv4a6437g7542084c6vkvul0ln4uus7yj87rr79qztqldggq0cdfpy

You can use this command to check the invoice: cashu invoice 20 --id 2uVWELhnpFcNeFZj6fWzHjZuIipqyj5R8kM7ZJ9_

Checking invoice .................2024-03-22 22:03:25.27 | DEBUG | cashu.wallet.wallet:verify_proofs_dleq:1103 | Verified incoming DLEQ proofs.
Invoice paid.

Balance: 20 sat
```
To pay an invoice, paste the invoice you received from your own or other people's wallets:
```
docker exec -it wallet-voltage poetry run cashu pay lnbc150n1pjluqzhpp5rjezkdtt8rjth4vqsvm50xwxtelxjvkq90lf9tu2thsv2kcqe6vqdq2f38xy6t5wvcqzzsxqrpcgsp58q9sqkpu0c6s8hq5pey8ls863xmjykkumxnd8hff3q4fvxzyh0ys9qyyssq26ytxay6up54useezjgqm3cxxljvqw5vq2e94ru7ytqc0al74hr4nt5cwpuysgyq8u25xx5la43mx4ralf3mq2425xmvhjzvwzqp54gp0e3t8e

2024-03-22 22:04:37.23 | DEBUG | cashu.wallet.wallet:_load_mint_info:275 | Mint info: name='nutshell.fractalized.net' pubkey='02008469922e985cbc5368ce16adb6ed1aaea0f9ecb21639db4ded2e2ae014a326' version='Nutshell/0.15.1' description='Official Fractalized Mint' description_long='TRUST THE MINT' contact=[['email', 'pastagringo@fractalized.net'], ['twitter', '@pastagringo'], ['nostr', 'npub1ky4kxtyg0uxgw8g5p5mmedh8c8s6sqny6zmaaqj44gv4rk0plaus3m4fd2']] motd='Thanks for using official ecash fractalized mint!' nuts={4: {'methods': [['bolt11', 'sat']], 'disabled': False}, 5: {'methods': [['bolt11', 'sat']], 'disabled': False}, 7: {'supported': True}, 8: {'supported': True}, 9: {'supported': True}, 10: {'supported': True}, 11: {'supported': True}, 12: {'supported': True}}
Balance: 20 sat
2024-03-22 22:04:37.45 | DEBUG | cashu.wallet.wallet:get_pay_amount_with_fees:1529 | Mint wants 0 sat as fee reserve.
2024-03-22 22:04:37.45 | DEBUG | cashu.wallet.cli.cli:pay:189 | Quote: quote='YpNkb5f6WVT_5ivfQN1OnPDwdHwa_VhfbeKKbBAB' amount=15 fee_reserve=0 paid=False expiry=1711146847
Pay 15 sat? [Y/n]: y
Paying Lightning invoice ...2024-03-22 22:04:41.13 | DEBUG | cashu.wallet.wallet:split:613 | Calling split. POST /v1/swap
2024-03-22 22:04:41.21 | DEBUG | cashu.wallet.wallet:verify_proofs_dleq:1103 | Verified incoming DLEQ proofs.
Error paying invoice: Mint Error: Lightning payment unsuccessful. insufficient_balance (Code: 20000)
```
Yes, it didn't work. That's what I told you about earlier; it would work with a well-configured and well-balanced Lightning node.
That's all! You should now be able to use ecash as you want! 🥜⚡
See you on NOSTR! 🤖⚡
PastaGringo
-
@ ee11a5df:b76c4e49
2024-03-22 23:49:09
Implementing The Gossip Model
version 2 (2024-03-23)
Introduction
History
The gossip model is a general concept that allows clients to dynamically follow the content of people, without specifying which relay. The clients have to figure out where each person puts their content.
Before NIP-65, the gossip client did this in multiple ways:
- Checking kind-3 contents, which had relay lists for configuring some clients (originally Astral and Damus), and recognizing that wherever they were writing, our client could read from.
- NIP-05, specifying a list of relays in the `nostr.json` file. I added this to NIP-35, which got merged down into NIP-05.
- Recommended relay URLs that are found in 'p' tags.
- Users manually making the association.
- History of where events happened to have been found. Whenever an event came in, we associated the author with the relay.
Each of these associations was given a score (recommended relay URLs are third-party info, so they got a low score).
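A sketch of what that scoring might look like; the weights are illustrative, not the values any actual client uses:

```python
# Accumulate scored author->relay associations from different hint sources.
from collections import defaultdict

SCORES = {
    "kind3_content": 3,   # relay list found in old kind-3 contents
    "nip05_json": 3,      # relays listed in the NIP-05 nostr.json
    "p_tag_hint": 1,      # recommended relay URL in a 'p' tag (3rd-party info)
    "manual": 5,          # the user made the association themselves
    "seen_on": 2,         # we actually received the author's event there
}

relay_scores = defaultdict(lambda: defaultdict(int))  # pubkey -> relay -> score

def associate(pubkey, relay_url, source):
    relay_scores[pubkey][relay_url] += SCORES[source]

associate("npub1alice...", "wss://relay.example.com", "p_tag_hint")
associate("npub1alice...", "wss://relay.example.com", "seen_on")

# Prefer the highest-scoring relays when fetching alice's events.
best_first = sorted(relay_scores["npub1alice..."].items(), key=lambda kv: -kv[1])
print(best_first)
```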
Later, NIP-65 made a new kind of relay list where someone could advertise to others which relays they use. The flag "write" is now called an OUTBOX, and the flag "read" is now called an INBOX.
The idea of inboxes came about during the development of NIP-65. They are a way to send an event to a person to make sure they get it... because putting it on your own OUTBOX doesn't guarantee they will read it -- they may not follow you.
The outbox model is the use of NIP-65. It is a subset of the gossip model, which uses every other resource at its disposal.
Rationale
The gossip model keeps nostr decentralized. If all the (major) clients were using it, people could spin up small relays for both INBOX and OUTBOX and still be fully connected, have their posts read, and get replies and DMs. This is not to say that many people should spin up small relays. But the task of being decentralized necessitates that people must be able to spin up their own relay in case everybody else is censoring them. We must make it possible. In reality, congregating around 30 or so popular relays as we do today is not a problem. Not until somebody becomes very unpopular with bitcoiners (it will probably be a shitcoiner), and then that person is going to need to leave those popular relays and that person shouldn't lose their followers or connectivity in any way when they do.
A lot more rationale has been discussed elsewhere and right now I want to move on to implementation advice.
Implementation Advice
Read NIP-65
NIP-65 will contain great advice on which relays to consult for which purposes. This post does not supersede NIP-65. NIP-65 may be getting some smallish changes, mostly the addition of a private inbox for DMs, but also changes to whether you should read or write to just some or all of a set of relays.
How often to fetch kind-10002 relay lists for someone
This is up to you. Refreshing them every hour seems reasonable to me. Keeping track of when you last checked so you can check again every hour is a good idea.
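A minimal sketch of that bookkeeping (an hourly window; the names are illustrative):

```python
# Track when we last fetched each person's kind-10002 relay list.
import time

REFRESH_SECS = 3600  # re-check roughly once an hour
last_checked = {}    # pubkey -> unix timestamp of the last fetch

def due_for_refresh(pubkey):
    return time.time() - last_checked.get(pubkey, 0.0) >= REFRESH_SECS

def mark_checked(pubkey):
    last_checked[pubkey] = time.time()
```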
Where to fetch events from
If your user follows another user (call them jack), then you should fetch jack's events from jack's OUTBOX relays. I think it's a good idea to use 2 of those relays. If one of those choices fails (errors), then keep trying until you get 2 of them that worked. This gives some redundancy in case one of them is censoring. You can bump that number up to 3 or 4, but more than that is probably just wasting bandwidth.
To find events tagging your user, look in your user's INBOX relays for those. In this case, look into all of them because some clients will only write to some of them (even though that is no longer advised).
Picking relays dynamically
Since your user follows many other users, it is very useful to find a small subset of all of their OUTBOX relays that covers everybody followed. I wrote some code to do this (it is used by gossip) that you can look at for an example.
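For illustration, here is a greedy version of that relay-picking idea in Python. It is a sketch only (the gossip client's actual code is Rust and more involved), but it shows the shape of the problem, which is a small set-cover:

```python
# Greedily choose relays so each followed person is covered by up to
# `redundancy` of their own OUTBOX relays.
def pick_relays(outboxes, redundancy=2):
    """outboxes maps pubkey -> set of that person's OUTBOX relay URLs."""
    need = {pk: min(redundancy, len(urls)) for pk, urls in outboxes.items()}
    assignments = {}  # relay URL -> set of pubkeys we'll ask for there
    while any(need.values()):
        # Count how many still-needy people each relay could newly serve.
        counts = {}
        for pk, urls in outboxes.items():
            if need[pk]:
                for url in urls:
                    if pk not in assignments.get(url, set()):
                        counts[url] = counts.get(url, 0) + 1
        if not counts:
            break  # remaining people have no relays left to try
        best = max(counts, key=counts.get)
        serves = assignments.setdefault(best, set())
        for pk, urls in outboxes.items():
            if need[pk] and best in urls and pk not in serves:
                serves.add(pk)
                need[pk] -= 1
    return assignments

follows = {
    "alice": {"wss://a.example", "wss://shared.example"},
    "bob": {"wss://shared.example", "wss://b.example"},
}
print(pick_relays(follows, redundancy=1))
# -> {'wss://shared.example': {'alice', 'bob'}}
```

With redundancy=2, each followed person ends up assigned to two distinct relays whenever they advertise that many, which gives the censorship-resistance redundancy described above.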
Where to post events to
Post all events (except DMs) to all of your user's OUTBOX relays. Also post the events to all the INBOX relays of anybody that was tagged or mentioned in the contents in a nostr bech32 link (if desired). That way all these mentioned people are aware of the reply (or quote or repost).
DMs should be posted only to INBOX relays (in the future, to PRIVATE INBOX relays). You should post it to your own INBOX relays also, because you'll want a record of the conversation. In this way, you can see all your DMs inbound and outbound at your INBOX relay.
Where to publish your user's kind-10002 event to
This event was designed to be small and not require moderation, plus it is replaceable so there is only one per user. For this reason, at the moment, just spread it around to lots of relays especially the most popular relays.
For example, the gossip client automatically determines which relays to publish to based on whether they seem to be working (several hundred) and does so in batches of 10.
How to find replies
If all clients used the gossip model, you could find all the replies to a post in the author's INBOX relays, by asking for any event with an 'e' tag referencing the event you want replies to... because gossip-model clients will publish them there.
But given the non-gossip-model clients, you should also look where the event was seen and look on those relays too.
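Concretely, that lookup is an ordinary NIP-01 subscription filtering on the 'e' tag, sent to the author's INBOX relays (and, per the caveat above, to relays where the event was seen). The event id and subscription id here are placeholders:

```python
# Build a NIP-01 REQ message asking for replies to a given event.
import json

event_id = "abc123..."  # hex id of the event we want replies to
reply_filter = {"kinds": [1], "#e": [event_id]}

print(json.dumps(["REQ", "replies-sub", reply_filter]))
# -> ["REQ", "replies-sub", {"kinds": [1], "#e": ["abc123..."]}]
```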
Clobbering issues
Please read your user's kind-10002 event before clobbering it. You should look in many places to make sure you didn't miss the newest one.
If the old relay list had tags you don't understand (e.g. neither "read" nor "write"), then preserve them.
How users should pick relays
Today, nostr relays are not uniform. They have all kinds of different rule-sets and purposes. We severely lack a way to advise non-technical users as to which relays make good OUTBOX relays and which ones make good INBOX relays. But you are a dev; you can figure that out pretty well. For example, INBOX relays must accept notes from anyone, meaning they can't be paid-subscription relays.
Bandwidth isn't a big issue
The outbox model doesn't require excessive bandwidth when done right. You shouldn't be downloading the same note many times... only 2-4 times depending on the level of redundancy your user wants.
Downloading 1000 events from 100 relays is in theory the same amount of data as downloading 1000 events from 1 relay.
But in practice, due to redundancy concerns, you will end up downloading 2000-3000 events from those 100 relays instead of just the 1000 you would in a single relay situation. Remember, per person followed, you will only ask for their events from 2-4 relays, not from all 100 relays!!!
Also in practice, the cost of opening and maintaining 100 network connections is more than the cost of opening and maintaining just 1. But this isn't usually a big deal unless...
Crypto overhead on Low-Power Clients
Verifying Schnorr signatures in the secp256k1 cryptosystem is not cheap. Setting up SSL key exchange is not cheap either. But most clients will do a lot more event signature validations than they will SSL setups.
For this reason, connecting to 50-100 relays is NOT hugely expensive for clients that are already verifying event signatures, as the number of events far surpasses the number of relay connections.
But for low-power clients that can't do event signature verification, there is a case for them not doing a lot of SSL setups either. Those clients would benefit from a different architecture, where half of the client was on a more powerful machine acting as a proxy for the low-power half of the client. These halves need to trust each other, so perhaps this isn't a good architecture for a business relationship, but I don't know what else to say about the low-power client situation.
Unsafe relays
Some people complain that the outbox model directs their client to relays that their user has not approved. I don't think it is a big deal, as such users can use VPNs or Tor if they need privacy. But for such users that still have concerns, they may wish to use clients that give them control over this. As a client developer you can choose whether to offer this feature or not.
The gossip client allows users to require whitelisting for connecting to new relays and for AUTHing to relays.
-
@ 42342239:1d80db24
2024-03-21 09:49:01
It has become increasingly evident that our financial system has started to undermine our constitutionally guaranteed freedoms and rights. Payment giants like PayPal, Mastercard, and Visa sometimes block the ability to donate money. Individuals, companies, and associations lose bank accounts — or struggle to open new ones. In bank offices, people nowadays risk undergoing something resembling a cross-examination. The regulations are becoming so cumbersome that their mere presence risks tarnishing the banks' reputation.
The rules are so complex that even within the same bank, different compliance officers can provide different answers to the same question! There are even departments where some of the compliance officers are reluctant to provide written responses and prefer to answer questions over an unrecorded phone call. Last year's "Corporate Lawyer of the Year" in Sweden recently complained about troublesome bureaucracy, and that's from the perspective of a very large corporation. We may not even fathom how smaller businesses — the keys to a nation's prosperity — experience it.
Where do all these rules come from?
Where do all these rules come from, and how well do they work? Today's regulations on money laundering (AML) and customer due diligence (KYC - know your customer) primarily originate from a G7 meeting in the summer of 1989. (The G7 comprises the seven advanced economies: the USA, Canada, the UK, Germany, France, Italy, and Japan, along with the EU.) During that meeting, the intergovernmental organization FATF (Financial Action Task Force) was established with the aim of combating organized crime, especially drug trafficking. Since then, its mandate has expanded to include fighting money laundering, terrorist financing, and the financing of the proliferation of weapons of mass destruction(!). One might envisage the rules soon being aimed against proliferation of GPUs (Graphics Processing Units used for AI/ML). FATF, dominated by the USA, provides frameworks and recommendations for countries to follow. Despite its influence, the organization often goes unnoticed. Had you heard of it?
FATF offered countries "a deal they couldn't refuse"
On the advice of the USA and G7 countries, the organization decided to begin grading countries in "blacklists" and "grey lists" in 2000, naming countries that did not comply with its recommendations. The purpose was to apply "pressure" to these countries if they wanted to "retain their position in the global economy." The countries were offered a deal they couldn't refuse, and the number of member countries rapidly increased. Threatening with financial sanctions in this manner has even been referred to as "extraterritorial bullying." Some at the time even argued that the process violated international law.
If your local Financial Supervisory Authority (FSA) were to fail in enforcing compliance with FATF's many checklists among financial institutions, the risk of your country and its banks being barred from the US-dominated financial markets would loom large. This could have disastrous consequences.
A cost-benefit analysis of AML and KYC regulations
Economists use cost-benefit analysis to determine whether an action or a policy is successful. Let's see what such an analysis reveals.
What are the benefits (or revenues) after almost 35 years of more and more rules and regulations? The United Nations Office on Drugs and Crime estimated that only 0.2% of criminal proceeds are confiscated. Other estimates suggest a success rate from such anti-money laundering rules of 0.07% — a rounding error for organized crime. Europol expects to recover 1.2 billion euros annually, equivalent to about 1% of the revenue generated in the European drug market (110 billion euros). However, the percentage may be considerably lower, as the size of the drug market is likely underestimated. Moreover, there are many more "criminal industries" than just the drug trade; human trafficking is merely one example. In other words, criminal organizations retain at least 99%, perhaps even 99.93%, of their profits, despite all the cumbersome rules regarding money laundering and customer due diligence.
What constitutes the total cost of this bureaucratic activity, costs that eventually burden taxpayers and households via higher fees? Within Europe, private financial firms are estimated to spend approximately 144 billion euros on compliance. According to some estimates, the global cost is twice as high, perhaps even eight times as much.
For Europe, the cost may thus be about 120 times (144/1.2) higher than the revenues from these measures. These "compliance costs" bizarrely exceed the total profits from the drug market, as one researcher put it. Even though the calculations are uncertain, it is challenging — perhaps impossible — to legitimize these regulations from a cost-benefit perspective.
But it doesn't end there, unfortunately. The cost of maintaining this compliance circus, with around 80 international organizations, thousands of authorities, far more employees, and all this across hundreds of countries, remains a mystery. But it's unlikely to be cheap.
The purpose of a system is what it does
In Economic Possibilities for our Grandchildren (1930), John Maynard Keynes foresaw that thanks to technological development, we could have had a 15-hour workweek by now. This has clearly not happened. Perhaps jobs have been created that are entirely meaningless? Anthropologist David Graeber argued precisely this in Bullshit Jobs in 2018. In that case, a significant number of people spend their entire working lives performing tasks they suspect deep down don't need to be done.
"The purpose of a system is what it does" is a heuristic coined by Stafford Beer. He observed there is "no point in claiming that the purpose of a system is to do what it constantly fails to do. What the current regulatory regime fails to do is combat criminal organizations. Nor does it seem to prevent banks from laundering money as never before, or from providing banking services to sex-offending traffickers
What the current regulatory regime does do, is: i) create armies of meaningless jobs, ii) thereby undermining mental health as well as economic prosperity, while iii) undermining our freedom and rights.
What does this say about the purpose of the system?
-
@ ee11a5df:b76c4e49
2024-03-21 00:28:47
I'm glad to see more activity and discussion about the gossip model. Glad to see fiatjaf and Jack posting about it, as well as many developers pitching in in the replies. There are difficult problems we need to overcome, and finding notes while remaining decentralized without huge note-copying overhead was just the first. While the gossip model (including the outbox model, which is just the NIP-65 part) completely eliminates the need to copy notes around to lots of relays, and keeps us decentralized, it brings about its own set of new problems. No community is ever of the same mind on any issue, and this issue is no different. We have a lot of divergent opinions. This note will be my updated thoughts on these topics.
COPYING TO CENTRAL RELAYS IS A NON-STARTER: The idea that you can configure your client to use a few popular "centralized" relays and everybody will copy notes into those central relays is a non-starter. It destroys the entire raison d'être of nostr. I've heard people say that more decentralization isn't our biggest issue. But decentralization is THE reason nostr exists at all, so we need to make sure we live up to the hype. Otherwise we may as well just all join Bluesky. It has other problems too: the central relays get overloaded, and the notes get copied to too many relays, which is both space-inefficient and network-bandwidth-inefficient.
ISSUE 1: Which notes should I fetch from which relays? This is described pretty well now in NIP-65. But that is only the "outbox" model part. The "gossip model" part is to also work out what relays work for people who don't publish a relay list.
ISSUE 2: Automatic failover. Apparently Peter Todd's definition of decentralized includes a concept of automatic failover, where new resources are brought up and users don't need to do anything. Besides this not being part of any definition of decentralization I have ever heard of, we kind of have this. If a user has 5 outboxes, and 3 fail, everything still works. Redundancy is built in. No user intervention is needed in most cases, at least in the short term. But we also don't have any notion of administrators who can fix this behind the scenes for the users. Users are sovereign, and that means they have total control, but also take on some responsibility. This is obvious when it comes to keypair management, but it goes further. Users have to manage where they post and where they accept incoming notes, and when those relays fail to serve them they have to change providers. Putting the users in charge, and not having administrators, is kinda necessary to be truly decentralized.
ISSUE 3: Connecting to unvetted relays feels unsafe. It might even be the NSA tracking you! First off, this happens with your web browser all the time: you go visit a web page and it instructs your browser to fetch a font from google. If you don't like it, you can use uBlock origin and manage it manually. In the nostr world, if you don't like it, you can use a client that puts you more in control of this. The gossip client for example has options for whether you want to manually approve relay connections and AUTHs, just once or always, and always lets you change your mind later. If you turn those options on, initially it is a giant wall of approval requests... but that situation resolves rather quickly. I've been running with these options on for a long time now, and only about once a week do I have to make a decision for a new relay.
But these features aren't really necessary for the vast majority of users who don't care if a relay knows their IP address. Those users use VPNs or Tor when they want to be anonymous, and don't bother when they don't care (me included).
ISSUE 4: Mobile phone clients may find the gossip model too costly in terms of battery life. Bandwidth is actually not a problem: under the gossip model (if done correctly) events for user P are only downloaded from N relays (default for gossip client is N=2), which in general is FEWER events retrieved than other models which download the same event maybe 8 or more times. Rather, the problem here is the large number of network connections and in particular, the large number of SSL setups and teardowns. If it weren't for SSL, this wouldn't be much of a problem. But setting up and tearing down SSL on 50 simultaneous connections that drop and pop up somewhat frequently is a battery drain.
The solution to this that makes the most sense to me is to have a client proxy. What I mean by that is a piece of software on a server in a data centre. The client proxy would be a headless nostr client that uses the gossip model and acts on behalf of the phone client. The phone client doesn't even have to be a nostr client, but it might as well be a nostr client that just connects to this fixed proxy to read and write all of its events. Now the SSL connection issue is solved. These proxies can serve many clients and have local storage, whereas the phones might not even need local storage. Since very few users will set up such things for themselves, this is a business opportunity for people, and a better business opportunity IMHO than running a paid-for relay. This doesn't decentralize nostr as there can be many of these proxies. It does however require a trust relationship between the phone client and the proxy.
ISSUE 5: Personal relays still need moderation. I wrongly thought for a very long time that personal relays could act as personal OUTBOXes and personal INBOXes without needing moderation. Recently it became clear to me that clients should probably read from other people's INBOXes to find replies to events written by the user of that INBOX (which outbox model clients should be putting into that INBOX). If that is happening, then personal relays will need to serve to the public events that were just put there by the public, thus exposing them to abuse. I'm greatly disappointed to come to this realization and not quite settled about it yet, but I thought I had better make this known.
-
@ b12b632c:d9e1ff79
2024-02-19 19:18:46
The Nostr decentralized network is growing exponentially day by day, and new stuff comes out every day. We can now use a NIP46 server to proxify our nsec key, to avoid using it to log in on Nostr websites and possibly leaking it, by mistake or to malicious persons. That's the point of this tutorial: setting up the NIP46 server Nsec.app with its own Nostr relay. You'll be able to use it for yourself and let other people use it; all data is stored locally in your internet browser. It's a non-custodial application, like wallets!
It's nearly a perfect solution (because nothing is perfect, as we know), and it makes the daily use of Nostr keys much more secure and, you'll see, much more sexy! Look:
Nsec.app is not the only NIP46 server; in fact, @PABLOF7z was the first to create one, called nsecBunker. You can also self-host nsecBunkerd; you can find a detailed explanation here: nsecbunkerd. I may write a how-to on self-hosting nsecBunkerd soon.
If you want more information about this bunker and what's behind this tutorial, you can check these links:
A few things before beginning
Spoiler: I didn't automate everything. The goal here is not to give you a full one-click installation process; it's more to let you see and understand all the little things to configure, and how Nsec.app and NIP46 work. There is a little bit of work, yes, but you'll be happy when it works! Believe me.
Before entering the battlefield, you must have a few things:
- A working VPS with direct access to the internet, or a computer at home (but NAT will certainly make your life hell; use a VPS instead, on DigitalOcean, Linode, Scaleway, as you wish).
- A web domain that you own, because we need several DNS A records (you can choose the subdomains you like): domain.tld, relay.domain.tld, noauth.domain.tld and noauthd.domain.tld.
- Some programs already installed: git, docker, docker-compose, nano/vi.
If you tick all the boxes, we can move forward!
Let's install everything!
I built a repo with a docker-compose file including all the required stuff to make the bunker work:
- Nsec.app front-end: noauth
- Nsec.app back-end: noauthd
- Nostr relay: strfry
- Nostr NIP05: easy-nip5
The first thing to do is to clone the "nsec-app-docker" repo:
$ git clone https://github.com/PastaGringo/nsec-app-docker.git
$ cd nsec-app-docker
When it's done, you'll have to do several things to make it work. 1) You need to generate some keys for the web-push library (keep them for later):
```
$ docker run pastagringo/web-push-generate-keys

Generating your web-push keys...

Your private key : rQeqFIYKkInRqBSR3c5iTE3IqBRsfvbq_R4hbFHvywE
Your public key : BFW4TA-lUvCq_az5fuQQAjCi-276wyeGUSnUx4UbGaPPJwEemUqp3Rr3oTnxbf0d4IYJi5mxUJOY4KR3ZTi3hVc
```
2) Generate a new key pair (nsec/npub) for the NIP46 server by clicking on "Generate new key" on the NostrTool website: nostrtool.com.
You should have something like this :
```console
Nostr private key (nsec): keep this -> nsec1zcyanx8zptarrmfmefr627zccrug3q2vhpfnzucq78357hshs72qecvxk6
Nostr private key (hex): 1609d998e20afa31ed3bca47a57858c0f888814cb853317300f1e34f5e178794
Nostr public key (npub): npub1ywzwtnzeh64l560a9j9q5h64pf4wvencv2nn0x4h0zw2x76g8vrq68cmyz
Nostr public key (hex): keep this -> 2384e5cc59beabfa69fd2c8a0a5f550a6ae6667862a7379ab7789ca37b483b06
```
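If you'd rather generate the key pair locally than trust a website, here is a sketch in Python, assuming the coincurve and bech32 packages (NIP-19 encodes both keys with bech32):

```python
# Generate a Nostr key pair locally (pip install coincurve bech32).
import secrets
from coincurve import PrivateKey
from bech32 import bech32_encode, convertbits

def to_bech32(hrp, raw):
    return bech32_encode(hrp, convertbits(raw, 8, 5))

secret = secrets.token_bytes(32)
# Compressed pubkey is 33 bytes; Nostr uses the 32-byte x-only part.
xonly_pub = PrivateKey(secret).public_key.format(compressed=True)[1:]

print("nsec:", to_bech32("nsec", secret))
print("npub:", to_bech32("npub", xonly_pub))
print("hex pubkey:", xonly_pub.hex())  # this is what goes into nostr.json
```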
You need to keep the Nostr private key (nsec) and the Nostr public key (npub). 3) Open (nano/vi) the .env file located in the current folder and fill in all the required info:
```console
# traefik
EMAIL=pastagringo@fractalized.net <-- replace with your own email
NSEC_ROOT_DOMAIN=plebes.ovh <-- replace with your own domain
RELAY_DOMAIN=relay.plebes.ovh <-- replace with your own relay domain
NOAUTH_DOMAIN=noauth.plebes.ovh <-- replace with your own noauth domain
NOAUTHD_DOMAIN=noauthd.plebes.ovh <-- replace with your own noauthd domain

# noauth
APP_WEB_PUSH_PUBKEY=BGVa7TMQus_KVn7tAwPkpwnU_bpr1i6B7D_3TT-AwkPlPd5fNcZsoCkJkJylVOn7kZ-9JZLpyOmt7U9rAtC-zeg <-- replace with your own web push public key
APP_NOAUTHD_URL=https://$NOAUTHD_DOMAIN
APP_DOMAIN=$NSEC_ROOT_DOMAIN
APP_RELAY=wss://$RELAY_DOMAIN

# noauthd
PUSH_PUBKEY=$APP_WEB_PUSH_PUBKEY
PUSH_SECRET=_Sz8wgp56KERD5R4Zj5rX_owrWQGyHDyY4Pbf5vnFU0 <-- replace with your own web push private key
ORIGIN=https://$NOAUTHD_DOMAIN
DATABASE_URL=file:./prod.db
BUNKER_NSEC=nsec1f43635rzv6lsazzsl3hfsrum9u8chn3pyjez5qx0ypxl28lcar2suy6hgn <-- replace with your bunker nsec key
BUNKER_RELAY=wss://$RELAY_DOMAIN
BUNKER_DOMAIN=$NSEC_ROOT_DOMAIN
BUNKER_ORIGIN=https://$NOAUTH_DOMAIN
```
Be aware of noauth versus noauthd (note the final d). Next, save and quit. 4) You now need to modify the nostr.json file used for the NIP05 to indicate which relay your bunker will use. You need to set the bunker's HEX PUBLIC KEY (I replaced the info with the one I got from NostrTool earlier):
```console
nano easy-nip5/nostr.json
```
```console
{
  "names": {
    "_": "ServerHexPubKey"
  },
  "nip46": {
    "ServerHexPubKey": [
      "wss://ReplaceWithYourRelayDomain"
    ]
  }
}
```
5) You can now run the docker compose file with the following command (the first run can take a bit of time because the noauth container needs to build the npm project):
```console
$ docker compose up -d
```
6) Before creating our first user in the Nostr bunker, we need to test that all the required services are up. You should have:
noauth :
noauthd :
```console
CANNOT GET /
```
https://noauthd.yourdomain.tld/name :
console { "error": "Specify npub" }
https://yourdomain.tld/.well-known/nostr.json :
console { "names": { "_": "ServerHexPubKey" }, "nip46": { "ServerHexPubKey": [ "wss://ReplaceWithYourRelayDomain" ] } }
If you have everything working, we can try to create a new user!
7) Connect to noauth and click on "Get Started" :
At the bottom of the screen, click on "Sign up":
Fill in a username and click on "Create account":
If everything has been correctly configured, you should see a pop-up message saying "Account created for "XXXX"":
PS: to check that noauthd is serving the nostr.json file correctly, you can visit this URL: https://yourdomain.tld/.well-known/nostr.json?name=YourUser. You should see that the user now has NIP05/NIP46 entries:
If the user creation failed, you'll see a red pop-up saying "Something went wrong!" :
To understand what happened, you need to inspect the web page to find the error :
For the example, I tried to recreate a user "jack" which had already been created. You may find a lot of different errors depending on the configuration you made. You may find that the relay is not reachable over wss://, that noauthd is not accessible, etc. Every answer should be in this place.
To completely finish the tests, you need to enable browser notifications, otherwise you won't see the pop-up when you log in on a Nostr web client. Do this by clicking on "Enable background service":
You need to click on allow notifications :
You should see this green confirmation popup at the top right of your screen:
Well... Everything works now !
8) Now try to use your brand new proxified npub by clicking on "Connect App" and copying your bunker URL:
You can now go, for instance, to the NoStrudel Nostr web client and log in with it. Select the relays you want ("Popular" is better; if you don't have multiple relays configured on your Nostr profile, avoid "Login to use your relay"):
Click on "Sign in" :
Click on "Show Advanced" :
Click on "Nostr connect / Bunker" :
Paste your bunker URL and click on "Connect" :
The first time, your browser (Chrome here) may block the popup; you need to allow it:
If the browser blocked the popup, NoStrudel will wait for your confirmation to log in:
You have to go back to your bunker URL to allow the NoStrudel connection request by clicking on "Connect":
The first connections may be a bit annoying with all the popup authorizations, but once done, you can forget about them; it will connect without any issue. Congrats! You are connected to NoStrudel with a proxified npub key! ⚡
You can check which applications you gave permissions to, and the activity history, in noauth by selecting your user:
If you want to import your real Nostr profile, the one that everyone knows, you can import your nsec key by adding a new account and selecting "Import key", then adding your precious nsec key (reminder: your nsec key stays in your browser! The noauth provider won't have access to it!):
You can see that my profile picture has been retrieved and updated in noauth:
I can now use this new pubkey attached to my nsec.app server to log in to NoStrudel again:
Accounts/keys management in noauthd
You can list the created keys in your bunker with this command (CTRL+C to exit):
```console
$ docker exec -it noauthd node src/index.js list_names
[ '/usr/local/bin/node', '/noauthd/src/index.js', 'list_names' ]
1 jack npub1hjdw2y0t44q4znzal2nxy7vwmpv3qwrreu48uy5afqhxkw6d2nhsxt7x6u 1708173927920n
2 peter npub1yp752u5tr5v5u74kadrzgfjz2lsmyz8dyaxkdp4e0ptmaul4cyxsvpzzjz 1708174748972n
3 john npub1xw45yuvh5c73sc5fmmc3vf2zvmtrzdmz4g2u3p2j8zcgc0ktr8msdz6evs 1708174778968n
4 johndoe npub1xsng8c0lp9dtuan6tkdljy9q9fjdxkphvhj93eau07rxugrheu2s38fuhr 1708174831905n
```
If you want to delete someone's key, you have to do:
```console
$ docker exec -it noauthd node src/index.js delete_name johndoe
[ '/usr/local/bin/node', '/noauthd/src/index.js', 'delete_name', 'johndoe' ]
deleted johndoe {
  id: 4,
  name: 'johndoe',
  npub: 'npub1xsng8c0lp9dtuan6tkdljy9q9fjdxkphvhj93eau07rxugrheu2s38fuhr',
  timestamp: 1708174831905n
}

$ docker exec -it noauthd node src/index.js list_names
[ '/usr/local/bin/node', '/noauthd/src/index.js', 'list_names' ]
1 jack npub1hjdw2y0t44q4znzal2nxy7vwmpv3qwrreu48uy5afqhxkw6d2nhsxt7x6u 1708173927920n
2 peter npub1yp752u5tr5v5u74kadrzgfjz2lsmyz8dyaxkdp4e0ptmaul4cyxsvpzzjz 1708174748972n
3 john npub1xw45yuvh5c73sc5fmmc3vf2zvmtrzdmz4g2u3p2j8zcgc0ktr8msdz6evs 1708174778968n
```
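A minimal wrapper for those two commands could look like this (a hypothetical convenience script; adjust the container name to your own setup):

```python
# bunker_keys.py - tiny wrapper around the noauthd CLI shown above.
import subprocess
import sys

def noauthd(*args):
    subprocess.run(
        ["docker", "exec", "-it", "noauthd", "node", "src/index.js", *args],
        check=True,
    )

if __name__ == "__main__":
    cmd = sys.argv[1] if len(sys.argv) > 1 else "list"
    if cmd == "list":
        noauthd("list_names")
    elif cmd == "delete" and len(sys.argv) > 2:
        noauthd("delete_name", sys.argv[2])
    else:
        print("usage: python bunker_keys.py [list | delete <name>]")
```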
It would be pretty easy to create a script to handle key management, but I think @Brugeman may create a web interface for that. Noauth is still very young; changes are committed every day to fix and enhance the application! As """everything""" is stored locally in your browser, you have to clear the cache of your noauth bunker URL to clean everything. This Chrome extension is very useful for that. Check these settings in the extension options:
You can now enjoy even more Nostr ⚡ See you soon in another Fractalized story!
-
@ ee11a5df:b76c4e49
2023-11-09 05:20:37A lot of terms have been bandied about regarding relay models: Gossip relay model, outbox relay model, and inbox relay model. But this term "relay model" bothers me. It sounds stuffy and formal and doesn't actually describe what we are talking about very well. Also, people have suggested maybe there are other relay models. So I thought maybe we should rethink this all from first principles. That is what this blog post attempts to do.
Nostr is notes and other stuff transmitted by relays. A client puts an event onto a relay, and subsequently another client reads that event. OK, strictly speaking it could be the same client. Strictly speaking it could even be that no other client reads the event, that the event was intended for the relay (think about nostr connect). But in general, the reason we put events on relays is for other clients to read them.
Given that fact, I see two ways this can occur:
1) The reader reads the event from the same relay that the writer wrote the event to (this I will call relay rendezvous), or
2) The event was copied between relays by something.
This second solution is perfectly viable, but it is less scalable and less immediate, as it requires copies, which means that resources will be consumed more rapidly than if we can come up with workable relay rendezvous solutions. That doesn't mean there aren't other considerations which could weigh heavily in favor of copying events. But I am not aware of them, so I will be discussing relay rendezvous.
We can then divide the relay rendezvous situation into several cases: one-to-one, one-to-many, and one-to-all, where the many are a known set, and the all are an unbounded unknown set. I cannot conceive of many-to-anything for nostr so we will speak no further of it.
For a rendezvous to take place, not only do the parties need to agree on a relay (or many relays), but there needs to be some way that readers can become aware that the writer has written something.
So the one-to-one situation works out well by the writer putting the message onto a relay that they know the reader checks for messages on. This we call the INBOX model. It is akin to sending them an email into their inbox where the reader checks for messages addressed to them.
The one-to-(known)-many model is very similar, except the writer has to write to many people's inboxes. Still we are dealing with the INBOX model.
In the final case, one-to-(unknown)-all, there is no way the writer can place the message into every person's inbox because they are unknown. So in this case, the writer can write to their own OUTBOX, and anybody interested in these kinds of messages can subscribe to the writer's OUTBOX.
Notice that I have covered every case already, and that I have not even specified what particular types of scenarios call for one-to-one or one-to-many or one-to-all, but that every scenario must fit into one of those models.
So that is basically it. People need INBOX and OUTBOX relays and nothing else for relay rendezvous to cover all the possible scenarios.
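For concreteness, here is a minimal Rust sketch of the writer's side of that decision; the `Person` type and its relay lists are hypothetical, not any client's actual API:

```rust
type RelayUrl = String;

struct Person {
    inbox_relays: Vec<RelayUrl>,  // where this person checks for messages
    outbox_relays: Vec<RelayUrl>, // where this person publishes
}

enum Audience<'a> {
    OneToOne(&'a Person),
    OneToMany(&'a [Person]), // a known set of readers
    OneToAll,                // an unbounded, unknown set
}

// Pick the relays to write an event to so that a rendezvous can happen.
fn relays_to_write_to(me: &Person, audience: Audience) -> Vec<RelayUrl> {
    match audience {
        // Write into the single reader's INBOX.
        Audience::OneToOne(reader) => reader.inbox_relays.clone(),
        // Write into every known reader's INBOX.
        Audience::OneToMany(readers) => readers
            .iter()
            .flat_map(|r| r.inbox_relays.iter().cloned())
            .collect(),
        // Readers are unknown: write to my own OUTBOX and let
        // interested readers subscribe to it.
        Audience::OneToAll => me.outbox_relays.clone(),
    }
}
```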
That is not to say that other kinds of concerns might not modulate this. There is a suggestion for a DM relay (which is really an INBOX but with a special associated understanding), which is perfectly fine by me. But I don't think there are any other relay models. There is also the case of a live event where two parties are interacting over the same relay, but in terms of rendezvous this isn't a new case, it is just that the shared relay is serving as both parties' INBOX (in the case of a closed chat) and/or both parties' OUTBOX (in the case of an open one) at the same time.
So anyhow that's my thinking on the topic. It has become a fairly concise and complete set of concepts, and this makes me happy. Most things aren't this easy.
-
@ b12b632c:d9e1ff79
2023-08-08 00:02:31"Welcome to the Bitcoin Lightning Bolt Card, the world's first Bitcoin debit card. This revolutionary card allows you to easily and securely spend your Bitcoin at lightning compatible merchants around the world." Bolt Card
I discovered the Bolt Card a few days ago and I have to say it's pretty amazing. The thought that we can pay daily with Bitcoin sats the same way we pay with our Visa/Mastercard debit cards is really something huge ⚡ (provided the sellers accept Bitcoin, obviously!)
To use the Bolt Card you have three choices:
- Use their (Bolt Card) own Bolt Card HUB and their own BTC Lightning node
- Use your own self hosted Bolt Card Hub and an external BTC Lightning node
- Use your own self hosted Bolt Card Hub and your own BTC Lightning node (where you should have active Lightning channels)
⚡ The first choice is the quickest and simplest way to have an NFC Bolt Card. It will take you a few seconds (for real). You'll wait much longer for the NFC card you ordered from a website to arrive than you'll spend configuring it with the Bolt Card services.
⚡⚡ The second choice is pretty nice too, because you won't have to run a VPS or deal with all the BTC Lightning node stuff; you'll use an external node instead. In the Bolt Card tutorial about Bolt Card Hub, they use a Lightning node from voltage.cloud, and I have to say their services are impressive. In a few seconds you'll have your own Lightning node and you'll be able to configure it in the Bolt Card Hub settings. PS: voltage.cloud offers 7 trial days / $20, so don't hesitate to try it!
⚡⚡⚡ The third one is obviously a bit (way) more complex, because you'll have to provide a VPS + a Bitcoin node and a Bitcoin Lightning node to be able to send and receive Lightning payments with your Bolt NFC Card. So you should already have configured everything by yourself to follow this tutorial. I will show what I did for my own installation; all my nodes (BTC & Lightning) are provided by my home Umbrel node (as I don't want to publish my nodes directly on the clearnet). We'll see how to connect to the Umbrel Lightning node later (spoiler: Tailscale).
To summarize, in this tutorial I have:
- 1 Umbrel node (rpi4b) with BTC and Lightning with Tailscale installed.
- 1 VPS (Virtual Private Server) to publicly publish the Bolt Card LNDHub and Boltcard containers, configured the same way as my other containers (with Nginx Proxy Manager)
Ready? Let's do it! ⚡
Configuring Bolt Card & Bolt Card LNDHub
It's always good to begin by reading the boltcard-lndhub-docker GitHub repo. For a better understanding of all the components, you can check this schema:
We won't use it exactly as-is, because we'll skip the Caddy part; we already use Nginx Proxy Manager.
To begin, we'll clone all the required repositories:
```
git clone https://github.com/boltcard/boltcard-lndhub-docker bolthub
cd bolthub
git clone https://github.com/boltcard/boltcard-lndhub BoltCardHub
git clone https://github.com/boltcard/boltcard.git
git clone https://github.com/boltcard/boltcard-groundcontrol.git GroundControl
```
PS: we won't cover how to configure GroundControl yet. This article may be updated later.
We now need to modify the settings file with our own settings:

```
mv .env.example .env
nano .env
```
You need to replace "your-lnd-node-rpc-address" with your Umbrel Tailscale IP address (you can find your Umbrel node IP in your Tailscale admin console):
```
LND_IP=your-lnd-node-rpc-address # <- UMBREL TAILSCALE IP ADDRESS
LND_GRPC_PORT=10009
LND_CERT_FILE=tls.cert
LND_ADMIN_MACAROON_FILE=admin.macaroon
REDIS_PASSWORD=random-string
LND_PASSWORD=your-lnd-node-unlock-password

# docker-compose.yml only
GROUNDCONTROL=ground-control-url

# docker-compose-groundcontrol.yml only
FCM_SERVER_KEY=hex-encoded
APNS_P8=hex-encoded
APNS_P8_KID=issuer-key-which-is-key-ID-of-your-p8-file
APPLE_TEAM_ID=team-id-of-your-developer-account
BITCOIN_RPC=bitcoin-rpc-url
APNS_TOPIC=app-package-name
```
We now need to generate an AES key and insert it into the "settings.sql" file:

```console
$ hexdump -vn 16 -e '4/4 "%08x" 1 "\n"' /dev/random
19efdc45acec06ad8ebf4d6fe50412d0
$ nano settings.sql
```
- Insert the AES key between the '' next to 'AES_DECRYPT_KEY'
- Insert your domain or subdomain (subdomain in my case) between the '' next to 'HOST_DOMAIN'
- Insert your Umbrel Tailscale IP between the '' next to 'LN_HOST'
Be aware that this subdomain won't point to the LNDHub container (boltcard_hub:9002) but to the Boltcard container (boltcard_main:9000).
```
\c card_db;

DELETE FROM settings;

-- at a minimum, the settings marked 'set this' must be set for your system
-- an explanation for each of the bolt card server settings can be found here
-- https://github.com/boltcard/boltcard/blob/main/docs/SETTINGS.md

INSERT INTO settings (name, value) VALUES ('LOG_LEVEL', 'DEBUG');
INSERT INTO settings (name, value) VALUES ('AES_DECRYPT_KEY', '19efdc45acec06ad8ebf4d6fe50412d0'); -- set this
INSERT INTO settings (name, value) VALUES ('HOST_DOMAIN', 'sub.domain.tld'); -- set this
INSERT INTO settings (name, value) VALUES ('MIN_WITHDRAW_SATS', '1');
INSERT INTO settings (name, value) VALUES ('MAX_WITHDRAW_SATS', '1000000');
INSERT INTO settings (name, value) VALUES ('LN_HOST', ''); -- set this
INSERT INTO settings (name, value) VALUES ('LN_PORT', '10009');
INSERT INTO settings (name, value) VALUES ('LN_TLS_FILE', '/boltcard/tls.cert');
INSERT INTO settings (name, value) VALUES ('LN_MACAROON_FILE', '/boltcard/admin.macaroon');
INSERT INTO settings (name, value) VALUES ('FEE_LIMIT_SAT', '10');
INSERT INTO settings (name, value) VALUES ('FEE_LIMIT_PERCENT', '0.5');
INSERT INTO settings (name, value) VALUES ('LN_TESTNODE', '');
INSERT INTO settings (name, value) VALUES ('FUNCTION_LNURLW', 'ENABLE');
INSERT INTO settings (name, value) VALUES ('FUNCTION_LNURLP', 'ENABLE');
INSERT INTO settings (name, value) VALUES ('FUNCTION_EMAIL', 'DISABLE');
INSERT INTO settings (name, value) VALUES ('AWS_SES_ID', '');
INSERT INTO settings (name, value) VALUES ('AWS_SES_SECRET', '');
INSERT INTO settings (name, value) VALUES ('AWS_SES_EMAIL_FROM', '');
INSERT INTO settings (name, value) VALUES ('EMAIL_MAX_TXS', '');
INSERT INTO settings (name, value) VALUES ('FUNCTION_LNDHUB', 'ENABLE');
INSERT INTO settings (name, value) VALUES ('LNDHUB_URL', 'http://boltcard_hub:9002');
INSERT INTO settings (name, value) VALUES ('FUNCTION_INTERNAL_API', 'ENABLE');
```
You now need to get two files used by Bolt Card LNDHub, the admin.macaroon and tls.cert files from your Umbrel BTC Lightning node. You can find these files on your Umbrel node at these locations:

```
/home/umbrel/umbrel/app-data/lightning/data/lnd/tls.cert
/home/umbrel/umbrel/app-data/lightning/data/lnd/data/chain/bitcoin/mainnet/admin.macaroon
```

You can use WinSCP, scp or ssh to copy these files to your local workstation, then copy them again to the root folder "bolthub" on your VPS.

You should have all these files in the bolthub directory:

```console
johndoe@yourvps:~/bolthub$ ls -al
total 68
drwxrwxr-x 6 johndoe johndoe 4096 Jul 30 00:06 .
drwxrwxr-x 3 johndoe johndoe 4096 Jul 22 00:52 ..
-rw-rw-r-- 1 johndoe johndoe 482 Jul 29 23:48 .env
drwxrwxr-x 8 johndoe johndoe 4096 Jul 22 00:52 .git
-rw-rw-r-- 1 johndoe johndoe 66 Jul 22 00:52 .gitignore
drwxrwxr-x 11 johndoe johndoe 4096 Jul 22 00:52 BoltCardHub
-rw-rw-r-- 1 johndoe johndoe 113 Jul 22 00:52 Caddyfile
-rw-rw-r-- 1 johndoe johndoe 173 Jul 22 00:52 CaddyfileGroundControl
drwxrwxr-x 6 johndoe johndoe 4096 Jul 22 00:52 GroundControl
-rw-rw-r-- 1 johndoe johndoe 431 Jul 22 00:52 GroundControlDockerfile
-rw-rw-r-- 1 johndoe johndoe 1913 Jul 22 00:52 README.md
-rw-rw-r-- 1 johndoe johndoe 293 May 6 22:24 admin.macaroon
drwxrwxr-x 16 johndoe johndoe 4096 Jul 22 00:52 boltcard
-rw-rw-r-- 1 johndoe johndoe 3866 Jul 22 00:52 docker-compose-groundcontrol.yml
-rw-rw-r-- 1 johndoe johndoe 2985 Jul 22 00:57 docker-compose.yml
-rw-rw-r-- 1 johndoe johndoe 1909 Jul 29 23:56 settings.sql
-rw-rw-r-- 1 johndoe johndoe 802 May 6 22:21 tls.cert
```
We need to do a few last tasks to ensure that Bolt Card LNDHub will work perfectly.

It may already be the case on your VPS, but your user should be a member of the docker group. If not, you can add it with:

```
sudo groupadd docker
sudo usermod -aG docker ${USER}
```

If you ran these commands, you need to log out and log back in.

We also need to create all the docker named volumes:

```
docker volume create boltcard_hub_lnd
docker volume create boltcard_redis
```
Configuring Nginx Proxy Manager to proxify Bolt Card LNDHub & Boltcard
You need to have followed my previous blog post for your setup to match the instructions below.
As the Bolt Card LNDHub docker stack lives in a different directory than our other services and has its own docker-compose.yml file, we have to declare its docker network in the NPM (Nginx Proxy Manager) docker-compose.yml so NPM can communicate with the Bolt Card LNDHub & Boltcard containers.
To do this we need to add these lines to our external NPM docker-compose file (not the one located in the bolthub directory; the one used for all your other containers):

nano docker-compose.yml

```
networks:
  bolthub_boltnet:
    name: bolthub_boltnet
    external: true
```
Be careful, "bolthub" from "bolthub_boltnet" is based on the directory where Bolt Card LNDHub Docker docker-compose.yml file is located.
We also need to attach this network to the NPM container:

```
  nginxproxymanager:
    container_name: nginxproxymanager
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80' # Public HTTP Port
      - '443:443' # Public HTTPS Port
      - '81:81' # Admin Web Port
    volumes:
      - ./nginxproxymanager/data:/data
      - ./nginxproxymanager/letsencrypt:/etc/letsencrypt
    networks:
      - fractalized
      - bolthub_boltnet
```
You can now recreate the NPM container to attach the network:
docker compose up -d
Now you'll have to create 2 new Proxy Hosts in the NPM admin UI. The first one points your domain/subdomain to the Bolt Card LNDHub GUI (boltcard_hub:9002):
And the second one for the Boltcard container (boltcard_main:9000).
In both Proxy Hosts I set all the SSL options and use my wildcard certificate, but you can generate a certificate for each Proxy Host, with Force SSL, HSTS, HTTP/2 Support and HSTS Subdomains enabled.
Starting Bolt Card LNDHub & BoltCard containers
Well done! Everything is set up; we can now start the Bolt Card LNDHub & Boltcard containers!

Go back to the root folder of the Bolt Card LNDHub project, "bolthub", and start the docker compose stack. We'll begin without "-d" to see whether any issues come up during container creation:
docker compose up
I won't share my container logs, to avoid disclosing any sensitive information about my Bolt Card LNDHub node, but you can see what they look like in the Bolt Card LNDHub YouTube video (link with the exact timestamp where they're shown):
If you have issues mounting the admin.macaroon or tls.cert files because you started the docker compose stack the first time without them in the bolthub folder, run:
docker compose down && docker compose up
After waiting a few seconds/minutes, browse to your Bolt Card LNDHub Web UI domain/subdomain (created earlier in NPM) and you should see the Bolt Card LNDHub Web UI:
If everything is OK, you can now run the containers in detached mode:
docker compose up -d
Voilààààà ⚡
If you need to see all the Bolt Card LNDHub logs, you can use:
docker compose logs -f --tail 30
You can now follow the video from Bolt Card to configure your Bolt Card NFC card and use your own Bolt Card LNDHub:
~~PS: there is currently a bug when you click on "Connect Bolt Card" in the Bolt Card Wallet app; you might get the error message "API error: updateboltcard: enable_pin is not a valid boolean (code 6)". It's a known issue and the Bolt Card team is currently working on it. You can find more information on their Telegram.~~
Thanks to the Bolt Card team, the issue has been corrected: changelog
See you soon in another Fractalized story!
-
@ 3bf0c63f:aefa459d
2024-03-23 08:57:08Nostr is not decentralized nor censorship-resistant
Peter Todd has been saying this for a long time and all the time I've been thinking he is misunderstanding everything, but I guess a more charitable interpretation is that he is right.
Nostr today is indeed centralized.
Yesterday I published two harmless notes with the exact same content at the same time. In two minutes the notes had a noticeable difference in responses:
The top one was published to `wss://nostr.wine`, `wss://nos.lol` and `wss://pyramid.fiatjaf.com`. The second was published to the relay where I generally publish all my notes, `wss://pyramid.fiatjaf.com`, and that is announced on my NIP-05 file and on my NIP-65 relay list.

A few minutes later I published that screenshot again in two identical notes to the same sets of relays, asking if people understood the implications. The difference in quantity of responses can still be seen today:
These results are skewed now by the fact that the two notes got rebroadcasted to multiple relays after some time, but the fundamental point remains.
What happened was that a huge lot more people saw the first note compared to the second, and if Nostr was really censorship-resistant that shouldn't have happened at all.
Some people implied in the comments, with an air of obviousness, that publishing the note to "more relays" should have predictably resulted in more replies, which, again, shouldn't be the case if Nostr is really censorship-resistant.
What happens is that most people who engaged with the note are following me, in the sense that they have instructed their clients to fetch my notes on their behalf and present them in the UI, and clients are failing to do that despite me making it clear in multiple ways that my notes are to be found on `wss://pyramid.fiatjaf.com`.

If we were talking not about me, but about some public figure that was being censored by the State and got banned (or shadowbanned) by the 3 biggest public relays, the sad reality would be that the person would immediately get his reach reduced to ~10% of what they had before. This is not at all unlike what happened to dozens of personalities that were banned from the corporate social media platforms and then moved to other platforms -- how many of their original followers switched to these other platforms? Probably some small percentage close to 10%. In that sense Nostr today is similar to what we had before.
Peter Todd is right that if the way Nostr works is that you just subscribe to a small set of relays and expect to get everything from them then it tends to get very centralized very fast, and this is the reality today.
Peter Todd is wrong that Nostr is inherently centralized or that it needs a protocol change to become what it has always purported to be. He is in fact wrong today, because what is written above is not valid for all clients of today, and if we drive in the right direction we can successfully make Peter Todd be more and more wrong as time passes, instead of the contrary.
See also:
-
@ 3bf0c63f:aefa459d
2024-03-19 14:32:01Censorship-resistant relay discovery in Nostr
In Nostr is not decentralized nor censorship-resistant I said Nostr is centralized. Peter Todd thinks it is centralized by design, but I disagree.
Nostr wasn't designed to be centralized. The idea was always that clients would follow people in the relays they decided to publish to, even if it was a single-user relay hosted in an island in the middle of the Pacific ocean.
But the Nostr explanations never had any guidance about how to do this, and the protocol itself never had any enforcement mechanisms for any of this (because it would be impossible).
My original idea was that clients would use some undefined combination of relay hints in reply tags and the (now defunct) `kind:2` relay-recommendation events plus some form of manual action ("it looks like Bob is publishing on relay X, do you want to follow him there?") to accomplish this, with the expectation that we would have a better idea of how to properly implement all this with more experience. Branle, my first working client, didn't have any of that implemented; instead it used a stupid static list of relays with a read/write toggle -- although it did publish relay hints and kept track of those internally and supported `kind:2` events, these things were not really useful.

Gossip was the first client to implement a truly censorship-resistant relay discovery mechanism that used NIP-05 hints (originally proposed by Mike Dilger), relay hints and `kind:3` relay lists, and then with the simple insight of NIP-65 that got much better. After seeing it in more concrete terms, it became simpler to reason about it and the approach got popularized as the "gossip model", then implemented in clients like Coracle and Snort.

Today when people mention the "gossip model" (or "outbox model") they simply think about NIP-65, though. Which I think is ok, but too restrictive. I still think there is a place for the NIP-05 hints, `nprofile` and `nevent` relay hints, and specially relay hints in event tags. All these mechanisms are used together in ZBD Social, for example, but I believe also in the clients listed above.

I don't think we should stop here, though. I think there are other ways, perhaps drastically different ways, to approach content propagation and relay discovery. I think manual action by users is underrated and could go a long way if presented in a nice UX (not conceived by people that think users are dumb animals), and who knows what. Reliance on third-parties, hardcoded values, social graph, and specially a mix of multiple approaches, is what Nostr needs to be censorship-resistant and what I hope to see in the future.
-
@ 3bf0c63f:aefa459d
2024-01-29 02:19:25Nostr: a quick introduction, attempt #1
Nostr doesn't have a material existence; it is not a website or an app. Nostr is just a description of what kind of messages each computer can send to the others and vice-versa. It's a very simple thing, but the fact that such a description exists allows different apps to connect to different servers automatically, without people having to talk behind the scenes or sign contracts or anything like that.
When you use a Nostr client that is what happens, your client will connect to a bunch of servers, called relays, and all these relays will speak the same "language" so your client will be able to publish notes to them all and also download notes from other people.
That's basically what Nostr is: this communication layer between the client you run on your phone or desktop computer and the relay that someone else is running on some server somewhere. There is no central authority dictating who can connect to whom or even anyone who knows for sure where each note is stored.
If you think about it, Nostr is very much like the internet itself: there are millions of websites out there, and basically anyone can run a new one, and there are websites that allow you to store and publish your stuff on them.
The added benefit of Nostr is that this unified "language" that all Nostr clients speak allows them to switch very easily and cleanly between relays. So if one relay decides to ban someone, that person can switch to publishing to other relays and their audience will quickly follow them there. Likewise, it becomes much easier for relays to impose any restrictions they want on their users: no relay has to uphold a moral ground of "absolute free speech": each relay can decide to delete notes or ban users for no reason, or even only store notes from a preselected set of people, and no one will be entitled to complain about that.

There are some bad things about this design: on Nostr there are no guarantees that relays will have the notes you want to read or that they will store the notes you're sending to them. We can't just assume all relays will have everything — much to the contrary, as Nostr grows more relays will exist and people will tend to publish to a small set of all the relays, so depending on the decisions each client takes when publishing and when fetching notes, users may see a different set of replies to a note, for example, and be confused.

Another problem with the idea of publishing to multiple servers is that they may be run by all sorts of malicious people that may edit your notes. Since no one wants to see garbage published under their name, Nostr fixes that by requiring notes to have a cryptographic signature. This signature is attached to the note and verified by everybody at all times, which ensures the notes weren't tampered with (if any part of the note is changed, even by a single character, that would cause the signature to become invalid and then the note would be dropped). The fix is perfect, except for the fact that it introduces the requirement that each user must now hold this 63-character code that starts with "nsec1", which they must not reveal to anyone. Although annoying, this requirement brings another benefit: users can automatically have the same identity in many different contexts and even use their Nostr identity to log in to non-Nostr websites easily without having to rely on any third party.

To conclude: Nostr is like the internet (or the internet of some decades ago): a little chaotic, but very open. It is better than the internet because it is structured and actions can be automated, but, like the internet itself, nothing is guaranteed to work at all times and users may have to do some manual work from time to time to fix things. Plus, there is the cryptographic key stuff, which is painful, but cool.
-
@ ee11a5df:b76c4e49
2023-07-29 03:27:23Gossip: The HTTP Fetcher
Gossip is a desktop nostr client. This post is about the code that fetches HTTP resources.
Gossip fetches HTTP resources. This includes images, videos, nip05 json files, etc. The part of gossip that does this is called the fetcher.
We have had a fetcher for some time, but it was poorly designed and had problems. For example, it was never expiring items in the cache.
We've made a lot of improvements to the fetcher recently. It's pretty good now, but there is still room for improvement.
Caching
Our fetcher caches data. Each URL that is fetched is hashed, and the content is stored under a file in the cache named by that hash.
If a request is in the cache, we don't do an HTTP request, we serve it directly from the cache.
But cached data gets stale. Sometimes resources at a URL change. We generally check resources again after three days.
We save the server's ETag value for content, and when we check the content again we supply an If-None-Match header with the ETag so the server could respond with 304 Not Modified in which case we don't need to download the resource again, we just bump the filetime to now.
In the event that our cache data is stale, but the server gives us an error, we serve up the stale data (stale is better than nothing).
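As an illustration of that conditional re-check, here is a sketch using the `reqwest` crate (illustrative only, not Gossip's actual code):

```rust
use reqwest::{blocking::Client, header, StatusCode};

// Sketch: re-check a possibly-stale cached resource using its saved ETag.
fn recheck(client: &Client, url: &str, cached_etag: Option<&str>) -> reqwest::Result<()> {
    let mut request = client.get(url);
    if let Some(etag) = cached_etag {
        // Ask the server to skip the body if our copy is still current.
        request = request.header(header::IF_NONE_MATCH, etag);
    }
    let response = request.send()?;
    let status = response.status();
    if status == StatusCode::NOT_MODIFIED {
        // Our cached copy is still good: just bump the file time to now.
    } else if status.is_success() {
        // Store the new body and remember the new ETag for next time.
        let _etag = response.headers().get(header::ETAG).cloned();
        let _body = response.bytes()?;
    } else {
        // Server error: fall back to serving the stale cached data.
    }
    Ok(())
}
```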
Queueing
We used to fire off HTTP GET requests as soon as we knew that we needed a resource. This was not looked on too kindly by servers and CDNs who were giving us either 403 Forbidden or 429 Too Many Requests.
So we moved to a queue system. The host is extracted from each URL, and each host is only given up to 3 requests at a time. If we want 29 images from the same host, we only ask for three, and the remaining 26 remain in the queue for next time. When one of those requests completes, we decrement the host load so we know that we can send it another request later.
We process the queue in an infinite loop where we wait 1200 milliseconds between passes. Passes take time themselves and sometimes must wait for a timeout. Each pass fetches potentially multiple HTTP resources in parallel, asynchronously. If we have 300 resources at 100 different hosts, three per host, we could get them all in a single pass. More likely a bunch of resources are at the same host, and we make multiple passes at it.
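In outline, the per-host limiting looks something like this (a simplified sketch, not the actual fetcher code):

```rust
use std::collections::{HashMap, VecDeque};

// At most 3 in-flight requests per host.
const MAX_PER_HOST: usize = 3;

struct Fetcher {
    queue: VecDeque<String>,           // URLs waiting to be fetched
    in_flight: HashMap<String, usize>, // host -> current request count
}

impl Fetcher {
    // One pass: take every queued URL whose host still has capacity.
    fn take_batch(&mut self) -> Vec<String> {
        let mut batch = Vec::new();
        let mut remaining = VecDeque::new();
        while let Some(url) = self.queue.pop_front() {
            let load = self.in_flight.entry(host_of(&url)).or_insert(0);
            if *load < MAX_PER_HOST {
                *load += 1;
                batch.push(url);
            } else {
                remaining.push_back(url); // stays queued for the next pass
            }
        }
        self.queue = remaining;
        batch
    }

    // Called when a request completes, freeing a slot for its host.
    fn finished(&mut self, url: &str) {
        if let Some(load) = self.in_flight.get_mut(&host_of(url)) {
            *load = load.saturating_sub(1);
        }
    }
}

// Crude host extraction, just for the sketch.
fn host_of(url: &str) -> String {
    url.split('/').nth(2).unwrap_or("").to_string()
}
```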
Timeouts
When we fetch URLs in parallel asynchronously, we wait until all of the fetches complete before waiting another 1200 ms and doing another loop. Sometimes one of the fetches times out. In order to keep things moving, we use short timeouts of 10 seconds for a connect, and 15 seconds for a response.
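With `reqwest`, for example, those limits would be configured on the client like this (illustrative only):

```rust
use std::time::Duration;

// Sketch: a client configured with the short timeouts described above.
fn build_client() -> reqwest::Result<reqwest::blocking::Client> {
    reqwest::blocking::Client::builder()
        .connect_timeout(Duration::from_secs(10)) // connect timeout
        .timeout(Duration::from_secs(15))         // response timeout
        .build()
}
```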
Handling Errors
Some kinds of errors are more serious than others. When we encounter these, we sin bin the server for a period of time where we don't try fetching from it until a specified period elapses.
-
@ 3bf0c63f:aefa459d
2024-01-15 11:15:06Small problems the State creates for society that are not always remembered
- **transport vouchers**: shifting the cost of the employee's commute to a third party encourages them to live far from where they work, since living nearby is usually more expensive and the savings on transport are nonexistent.
- **sick notes**: the right to miss work with a doctor's note creates the demand for such notes in every situation, replacing free agreement between employer and employee and overloading doctors and health clinics with unnecessary visits from salaried workers with colds.
- **prisons**: with badly managed money, bureaucracy and terrible resource allocation -- problems that private companies in competition (or even without any competition) would know how to solve much better -- the State runs out of prisons, with the few existing ones crammed far beyond their maximum capacity; and so, according to the bizarre chain of responsibility that blames the judge who sentenced a criminal for his death in jail, judges stop sentencing criminals to prison and release them onto the street.
- **courts**: filing lawsuits is free, and this makes the activity of lawyers proliferate, dedicated to creating judicial problems where none are necessary and to clogging the courts, preventing them from doing what they most should do.
- **courts**: since the courts only obey the law and ignore personal agreements, written or not, people don't make agreements; they always resort to state justice and clog it with matters that would be much better resolved between neighbors.
- **civil laws**: the laws created by legislators ignore society's customs and are an incentive for people not to respect or create social norms -- which would be faster, cheaper and more satisfying ways of solving problems.
- **traffic laws**: the more traffic laws there are, the more enforcement duties are delegated to police officers, who then stop fighting crime (after all, they don't really want to risk their lives fighting crime; enforcement is an excellent excuse to dodge that responsibility).
- **education financing**: a kind of subsidy to private colleges that leads to the creation of course after course, each containing less useful knowledge or technique and each more useless than the last.
- **heritage-listing laws**: they are an incentive for the owner of any "historical" area or building to destroy each and every vestige of history in it before the authorities find out, which might not happen if he could, for example, use, display and benefit from the history of that place without running the risk of actually losing his property.
- **urban zoning**: makes cities more sprawled out, creating a gigantic need for cars, buses and other means of transport for people to move between residential zones and work zones.
- **urban zoning**: makes people lose hours in traffic every day, which is, besides a waste, an assault on their health, which would be much better served by a daily walk between home and work.
- **urban zoning**: makes streets and houses less safe by creating enormous zones, both residential and industrial, with no movement of people at all.
- **mandatory schooling + national school curriculum**: dumbs down all children.
- **laws against child labor**: take away from children the opportunity to learn useful trades and earn some money to help their family.
- **public tenders**: since market criteria don't exist for deciding which service provider is best, committees of people are created to decide things. This encourages the providers competing in the tender to try to buy the members of these committees. Beyond the corruption, this creates real problems: __(i)__ the chosen services end up being the worst possible, since the winning provider is clearly more dedicated to buying committees than to doing a good job (this problem affects so many areas, from road construction to the quality of school lunches, that it's impossible to list them here); __(ii)__ in the long run, the corrupting process eliminates the companies that actually delivered and leaves only the corrupt ones to compete, and quality tends to deteriorate progressively.
- **cartels**: the State generally creates, and then becomes hostage to, various interest groups. The case of taxi drivers against Uber is the fashionable one today (and one that shows how States behave the same way all over the world).
- **fines**: when an individual or company commits financial fraud, or causes some involuntary material damage, the victims are the people who suffered the damage or lost money, but the State always has laws prescribing fines for those responsible. State justice is always very rigid and quick in applying these fines, but lax and vague regarding compensation for the victims. What generally happens is that the State applies an enormous fine to the party responsible for the harm, stripping them of the resources they had available to compensate the victims, and withdraws from the case, leaving the victims helpless.
- **expropriation**: the State can take any property from any person through compensation that is necessarily lower than the property's value to its present owner (otherwise he would have sold it voluntarily).
- **unemployment insurance**: if there is, for example, a minimum period of 1 year of employment before a person is entitled to unemployment benefits, this encourages him to plan on staying only 1 year in each job (a year to be followed by a period of paid unemployment), killing all possibilities of learning, gaining experience at that specific company, or climbing the hierarchy.
- **social security**: social security has every calculation defect in the world, and it hardly matters that it is a horrible way to save money, because it comes with bizarre longevity guarantees provided by the State, besides being compulsory. This serves to create in the general imagination the idea of __retirement__, a magical era in which every day will be a weekend. The idea of retirement influences a person not to worry about having a job that makes sense, but rather about having any job at all that allows him to retire.
- **impossible regulation**: thousands of things are prohibited; there are regulations on the most minimal aspects of every enterprise, construction or space. If all these regulations were enforced, there would be no conditions for production and everyone would die. Therefore, they are not enforced. However, the State, or an individual agent imbued with state power, can, if he wishes, demand them all from a citizen who is his enemy. Anyone can live their entire life without complying with even 10% of state regulations, but will also live all that time in fear of becoming a target of their enforcement, in a state of psychological terror.
- **perversion of criteria**: for many things about which society would normally arrive at a "reasonable" value or behavior spontaneously, the State dictates rules. These rules are often not mandatory; they are more like "suggestions" or limits, such as the minimum wage or the 44-hour work week. Society, however, starts to use these values as if they were the norm. Job offers that depart from the 44-hour-week rule, for example, are rare.
- **inflation**: raising prices is difficult and embarrassing for companies; asking for a raise is difficult and embarrassing for the employee. Inflation forces people to do this, but the increase is not automatic, as some economists may think (while some others are quite pleased that this process is slow and difficult).
- **inflation**: inflation destroys people's ability to judge prices between competitors using their own memory.
- **inflation**: inflation destroys companies' profit/loss calculations and enormously harms the business decisions that would be based on them.
- **inflation**: inflation redistributes wealth from the poorest and those furthest from the financial system to the richest, the banks and the megacorporations.
- **inflation**: inflation encourages indebtedness and consumerism.
- **garbage**: by providing garbage collection and storage "free for everyone", the State incentivizes the creation of garbage. If they had to pay to have their garbage collected, people (and consequently companies) would try harder to produce things using less plastic, less packaging, fewer bags.
- **laws against financial crimes**: by creating legislation to hinder criminals' access to the financial system, the difficulty and cost of accessing that same system for honest people grows absurdly, leaving an enormous percentage of people unable to use it, to everyone's detriment -- and at the end of the day the big criminals still manage to circumvent everything.
-
@ 3bf0c63f:aefa459d
2024-01-14 14:52:16bitcoind decentralization

It is better to have multiple curator teams, with different vetting processes and release schedules for `bitcoind`, than a single one.

"More eyes on code", "Contribute to Core", "Everybody should audit the code".
All these points repeated again and again fell to Earth on the day it was discovered that Bitcoin Core developers merged a variable name change from "blacklist" to "blocklist" without even discussing or acknowledging the fact that that innocent pull request opened by a sybil account was a social attack.
After a big lot of people manifested their dissatisfaction with that event on Twitter and on GitHub, most Core developers simply ignored everybody's concerns or even personally attacked people who were complaining.
The event has shown that:
1) Bitcoin Core ultimately rests in the hands of a couple of maintainers who decide what goes into the GitHub repository[^pr-merged-very-quickly] and the binary releases that will be downloaded by thousands;
2) Bitcoin Core is susceptible to social attacks;
3) "More eyes on code" doesn't matter, as these extra eyes can be ignored and dismissed.
Solution: `bitcoind` decentralization

If usage was spread across 10 different `bitcoind` flavors, the network would be much more resistant to social attacks on a single team.

This has nothing to do with the question of whether it is better to have multiple different Bitcoin node implementations or not, because here we're basically talking about the same software.
Multiple teams, each with their own release process, their own logo, some subtle changes, or perhaps no changes at all, just a different name for their `bitcoind` flavor, and that's it.

Every day or week or month or year, each flavor merges all changes from Bitcoin Core on their own fork. If there's anything suspicious or too leftist (or perhaps too rightist, in case there's a leftist `bitcoind` flavor), maybe they will spot it and not merge.

This way we keep the best of both worlds: all software development, bugfixes and improvements go on in Bitcoin Core, and the other flavors just copy. If there's some non-consensus change whose efficacy is debatable, one of the flavors will merge it on their fork and test it, and later others -- including Core -- can copy it too. Plus, we become resistant to attacks: in case there is an attack on Bitcoin Core, only 10% of the network would be compromised; the other flavors would be safe.
Run Bitcoin Knots

The first example of a `bitcoind` software that follows Bitcoin Core closely, adds some small changes, but has an independent vetting and release process is Bitcoin Knots, maintained by the incorruptible Luke DashJr.

Next time you decide to run `bitcoind`, run Bitcoin Knots instead and contribute to `bitcoind` decentralization!
See also:
[^pr-merged-very-quickly]: See PR 20624, for example, a very complicated change that could be introducing bugs or be a deliberate attack, merged in 3 days without time for discussion.
-
@ ee11a5df:b76c4e49
2023-07-29 03:13:59Gossip: Switching to LMDB
Unlike a number of other nostr clients, Gossip has always cached events and related data in a local data store. Up until recently, SQLite3 has served this purpose.
SQLite3 offers a full ACID SQL relational database service.
Unfortunately however it has presented a number of downsides:
- It is not as parallel as you might think.
- It is not as fast as you might hope.
- If you want to preserve the benefit of using SQL and doing joins, then you must break your objects into columns, and map columns back into objects. The code that does this object-relational mapping (ORM) is not trivial and can be error prone. It is especially tricky when working with different types (Rust language types and SQLite3 types are not a 1:1 match).
- Because of the potential slowness, our UI has been forbidden from direct database access as that would make the UI unresponsive if a query took too long.
- Because of (4) we have been firing off separate threads to do the database actions, and storing the results into global variables that can be accessed by the interested code at a later time.
- Because of (4) we have been caching database data in memory, essentially coding for yet another storage layer that can (and often did) get out of sync with the database.
LMDB offers solutions:
- It is highly parallel.
- It is ridiculously fast when used appropriately.
- Because you cannot run arbitrary SQL, there is no need to represent the fields within your objects separately. You can serialize/deserialize entire objects into the database and the database doesn't care what is inside of the blob (yes, you can do that into an SQLite field, but if you did, you would lose the power of SQL).
- Because of the speed, the UI can look stuff up directly.
- We no longer need to fork separate threads for database actions.
- We no longer need in-memory caches of data. The LMDB data is already in-memory (it is memory mapped) so we just access it directly.
The one obvious downside is that we lose SQL. We lose the query planner. We cannot ask arbitrary question and get answers. Instead, we have to pre-conceive of all the kinds of questions we want to ask, and we have to write code that answers them efficiently. Often this involves building and maintaining indices.
Indices
Let's say I want to look at fiatjaf's posts. How do I efficiently pull out just his recent feed-related events in reverse chronological order? It is easy if we first construct the following index
```
key: EventKind + PublicKey + ReverseTime
value: Event Id
```
In the above, '+' is just a concatenate operator, and ReverseTime is just some distant time minus the time so that it sorts backwards.
Now I just ask LMDB to start from (EventKind=1 + PublicKey=fiatjaf + now) and scan until either one of the first two fields change, or more like the time field gets too old (e.g. one month ago). Then I do it again for the next event kind, etc.
For a generalized feed, I have to scan a region for each person I follow.
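In Rust, building such a key is just concatenation of big-endian fields, something like this (field widths assumed here for illustration):

```rust
// Keys sort lexicographically in LMDB, so encoding each field big-endian
// means a prefix scan over (kind, pubkey) walks events newest-first.
const DISTANT_FUTURE: u64 = u64::MAX;

fn index_key(kind: u16, pubkey: &[u8; 32], created_at: u64) -> Vec<u8> {
    let mut key = Vec::with_capacity(2 + 32 + 8);
    key.extend_from_slice(&kind.to_be_bytes()); // EventKind
    key.extend_from_slice(pubkey);              // PublicKey
    let reverse_time = DISTANT_FUTURE - created_at; // ReverseTime
    key.extend_from_slice(&reverse_time.to_be_bytes());
    key
}
```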
Smarter indexes can be imagined. Since we often want only feed-related event kinds, that can be implicit in an index that only indexes those kinds of events.
You get the idea.
A Special Event Map
At first I had stored events into a K-V database under the Id of the event. Then I had indexes on events that output a set of Ids (as in the example above).
But when it comes to storing and retrieving events, we can go even faster than LMDB.
We can build an append-only memory map that is just a sequence of all the events we have, serialized, and in no particular order. Readers do not need a lock and multiple readers can read simultaneously. Writers will need to acquire a lock to append to the map and there may only be one writer at a time. However, readers can continue reading even while a writer is writing.
We can then have a K-V database that maps Id -> Offset. To get the event you just do a direct lookup in the event memory map at that offset.
The real benefit comes when we have other indexes that yield events, they can yield offsets instead of ids. Then we don't need to do a second lookup from the Id to the Event, we can just look directly at the offset.
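A simplified in-memory sketch of the idea (the real thing uses a memory-mapped file and an LMDB table, and tracks lengths so events can be sliced exactly):

```rust
use std::collections::HashMap;

// Events serialized end-to-end in one append-only buffer, plus a table
// mapping an event Id to its byte offset in that buffer.
struct EventMap {
    buffer: Vec<u8>,                   // stand-in for the memory map
    offsets: HashMap<[u8; 32], usize>, // Id -> Offset (an LMDB table in Gossip)
}

impl EventMap {
    // Only the single writer appends; readers are never blocked by it.
    fn append(&mut self, id: [u8; 32], serialized: &[u8]) -> usize {
        let offset = self.buffer.len();
        self.buffer.extend_from_slice(serialized);
        self.offsets.insert(id, offset);
        offset
    }

    // One lookup from Id to bytes; indexes that store offsets directly
    // can skip even this lookup and slice the buffer immediately.
    fn get(&self, id: &[u8; 32]) -> Option<&[u8]> {
        self.offsets.get(id).map(|&offset| &self.buffer[offset..])
    }
}
```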
Avoiding deserialization
Deserialization has a price. Sometimes it requires memory allocation (if the object is not already linear, e.g. variable-length data like strings and vectors are allocated on the heap), which can be very expensive if you are trying to scan 150,000 or so events.
We serialize events (and other objects where we can) with a serialization library called speedy. It does its best to preserve the data much like it is represented in memory, but linearized. Because events start with fixed-length fields, we know the offset into the serialized event where these first fields occur and we can directly extract the value of those fields without deserializing the data before it.
This comes in useful whenever we need to scan a large number of events. Search is the one situation where I know that we must do this. We can search by matching against the content of every feed-related event without fully deserializing any of them.
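The trick looks roughly like this (the 32/8-byte layout is assumed purely for illustration; speedy's real layout may differ):

```rust
// Sketch: read one fixed-offset field of a serialized event without
// deserializing the whole thing (no heap allocation involved).
fn created_at_of(serialized: &[u8]) -> Option<u64> {
    // Assume the first 32 bytes are the id and the next 8 the timestamp.
    let bytes: [u8; 8] = serialized.get(32..40)?.try_into().ok()?;
    Some(u64::from_le_bytes(bytes))
}
```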
-
@ ee11a5df:b76c4e49
2023-07-29 02:52:13Gossip: Zaps
Gossip is a desktop nostr client. This post is about the code that lets users send lightning zaps to each other (NIP-57).
Gossip implemented Zaps initially on 20th of June, 2023.
Gossip maintains a state of where zapping is at, one of: None, CheckingLnurl, SeekingAmount, LoadingInvoice, and ReadyToPay.
When you click the zap lightning bolt icon, Gossip moves to the CheckingLnurl state while it looks up the LN URL of the user.
If this is successful, it moves to the SeekingAmount state and presents amount options to the user.
Once a user chooses an amount, it moves to the LoadingInvoice state where it interacts with the lightning node and receives and checks an invoice.
Once that is complete, it moves to the ReadyToPay state, where it presents the invoice as a QR code for the user to scan with their phone. There is also a copy button so they can pay it from their desktop computer too.
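The state machine described above reduces to something like this (a sketch of the states, not Gossip's exact types):

```rust
// Sketch of the zapping state machine.
enum ZapState {
    None,                           // no zap in progress
    CheckingLnurl,                  // looking up the recipient's LN URL
    SeekingAmount,                  // user is choosing an amount
    LoadingInvoice,                 // fetching and checking the invoice
    ReadyToPay { invoice: String }, // shown as a QR code plus a copy button
}
```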
Gossip also loads zap receipt events and associates them with the event that was zapped, tallying a zap total on that event. Gossip is unfortunately not validating these receipts very well currently, so fake zap receipts can cause an incorrect total to show. This remains an open issue.
Another open issue is the implementation of NIP-46 Nostr Connect and NIP-47 Wallet Connect.
-
@ 3bf0c63f:aefa459d
2024-01-14 14:52:16Drivechain
Understanding Drivechain requires a shift from the paradigm most bitcoiners are used to. It is not about "trustlessness" or "mathematical certainty", but game theory and incentives. (Well, Bitcoin in general is also that, but people prefer to ignore it and focus on some illusion of trustlessness provided by mathematics.)
Here we will describe the basic mechanism (simple) and incentives (complex) of "hashrate escrow" and how it enables a 2-way peg between the mainchain (Bitcoin) and various sidechains.
The full concept of "Drivechain" also involves blind merged mining (i.e., the sidechains mine themselves by publishing their block hashes to the mainchain without the miners having to run the sidechain software), but this is much easier to understand and can be accomplished either by the BIP-301 mechanism or by the Spacechains mechanism.
How does hashrate escrow work from the point of view of Bitcoin?
A new address type is created. Anything that goes into it is locked and can only be spent if all miners agree on the Withdrawal Transaction (`WT^`) that will spend it, over the course of 6 months. There is one of these special addresses for each sidechain.

To gather miners' agreement, `bitcoind` keeps track of the "score" of all transactions that could possibly spend from that address. On every block mined, for each sidechain, the miner can use a portion of their coinbase to either increase the score of one `WT^` by 1 while decreasing the score of all others by 1; or they can decrease the score of all `WT^`s by 1; or they can do nothing.

Once a transaction has gotten a score high enough, it is published and funds are effectively transferred from the sidechain to the withdrawing users.

If a timeout of 6 months passes and the score doesn't meet the threshold, that `WT^` is discarded.
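As a toy model of that voting procedure (data structures and numbers invented here purely to mirror the description above, not actual bitcoind code):

```rust
use std::collections::HashMap;

// Toy model of hashrate-escrow voting; thresholds are illustrative.
const THRESHOLD: u32 = 13_150;      // score required before publication
const TIMEOUT_BLOCKS: u32 = 26_300; // roughly 6 months of blocks

struct Escrow {
    scores: HashMap<String, u32>, // WT^ id -> current score
    ages: HashMap<String, u32>,   // WT^ id -> blocks since proposed
}

enum MinerVote<'a> {
    Upvote(&'a str), // raise one WT^'s score, lower all the others
    Downvote,        // lower every WT^'s score
    Abstain,         // do nothing
}

impl Escrow {
    // Called once per mined block, for each sidechain.
    fn apply_block(&mut self, vote: MinerVote) {
        match vote {
            MinerVote::Upvote(chosen) => {
                for (id, score) in self.scores.iter_mut() {
                    if id.as_str() == chosen {
                        *score += 1;
                    } else {
                        *score = score.saturating_sub(1);
                    }
                }
            }
            MinerVote::Downvote => {
                for score in self.scores.values_mut() {
                    *score = score.saturating_sub(1);
                }
            }
            MinerVote::Abstain => {}
        }
        // Age every pending WT^; publish or discard as limits are crossed.
        for (id, age) in self.ages.iter_mut() {
            *age += 1;
            if self.scores[id] >= THRESHOLD {
                // publish this WT^ to the mainchain
            } else if *age >= TIMEOUT_BLOCKS {
                // timeout reached: this WT^ is discarded
            }
        }
    }
}
```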
It means that people can transfer coins from the mainchain to a sidechain by depositing to the special address. Then they can withdraw from the sidechain by making a special withdraw transaction in the sidechain.
The special transaction somehow freezes funds in the sidechain while a transaction that aggregates all withdrawals into a single mainchain `WT^` is created; that `WT^` is then submitted to the mainchain miners so they can start voting on it, and finally, after some months, it is published.
, which is then submitted to the mainchain miners so they can start voting on it and finally after some months it is published.Now the crucial part: the validity of the
WT^
is not verified by the Bitcoin mainchain rules, i.e., if Bob has requested a withdraw from the sidechain to his mainchain address, but someone publishes a wrongWT^
that instead takes Bob's funds and sends them to Alice's main address there is no way the mainchain will know that. What determines the "validity" of theWT^
is the miner vote score and only that. It is the job of miners to vote correctly -- and for that they may want to run the sidechain node in SPV mode so they can attest for the existence of a reference to theWT^
transaction in the sidechain blockchain (which then ensures it is ok) or do these checks by some other means.What? 6 months to get my money back?
Yes. But no: in practice, anyone who wants their money back will be able to use an atomic swap, submarine swap or other similar service to transfer funds from the sidechain to the mainchain and vice-versa. The costs of the long delayed withdrawals would be incurred by a few liquidity providers that would gain some small profit from them.
Why bother with this at all?
Drivechains solve many different problems:
It enables experimentation and new use cases for Bitcoin
Issued assets, fully private transactions, stateful blockchain contracts, turing-completeness, decentralized games, some "DeFi" aspects, prediction markets, futarchy, decentralized and yet meaningful human-readable names, big blocks with a ton of normal transactions on them, a chain optimized only for Lighting-style networks to be built on top of it.
These are some ideas that may have merit to them, but were never actually tried because they couldn't be tried with real Bitcoin or interfacing with real bitcoins. They were either relegated to shitcoin territory or to custodial solutions like Liquid or RSK, which may have failed to gain network effect because of that.
It solves conflicts and infighting
Some people want fully private transactions in a UTXO model, others want "accounts" they can tie to their name and build reputation on top; some people want simple multisig solutions, others want complex code that reads a ton of variables; some people want to put all the transactions on a global chain in batches every 10 minutes, others want off-chain instant transactions backed by funds previously locked in channels; some want to spend, others want to just hold; some want to use blockchain technology to solve all the problems in the world, others just want to solve money.
With Drivechain-based sidechains, all these groups can be happy simultaneously and don't have to fight. Meanwhile, they will all be using the same money and contributing to each other's ecosystems, even unwillingly. It's also easy and free for them to change their group affiliation later, which reduces cognitive dissonance.
It solves "scaling"
Multiple chains like the ones described above would certainly do a lot to accommodate many more transactions than the current Bitcoin chain can. One could have special Lightning Network chains, but even just big-block chains, or big-block-Mimblewimble chains, or whatnot, could probably do a good job. Or even something less cool, like 200 independent chains just like Bitcoin is today, with no extra features (you can call it "sharding"): just that would already multiply the current total capacity by 200.
Use your imagination.
It solves the blockchain security budget issue
The calculation is simple: you imagine what security budget is reasonable for each block in a world without block subsidy and divide that by the amount of bytes you can fit in a single block: that is the price to be paid in satoshis per byte. In any reasonable estimate, the price necessary for every Bitcoin transaction rises to very large amounts, such that not only does any day-to-day transaction have insanely prohibitive costs, but even Lightning channel opens and closes become impracticable.
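As a rough back-of-the-envelope illustration (all numbers here are assumptions picked for readability, not the author's):

```python
# Illustrative numbers only: the "reasonable" budget is a free parameter.
budget_sats = 5 * 100_000_000   # assume 5 BTC of fees needed per block
block_vbytes = 1_000_000        # approximate block capacity

sats_per_vbyte = budget_sats // block_vbytes   # 500 sat/vbyte
print(140 * sats_per_vbyte)   # ~70,000 sats for a simple 140-vbyte payment
print(700 * sats_per_vbyte)   # ~350,000 sats for a larger transaction
```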
So without a solution like Drivechain you'll be left with only one alternative: pushing Bitcoin usage to trusted services like Liquid and RSK or custodial Lightning wallets. With Drivechain, though, there could be thousands of transactions happening in sidechains, all aggregated into a sidechain block that would then pay a very large fee to be published (via blind merged mining) to the mainchain. Bitcoin security guaranteed.
It keeps Bitcoin decentralized
Once we have sidechains to accommodate the normal transactions, the mainchain functionality can be reduced to being only a "hub" for the sidechains' comings and goings, and then the maximum block size for the mainchain can be reduced to, say, 100kb, which would make running a full node very, very easy.
Can miners steal?
Yes. If a group of coordinated miners is able to secure the majority of the hashpower and keep their coordination for 6 months, they can publish a `WT^` that takes the money from the sidechains and pays it to themselves.

Will miners steal?
No, because the incentives are such that they won't.
Although it may seem at first that stealing is an obvious strategy for miners, as it is free money, there are many costs involved:
- The cost of ceasing blind-merged mining returns -- as stealing will kill a sidechain, all the fees from it that miners would be expected to earn for the next years are gone;
- The cost of Bitcoin price going down: If a steal is successful that will mean Drivechains are not safe, therefore Bitcoin is less useful, and miner credibility will also be hurt, which are likely to cause the Bitcoin price to go down, which in turn may kill the miners' businesses and savings;
- The cost of coordination -- assuming miners are just normal businesses, they just want to do their work and get paid, but stealing from a Drivechain will require coordination with other miners to conduct an immoral act in a way that has many pitfalls and is likely to break down over the months;
- The cost of miners leaving your mining pool: when we talked about "miners" above we were actually talking about mining pool operators, so they must also consider the risk of miners migrating from their mining pool to others as they begin the process of stealing;
- The cost of community goodwill -- when participating in a steal operation, a miner will suffer a ton of backlash from the community. Even if the attempt fails in the end, the fact that it was attempted will contribute to growing concerns over exaggerated miner power over the Bitcoin ecosystem, which may end up causing the community to agree on a hard-fork to change the mining algorithm in the future, or to do something to increase the participation of more entities in the mining process (such as the development or cheapening of new ASICs), which would have a chance of decreasing the profits of current miners.
Another point to take into consideration is that one may be inclined to think a newly-created sidechain, or a sidechain with relatively low usage, may be more easily stolen from, since the blind merged mining returns from it (point 1 above) are going to be small -- but a sidechain with small usage will also have less money to be stolen, and since the other costs besides point 1 are less elastic, in the end it will not be worth stealing from these either.
All of the above considerations are valid only if miners are stealing from good sidechains. If there is a sidechain that is doing things wrong, scamming people, not being used at all, or is full of bugs, for example, it will be perceived as a bad sidechain, and then miners can and will safely steal from it and kill it, which will be perceived as a good thing by everybody.
What do we do if miners steal?
Paul Sztorc has suggested in the past that a user-activated soft-fork could prevent miners from stealing, i.e., most Bitcoin users and nodes issue a rule similar to this one to invalidate the inclusion of a faulty `WT^` and thus cause any miner that includes it in a block to be relegated to their own Bitcoin fork that other nodes won't accept.

This suggestion has made people think Drivechain is a sidechain solution backed by user-activated soft-forks for safety, which is very far from the truth. Drivechains must not and will not rely on this kind of soft-fork, although they are possible, as the coordination costs are too high and no one should ever expect these things to happen.
If, even with all the incentives against them (see above), miners still steal from a good sidechain, that will mean the failure of the Drivechain experiment. It will very likely mean the failure of the Bitcoin experiment too, as it will be proven that miners can coordinate to act maliciously over a prolonged period of time regardless of economic and social incentives, meaning they are probably in it just for attacking Bitcoin, backed by nation-states or something else, and therefore no Bitcoin transaction in the mainchain is to be expected to be safe ever again.
Why use this and not a full-blown trustless and open sidechain technology?
Because it is impossible.
If you ever heard someone saying "just use a sidechain", "do this in a sidechain" or anything like that, be aware that these people are either talking about "federated" sidechains (i.e., funds are kept in custody by a group of entities), or they are talking about Drivechain, or they are deluded and think it is possible to do sidechains in some other manner.
No, I mean a trustless 2-way peg with correctness of the withdrawals verified by the Bitcoin protocol!
That is not possible unless Bitcoin verifies all transactions that happen in all the sidechains, which would be akin to drastically increasing the blocksize and expanding the Bitcoin rules in tons of ways, i.e., a terrible idea that no one wants.
What about the Blockstream sidechains whitepaper?
Yes, that was a way to do it. The Drivechain hashrate escrow is a conceptually simpler way to achieve the same thing, with improved incentives, less junk in the chain, and more safety.
Isn't the hashrate escrow a very complex soft-fork?
Yes, but it is much simpler than SegWit. And, unlike SegWit, it doesn't force anything on users, i.e., it isn't a mandatory blocksize increase.
Why should we expect miners to care enough to participate in the voting mechanism?
Because it's in their own self-interest to do it, and it costs very little. Today, over half of the miners mine RSK. That is not blind merged mining: it's a very convoluted process that requires them to run an RSK full node. For Drivechain sidechains, an SPV node would be enough, or maybe just getting data from a block explorer API, so it is much, much simpler.
What if I still don't like Drivechain even after reading this?
That is the entire point! You don't have to like it or use it as long as you're fine with other people using it. The hashrate escrow special addresses will not impact you at all, validation cost is minimal, and you get the benefit of people who want to use Drivechain migrating to their own sidechains and freeing up space for you in the mainchain. See also the point above about infighting.
See also
-
@ b12b632c:d9e1ff79
2023-07-21 19:45:20I love testing every new self-hosted app and I can say that the Nostr "world" is really good regarding self-hosting stuff.
Today I tested a Nostr relay named Strfry.
Strfry is really simple to set up and supports a lot of Nostr NIPs.
Here is the list of what it is able to do:
- Supports most applicable NIPs: 1, 2, 4, 9, 11, 12, 15, 16, 20, 22, 28, 33, 40
- No external database required: All data is stored locally on the filesystem in LMDB
- Hot reloading of config file: No server restart needed for many config param changes
- Zero downtime restarts, for upgrading binary without impacting users
- Websocket compression: permessage-deflate with optional sliding window, when supported by clients
- Built-in support for real-time streaming (up/down/both) events from remote relays, and bulk import/export of events from/to jsonl files
- negentropy-based set reconciliation for efficient syncing with remote relays
Installation with docker compose (v2)
Spoiler: you need a computer with more than 1 (v)Core / 2GB of RAM to build the docker image locally. If not, the build below might crash your computer. You may need to use a prebuilt strfry docker image instead.
I assume you've read my first article on managing a domain with Nginx Proxy Manager, because I will use the NPM docker compose stack to publish the strfry Nostr relay. Without the initial NPM configuration done, it may not work as expected. I'll use the same docker-compose.yml file and folder.
Get back into the "npm-stack" folder:
cd npm-stack
Clone the strfry github repo locally:
git clone https://github.com/hoytech/strfry.git
Modify the docker-compose file to keep the strfry configuration data outside of the repo folder, to avoid mistakes during future upgrades (CTRL + X, S & ENTER to save and quit):
nano docker-compose.yml
You don't have to insert the Nginx Proxy Manager part; you should already have it in the file. If not, check here. You should only have to add the strfry part.
```
version: '3.8'
services:
  # should already be present in the docker-compose.yml
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      # These ports are in format <host>:<container>
      - '80:80' # Public HTTP Port
      - '443:443' # Public HTTPS Port
      - '81:81' # Admin Web Port
      # Add any other Stream port you want to expose
      # - '21:21' # FTP
    # Uncomment the next line if you uncomment anything in the section
    # environment:
      # Uncomment this if you want to change the location of
      # the SQLite DB file within the container
      # DB_SQLITE_FILE: "/data/database.sqlite"
      # Uncomment this if IPv6 is not enabled on your host
      # DISABLE_IPV6: 'true'
    volumes:
      - ./nginxproxymanager/data:/data
      - ./nginxproxymanager/letsencrypt:/etc/letsencrypt

  strfry-nostr-relay:
    container_name: strfry
    build: ./strfry
    volumes:
      - ./strfry-data/strfry.conf:/etc/strfry.conf
      - ./strfry-data/strfry-db:/app/strfry-db
    # ports: is commented out because NPM will access strfry through the
    # docker internal network; no need to expose the strfry port directly
    # to the internet
    # ports:
    #   - "7777:7777"
```
Before starting the container, we need to customize the strfry configuration file "strfry.conf". We'll copy the strfry configuration file into the "strfry-data" folder and modify it there with our own settings:
mkdir strfry-data && cp strfry/strfry.conf strfry-data/
Then modify the strfry.conf file with your own settings:
nano strfry-data/strfry.conf
You can modify any settings you need, but the basic ones are:
- `bind = "127.0.0.1"` --> `bind = "0.0.0.0"` --> otherwise NPM won't be able to contact the strfry service
- `name = "strfry default"` --> the name of your nostr relay
- `description = "This is a strfry instance."` --> your nostr relay description
- `pubkey = ""` --> your pubkey in hex format. You can use the Damus key tool to generate your hex key from your npub key: https://damus.io/key/
- `contact = ""` --> your email
```
relay {
    # Interface to listen on. Use 0.0.0.0 to listen on all interfaces (restart required)
    bind = "127.0.0.1"

    # Port to open for the nostr websocket protocol (restart required)
    port = 7777

    # Set OS-limit on maximum number of open files/sockets (if 0, don't attempt to set) (restart required)
    nofiles = 1000000

    # HTTP header that contains the client's real IP, before reverse proxying (ie x-real-ip) (MUST be all lower-case)
    realIpHeader = ""

    info {
        # NIP-11: Name of this server. Short/descriptive (< 30 characters)
        name = "strfry default"

        # NIP-11: Detailed information about relay, free-form
        description = "This is a strfry instance."

        # NIP-11: Administrative nostr pubkey, for contact purposes
        pubkey = ""

        # NIP-11: Alternative administrative contact (email, website, etc)
        contact = ""
    }
}
```
You can now start the strfry docker container:
docker compose up -d
This command will take a bit of time because it will build the strfry docker image locally before starting the container. If your VPS doesn't have a lot of (v)CPU/RAM, it could fail (nothing happening during the docker image build). My VPS has 1 vCore / 2GB of RAM and died a few seconds after the build began.
If that's the case, you can use a prebuilt strfry docker image available on the Docker hub: https://hub.docker.com/search?q=strfry&sort=updated_at&order=desc
Otherwise, you should see this:
```
user@vps:~/npm-stack$ docker compose up -d
[+] Building 202.4s (15/15) FINISHED
 => [internal] load build definition from Dockerfile                0.2s
 => => transferring dockerfile: 724B                                0.0s
 => [internal] load .dockerignore                                   0.3s
 => => transferring context: 2B                                     0.0s
 => [internal] load metadata for docker.io/library/ubuntu:jammy     0.0s
 => [build 1/7] FROM docker.io/library/ubuntu:jammy                 0.4s
 => [internal] load build context                                   0.9s
 => => transferring context: 825.64kB                               0.2s
 => [runner 2/4] WORKDIR /app                                       1.3s
 => [build 2/7] WORKDIR /build                                      1.5s
 => [runner 3/4] RUN apt update && apt install -y --no-install-recommends liblmdb0 libflatbuffers1 libsecp256k1-0 libb2-1 libzstd1 && rm -rf /var/lib/apt/lists/*  12.4s
 => [build 3/7] RUN apt update && apt install -y --no-install-recommends git g++ make pkg-config libtool ca-certificates libyaml-perl libtemplate-perl libregexp-grammars-perl libssl-dev zlib1g-dev l  55.5s
 => [build 4/7] COPY . .                                            0.9s
 => [build 5/7] RUN git submodule update --init                     2.6s
 => [build 6/7] RUN make setup-golpe                               10.8s
 => [build 7/7] RUN make -j4                                      126.8s
 => [runner 4/4] COPY --from=build /build/strfry strfry             1.3s
 => exporting to image                                              0.8s
 => => exporting layers                                             0.8s
 => => writing image sha256:1d346bf343e3bb63da2e4c70521a8350b35a02742dd52b12b131557e96ca7d05  0.0s
 => => naming to docker.io/library/docker-compose_strfry-nostr-relay  0.0s

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

[+] Running 02/02
 ⠿ Container strfry            Started    11.0s
 ⠿ Container npm-stack-app-1   Running
```
You can check that everything is OK with the strfry container by checking the container logs:
```
user@vps:~/npm-stack$ docker logs strfry
date       time         ( uptime  ) [ thread name/id ]                   v|
2023-07-21 19:26:58.514 (   0.039s) [main thread     ]INFO| arguments: /app/strfry relay
2023-07-21 19:26:58.514 (   0.039s) [main thread     ]INFO| Current dir: /app
2023-07-21 19:26:58.514 (   0.039s) [main thread     ]INFO| stderr verbosity: 0
2023-07-21 19:26:58.514 (   0.039s) [main thread     ]INFO| -----------------------------------
2023-07-21 19:26:58.514 (   0.039s) [main thread     ]INFO| CONFIG: Loading config from file: /etc/strfry.conf
2023-07-21 19:26:58.529 (   0.054s) [main thread     ]INFO| CONFIG: successfully installed
2023-07-21 19:26:58.533 (   0.058s) [Websocket       ]INFO| Started websocket server on 0.0.0.0:7777
```
Now we have to create the subdomain where your strfry Nostr relay will be accessible. You need to connect to your Nginx Proxy Manager admin UI and create a new proxy host with these settings:
"Details" tab (Websockets support is mandatory! You can replace "strfry" with whatever you like, for instance: mybeautifulrelay.yourdomain.tld):
"SSL" tab:
And click on "Save"
If everything is OK, when you go to https://strfry.yourdomain.tld you should see:
To verify that strfry is working properly, you can test it with the (really useful!) website https://nostr.watch. You have to insert your relay URL into the nostr.watch URL, like this: https://nostr.watch/relay/strfry.yourdomain.tld
You should see this:
If you see your server as online, readable and writable, you made it! You can add your strfry server to your preferred Nostr relays and begin publishing notes! 🎇
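If you prefer a command-line check, a small Python sketch like this (with your own relay URL in place of the placeholder) fetches the NIP-11 relay information document that strfry serves when asked with the right Accept header:

```python
# Quick NIP-11 sanity check; replace the URL with your own relay.
import json
import urllib.request

req = urllib.request.Request(
    "https://strfry.yourdomain.tld",
    headers={"Accept": "application/nostr+json"},  # NIP-11 content negotiation
)
with urllib.request.urlopen(req) as resp:
    info = json.load(resp)

print(info.get("name"), "-", info.get("description"))
print("supported NIPs:", info.get("supported_nips"))
```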
Future work:
Once done, strfry will work like a charm, but you may need to do more work to update strfry in the near future. I'm currently working on a bash script that will:
- Update the "strfry" folder,
- Back up the "strfry.conf" file,
- Download the latest "strfry.conf" from the strfry github repo,
- Inject the old configuration settings into the new "strfry.conf" file,
- Compose the stack again (rebuilding the image to get the latest code updates),
- etc.
Tell me if you need the script!
Voilààààà
See you soon in another Fractalized story!
-
@ b12b632c:d9e1ff79
2023-07-21 14:19:38Self-hosting web applications quickly brings the need to deal with the HTTPS protocol and SSL certificates. The days when web applications were published over port 80/TCP without any encryption are totally over. Now we have Let's Encrypt and other free certificate authorities that let us run web applications with, at least, the basic minimum security required.
The second part of web self-hosting that is really useful is web proxification.
It's possible to have multiple web applications accessible through HTTPS, but as we can't use the same port (spoiler: we can), we are forced to have ugly URLs like https://mybeautifudomain.tld:8443.
This is where Nginx Proxy Manager (NPM) comes to help us.
NPM, as a gateway, will listen on the HTTPS port 443 and, based on the subdomain you want to reach, redirect the network flow to the different declared backend ports. NPM will also request HTTPS certificates for you and let you know when a certificate expires. Really useful.
We'll now install NPM with docker compose (v2) and, you'll see, it's very easy.
You can find the official NPM setup instructions here.
But before that, we absolutely need to do something. You need to connect to the registrar where you bought your domain name and go into the DNS zone section. There you have to create an A record pointing to your VPS IP. That will allow NPM to request SSL certificates for your domain and subdomains.
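To double-check the record before going further, a quick Python sketch like this works (the domain and IP below are placeholders):

```python
# Verify that the A record resolves to your VPS IP before requesting certificates.
import socket

domain = "admin.yourdomain.tld"   # placeholder: your (sub)domain
expected_ip = "203.0.113.10"      # placeholder: your VPS public IP

resolved = socket.gethostbyname(domain)
print(f"{domain} -> {resolved}")
if resolved != expected_ip:
    print("DNS is not pointing to the VPS yet; Let's Encrypt will fail.")
```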
Create a new folder for the NPM docker stack:
mkdir npm-stack && cd npm-stack
Create a new docker-compose.yml:
nano docker-compose.yml
Paste this content into it (CTRL + X; Y & ENTER to save/quit):
```
version: '3.8'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      # These ports are in format <host>:<container>
      - '80:80' # Public HTTP Port
      - '443:443' # Public HTTPS Port
      - '81:81' # Admin Web Port
      # Add any other Stream port you want to expose
      # - '21:21' # FTP
    # Uncomment the next line if you uncomment anything in the section
    # environment:
      # Uncomment this if you want to change the location of
      # the SQLite DB file within the container
      # DB_SQLITE_FILE: "/data/database.sqlite"
      # Uncomment this if IPv6 is not enabled on your host
      # DISABLE_IPV6: 'true'
    volumes:
      - ./nginxproxymanager/data:/data
      - ./nginxproxymanager/letsencrypt:/etc/letsencrypt
```
You won't believe it, but that's it: the NPM docker compose configuration is done.
To start Nginx Proxy Manager with docker compose, you just have to run:
docker compose up -d
You'll see:
```
user@vps:~/tutorials/npm-stack$ docker compose up -d
[+] Running 2/2
 ✔ Network npm-stack_default  Created
 ✔ Container npm-stack-app-1  Started
```
You can check if the NPM container is started with this command:
docker ps
You'll see:
```
user@vps:~/tutorials/npm-stack$ docker ps
CONTAINER ID   IMAGE                             COMMAND   CREATED              STATUS              PORTS                                                                                  NAMES
7bc5ea8ac9c8   jc21/nginx-proxy-manager:latest   "/init"   About a minute ago   Up About a minute   0.0.0.0:80-81->80-81/tcp, :::80-81->80-81/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   npm-stack-app-1
```
If the command shows "Up X minutes" for npm-stack-app-1, you're good to go! You can access the NPM admin UI by going to http://YourIPAddress:81. You should see:
The default NPM login/password are: admin@example.com / changeme. If the login succeeds, you should see a popup asking you to edit your user by changing your email:
And your password:
Click on "Save" to finish the login. To verify that NPM is able to request SSL certificates for you, first create a subdomain for the NPM admin UI: click on "Hosts" and "Proxy Hosts":
Followed by "Add Proxy Host"
If you want to access the NPM admin UI at https://admin.yourdomain.tld, set all the parameters like this (I won't explain each parameter):
Details tab:
SSL tab:
And click on "Save".
NPM will request the SSL certificate "admin.yourdomain.tld" for you.
If you get an "Internal Error" message, it's probably because your domain's DNS zone is not configured with an A record pointing to your VPS IP.
Otherwise you should see (my domain is hidden):
Clicking on the "Source" URL link "admin.yourdomain.tld" will open a pop-up and, surprise, you should see the NPM admin UI with the URL "https://admin.yourdomain.tld"!
If yes, bravo, everything is OK! 🎇
You now know how to have a subdomain of your domain redirect to a containerized web app. In the next blog post, you'll see how to set up a Nostr relay with NPM ;)
Voilààààà
See you soon in another Fractalized story!
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28A response to Achim Warner's "Drivechain brings politics to miners" article
I mean this article: https://achimwarner.medium.com/thoughts-on-drivechain-i-miners-can-do-things-about-which-we-will-argue-whether-it-is-actually-a5c3c022dbd2
There are basically two claims here:
1. Some corporate interests might want to secure sidechains for themselves and thus they will bribe miners to have these activated
First, it's hard to imagine why they would want such a thing. Are they going to make a proprietary KYC chain only for their users? They could do that in a corporate way, or with a federation, like Facebook tried to do, and that would provide more value to their users than a cumbersome pseudo-decentralized system in which they don't even have powers to issue currency. Also, if Facebook couldn't get away with their federated shitcoin because the government was mad, who says the government won't be mad about a sidechain? And finally, why would Facebook want to give custody of their proprietary closed-garden Bitcoin-backed ecosystem coins to a random, open and always-changing set of miners?
But suppose they do succeed in making their sidechain, and it is so popular that it pays miners fees and people love it. Well, then why not? Let them have it. It's not going to hurt anyone more than a proprietary shitcoin would anyway. If Facebook really wants a closed ecosystem backed by Bitcoin, that probably means we are winning big.
2. Miners will be required to vote on the validity of debatable things
He cites the example of a PoS sidechain, an assassination market, a sidechain full of nazists, a sidechain deemed illegal by the US government and so on.
There is a simple solution to all of this: just kill these sidechains. Either miners can take the money from these to themselves, or they can just refuse to engage and freeze the coins there forever, or they can even give the coins to governments, if they want. It is an entirely good thing that evil sidechains or sidechains that use horrible technology that doesn't even let us know who owns each coin get annihilated. And it was the responsibility of people who put money in there to evaluate beforehand and know that PoS is not deterministic, for example.
About government censoring and wanting to steal money, or criminals using sidechains, I think the argument is very weak because these same things can happen today and may even be happening already: i.e., governments ordering mining pools to not mine such and such transactions from such and such people, or forcing them to reorg to steal money from criminals and whatnot. All this is expected to happen in normal Bitcoin. But both in normal Bitcoin and in Drivechain decentralization fixes that problem by making it so governments cannot catch all miners required to control the chain like that -- and in fact fixing that problem is the only reason we need decentralization.
-
@ b12b632c:d9e1ff79
2023-07-19 00:17:02Welcome to a new Fractalized episode of "it works but I can do better"!
Original blog post from : https://fractalized.ovh/use-ghost-blog-to-serve-your-nostr-nip05-and-lnurl/
A few days ago, I wanted to set my Ghost blog (this one) as the root domain for Fractalized instead of the old basic Nginx container that served my NIP05. I succeeded in doing it with Nginx Proxy Manager (I need to write a blog post because NPM is really awesome) but my NIP05 was down.
Having a bad NIP05 on Amethyst is now in the top tier list of my worst nightmares.
As a reminder, to have a valid NIP05 in your preferred Nostr client, you need a file named "nostr.json" inside a ".well-known" folder located at the web server root path (subdomain or domain). The URL should look like this: http://domain.tld/.well-known/nostr.json
PS: this doesn't work with Ghost, as explained later. If you are here for Ghost, skip this and go directly to 1).
You should have this info inside the nostr.json file (JSON format):
{ "names": { "YourUsername": "YourNPUBkey" } }
You can test it directly by going to the nostr.json URL; it should show you the JSON.
It was working like a charm on my previous Nginx and I needed to do the same on my new Ghost. I saw on Google that I needed to put the .well-known folder into the Ghost theme root folder (in my case I chose the Solo theme). I created the folder, put my nostr.json file inside it and crossed my fingers. Result: 404 error.
I had to search a lot, but it seems that Ghost forbids serving JSON files from its theme folders. Maybe for security reasons, I don't know. Nevertheless, Ghost provides a method to do it (but it requires more than copy/pasting some files into some folders) => routes
At the same time as configuring my NIP05, I also wanted to set up an LNURL so I could be zapped at pastagringo@fractalized.ovh (configured in my Amethyst profile). If you want more information on LNURL, it's very well explained here.
Because I want to use my Wallet of Satoshi sats wallet "behind" my LNURL, you'll have to carefully read EzoFox's blog post that explains how to retrieve your info from Alby or WoS.
1) Configuring Ghost routes
Add the new routes to routes.yaml (accessible from "Settings" > "Labs" > "Beta features" > "Routes"; you need to download the file, modify it and upload it again) to let Ghost know where to look for the nostr.json file.
Here is the content of my routes.yaml file:
```
routes:
  /.well-known/nostr.json/:
    template: _nostr-json
    content_type: application/json
  /.well-known/lnurlp/YourUsername/:
    template: _lnurlp-YourUsername
    content_type: application/json

collections:
  /:
    permalink: /{slug}/
    template: index

taxonomies:
  tag: /tag/{slug}/
  author: /author/{slug}/
```
=> template: the name of the file that Ghost will look for locally
=> content_type: the file type (JSON is required for the NIP05)
The first route is about my NIP05 and the second route is about my LNURL.
2) Creating Ghost route .hbs files
To let Ghost serve our JSON content, I needed to create the two .hbs files referenced in the routes.yaml file. These files need to be located in the root directory of the Ghost theme in use. For me, it was Solo.
Here is the content of both files:
"_nostr-json.hbs"
{ "names": { "YourUsername": "YourNPUBkey" } }
"_lnurlp-pastagringo.hbs" (in your case, _lnurlp-YourUsername.hbs)
{ "callback":"https://livingroomofsatoshi.com/api/v1/lnurl/payreq/XXXXX", "maxSendable":100000000000, "minSendable":1000, "metadata":"[[\"text/plain\",\"Pay to Wallet of Satoshi user: burlyring39\"],[\"text/identifier\",\"YourUsername@walletofsatoshi.com\"]]", "commentAllowed":32, "tag":"payRequest", "allowsNostr":true, "nostrPubkey":"YourNPUBkey" }
After doing that, you're almost finished; you just have to restart your Ghost blog. As I run Ghost in a docker container, I just had to restart the container.
To verify that everything is working, go to the route URLs created earlier:
https://YourDomain.tld/.well-known/nostr.json
and
https://YourDomain.tld/.well-known/lnurlp/YourUsername
Both URLs should display the JSON generated from the .hbs files created earlier.
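A quick Python sketch (the URLs and username are placeholders) can confirm that both routes return valid JSON:

```python
# Check that Ghost serves both JSON documents correctly.
import json
import urllib.request

for url in (
    "https://YourDomain.tld/.well-known/nostr.json",
    "https://YourDomain.tld/.well-known/lnurlp/YourUsername",
):
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)  # raises if the body is not valid JSON
    print(url, "->", list(data.keys()))
```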
If that's the case, you can add your NIP05 and LNURL to your Nostr client profile; you should see a valid tick for your domain and be zappable at your LNURL (with zaps going to your WoS or Alby backend wallet).
Voilààààà
See you soon in another Fractalized story!
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Violence is a form of communication
Violence is a form of communication: a serial killer, a father who beats his son, a brawl between rival fan groups, a torture session, a war, a crime of passion, a bar fight. In all of these one can see a message that is trying to be transmitted, that was not understood by the other side, that could not be expressed, and, when the transmitter of the message felt he could not be fully understood in words, he used this other form of communication.
When an insult in a bar degenerates into a fight, for example, what there is is clearly an attempt at an even greater insult on the part of the one who started it; the fight would not have happened if he had managed to express it in words so clear that the whole audience of drunks would understand, something beyond the limits of language in that case: the punch with the right hand was more efficient. It could also be an argumentative defense: "I am not a coward like you are saying" -- but the bar would not believe that loose sentence, and the communication would not have achieved the desired success.
The explanation for the fact that violence has decreased as civilization progressed lies in the improvement in the efficiency of human communication: writing, the refinement of linguistic expression, the increase in the reach of the spoken word with radio, television and the internet.
If that efficiency diminishes, because there is no longer agreement about the meaning of words, because people do not care whether what they write is good or not, or because they are incapable of understanding anything, violence should increase proportionally.
-
@ b12b632c:d9e1ff79
2023-07-17 22:09:53Today I created a new item (NIP05) to sell in my BTCPay Server store and everything was OK. The only thing that annoyed me a bit is that when a user pays the invoice to get his NIP05 on my paid Nostr relay, I need to warn him to put his npub into the email field required by BTCPay Server before paying. That was not so user friendly. Not at all.
So I checked whether BTCPay Server is able to ask for custom inputs before the invoice is paid and, to my big surprise, it's possible! => release version v1.8
They even use as an example exactly my Nostr need: getting the user's npub.
So it's possible, yes, but only from version 1.8.0 onwards. I immediately checked my BTCPay Server version on my Umbrel node and... damn... I'm on 1.7.2. I need to update my local BTCPay Server through Umbrel to access the feature I wanted so much. I checked the local App Store but couldn't see any "Update" button or anything like it. I can see that the installed version is indeed v1.7.2, but when I check the latest version available in the online Umbrel App Store, I see v1.9.2. Damn again.
I searched the Umbrel Community forum and the Telegram group, googled, asked the question on Nostr, and didn't find any clue.
I finally found the answer in the Umbrel App Framework documentation:
sudo ./scripts/repo checkout https://github.com/getumbrel/umbrel-apps.git
So I did, and it updated my local Umbrel App Store to the latest version available, along with my BTCPay Server.
After a few seconds, the local BTCPay Server URL was available and I was allowed to log in => everything is OK! \o/
See you soon in another Fractalized story!
-
@ 3bf0c63f:aefa459d
2024-01-15 11:15:06Stupid anglicisms in contemporary Portuguese
Words and expressions that nobody should use, because they don't have the meaning people think they have; they are mere Portuguese renderings of English words that, through nuances of history, have a slightly different meaning in English.
Each error is accompanied by a suggestion of how to correct it.
Words that exist in Portuguese with a different meaning
- submissão (de trabalhos): envio, apresentação
- disrupção: perturbação
- assumir: considerar, pressupor, presumir
- realizar: perceber
- endereçar: tratar de
- suporte (ao cliente): atendimento
- suportar (uma idéia, um projeto): apoiar, financiar
- suportar (uma função, recurso, característica): oferecer, ser compatível com
- literacia: instrução, alfabetização
- convoluto: complicado.
- acurácia: precisão.
- resiliência: resistência.
Unnecessary aportuguesamentos (Portuguese renderings of English words)
- estartar: iniciar, começar
- treidar: negociar, especular
Expressions
- "não é sobre...": "não se trata de..."
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28On HTLCs and arbiters
This is another attempt at conveying the same information that should be in Lightning and its fake HTLCs. It assumes you know everything about Lightning and will just highlight one point. This is also valid for PTLCs.
The protocol says HTLCs are trimmed (i.e., not actually added to the commitment transaction) when the cost of redeeming them in fees would be greater than their actual value.
Although this is often dismissed as a non-important fact (people will say "it's trusted for small payments, no big deal"), I think it is indeed very important, for 3 reasons:
- Lightning absolutely relies on HTLCs actually existing because the payment proof requires them. The entire security of each payment comes from the fact that the payer has a preimage that comes from the payee. Without that, the state of the payment becomes an unsolvable mystery. The inexistence of an HTLC breaks the atomicity between the payment going through and the payer receiving a proof.
- Bitcoin fees are expected to grow with time (arguably the reason Lightning exists in the first place).
- MPP makes payment sizes shrink, therefore more and more of Lightning payments are to be trimmed. As I write this, the mempool is clear and still payments smaller than about 5000sat are being trimmed. Two weeks ago the limit was at 18000sat, which is already below the minimum most MPP splitting algorithms will allow.
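For concreteness, the trimming rule is roughly the following check from BOLT #3, sketched here with pre-anchor constants; the exact weights and dust limits vary with channel parameters:

```python
# A sketch of the BOLT #3 trimming rule for offered HTLCs.
# Weights/limits are pre-anchor values and vary per channel.
HTLC_TIMEOUT_WEIGHT = 663   # weight units of the HTLC-timeout transaction

def is_trimmed(htlc_sats, feerate_per_kw, dust_limit_sats=546):
    htlc_timeout_fee = feerate_per_kw * HTLC_TIMEOUT_WEIGHT // 1000
    # If the output left after paying the redemption fee would be below
    # dust, the HTLC is not added to the commitment transaction at all.
    return htlc_sats < dust_limit_sats + htlc_timeout_fee

print(is_trimmed(3_000, 5_000))   # True: trimmed at 5000 sat/kw
print(is_trimmed(30_000, 5_000))  # False: a real HTLC output exists
```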
Therefore I think it is important that we come up with a different way of ensuring payment proofs are being passed around in the case HTLCs are trimmed.
Channel closures
Worse than not having HTLCs that can be redeemed is the fact that in the current Lightning implementations channels will be closed by the peer once an HTLC timeout is reached, either to fulfill an HTLC for which that peer has a preimage or to redeem back that expired HTLCs the other party hasn't fulfilled.
To everybody's surprise, nodes will do this even when the HTLCs in question were trimmed and therefore cannot be redeemed at all. It's very important that nodes stop doing that, because it makes no economic sense at all.
However, that is not so simple, because once you decide you're not going to close the channel, what is the next step? Do you wait until the other peer tries to fulfill an expired HTLC and tell them you won't agree and that you must cancel that instead? That could work sometimes if they're honest (and they have no incentive to not be, in this case). What if they say they tried to fulfill it before but you were offline? Now you're confused, you don't know if you were offline or they were offline, or if they are trying to trick you. Then unsolvable issues start to emerge.
Arbiters
One simple idea is to use trusted arbiters for all trimmed HTLC issues.
This idea solves both the protocol issue of getting the preimage to the payer once it is released by the payee -- and what to do with the channels once a trimmed HTLC expires.
A simple design would be to have each node hardcode a set of trusted other nodes that can serve as arbiters. Once a channel is opened between two nodes they choose one node from both lists to serve as their mutual arbiter for that channel.
Then whenever one node tries to fulfill an HTLC but the other peer is unresponsive, they can send the preimage to the arbiter instead. The arbiter will then try to contact the unresponsive peer. If it succeeds, then done, the HTLC was fulfilled offchain. If it fails then it can keep trying until the HTLC timeout. And then if the other node comes back later they can eat the loss. The arbiter will ensure they know they are the ones who must eat the loss in this case. If they don't agree to eat the loss, the first peer may then close the channel and blacklist the other peer. If the other peer believes that both the first peer and the arbiter are dishonest they can remove that arbiter from their list of trusted arbiters.
The same happens in the opposite case: if a peer doesn't get a preimage they can notify the arbiter they haven't received anything. The arbiter may try to ask the other peer for the preimage and, if that fails, settle the dispute for the side of that first peer, which can proceed to fail the HTLC it has with someone else on that route.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Problems with Russell Kirk
The central idea of Russell Kirk's "politics of prudence[^1]" seems very correct to me, although it was formulated worse in his enormous book than in a short sentence by the Joan-of-Arc-ist Lucas Souza: "conservatism is important because there are a lot of people with wrong ideas out there, and we may not know how to tell them apart."
However, there are some problems that need to be clarified, or better explained, and that prevent me from seeing his arguments as a final refutation of my already so humble (though fierce) anarchism. They are:
I. I perceive something wrong, I'm not sure exactly where, between the claim that every ideology is bad, or that "all ideologies cause confusion[^2]", and the conservative proposal of "conserving the world of order that we have inherited, albeit in an imperfect state, from our ancestors[^3]". Now, without needing to fall back on examples like that of the English Conservative Party -- which conserved English politics wherever it happened to be, alternating in government with the Labour Party, which moved it each time a little further to the left --, embedded in that phrase there is, perhaps, the idea, at once clearly and fiercely combated by conservatives themselves, that human history is a history of linear progress toward a better situation.
Does wanting to conserve the world of order we inherited mean also conserving the various errors that may have been committed by our most recent ancestors, and conserving them all the same, accusing each and every attempt to propose solutions to those errors of being ideology? Or is conserving the world of order choosing a particular period taken as the peak of human history and trying to restore it in our own time? Wouldn't that be ideology?
Or, still, is conserving the world of order selecting, among the various periods of the past, some pieces the conservative considers excellent in each society, making from them a blend of an ideal society based on the past, and then trying to implement it? Who could say which parts are the right ones?
II. On the question of what keeps civil society cohesive, Russell Kirk, opposing it to the libertarian position that the nexus of society is self-interest, declares the conservative position to be that "society is a community of souls, joining the dead, the living, and those yet to be born, and that it harmonizes through what Aristotle called friendship and Christians call charity or love of neighbor."
This is a very correct position, but it seems to me to be in contradiction with the defense of the State he makes on the same page and on the next. What seems wrong to me is that society cannot be, at the same time, a "community based on love of neighbor" and a community that "requires not only that the passions of individuals be subjugated, but that, even in the people and in the social body, as well as in individuals, the inclinations of men must often be frustrated, their will controlled and their passions subdued", and, worse, that "this can only be done by a power outside of them".
From this we can conclude that, just as Kirk defines the libertarian position as the one holding that self-interest keeps civil society cohesive, the conservative position would then be that this cohesion comes only from the State, and not from any bond between the living and the dead, or from love of neighbor. Since, without the State, he says, citing Thomas Hobbes, the condition of man is "solitary, poor, nasty, brutish, and short"?
[^1]: this is the name of the book and also another name he gives to conservatism itself (p. 99).
[^2]: p. 101
[^3]: p. 102
-
@ 3bf0c63f:aefa459d
2024-03-06 13:04:06home
"Do you see? Do you see the story? Do you see anything? It seems to me I am trying to tell you a dream -- making a vain attempt, because no account of a dream can convey the dream-sensation, that mixture of absurdity, surprise and bewilderment in a tremor of struggling revolt, that notion of being captured by the incomprehensible which is of the very essence of dreams..."
He was silent for a few moments.
"... No, it is impossible; it is impossible to convey the living sensation of any given epoch of one's existence -- that which makes its truth, its meaning, its subtle and poignant essence. It is impossible. We live, as we dream -- alone..."
- Books mentioned by Olavo de Carvalho
- Olavo de Carvalho's old homepage
- Bitcoin explained in a correct and intelligible way
- Complaints
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28rel
A command line utility to create and manage personal graphs, then write them to dot and make images with graphviz.
It manages a bunch of YAML files, one for each entity in the graph. Each file lists the incoming and outgoing links it has (it could have listed only the outgoing ones, now that I'm thinking about it).
Each run of the tool lets you select from existing nodes or add new ones, then generate a single link type from one to one, one to many, many to one or many to many -- and it updates the YAML files accordingly.
It also includes a command that generates graphs with graphviz, and it can accept a template file that lets you customize the `dot` that is generated and thus the graphviz graph.
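A minimal sketch of how such a tool can work; the folder layout and the "links_out" field name here are guesses for illustration, not the tool's actual format:

```python
# Walk a folder of per-entity YAML files and print a graphviz dot digraph.
import glob
import os
import yaml  # PyYAML

def emit_dot(folder="graph"):
    print("digraph g {")
    for path in sorted(glob.glob(os.path.join(folder, "*.yaml"))):
        node = os.path.splitext(os.path.basename(path))[0]
        with open(path) as f:
            data = yaml.safe_load(f) or {}
        # "links_out" is a hypothetical field listing outgoing link targets
        for target in data.get("links_out", []):
            print(f'  "{node}" -> "{target}";')
    print("}")

emit_dot()
```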
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28GraphQL vs REST
Today I saw this: https://github.com/stickfigure/blog/wiki/How-to-(and-how-not-to)-design-REST-APIs
And it reminded me why GraphQL is so much better.
It has also reminded me why HTTP is so confusing and awful as a protocol, especially as a protocol for structured data APIs, with all its status codes and headers and bodies and querystrings and content-types -- but let's not talk about that for now.
People complain about GraphQL being great for frontend developers and bad for backend developers, but I don't know who these people are that apparently love reading guides like the one above on how to properly construct ad-hoc path routers, decide how to properly build the JSON, what to include and in which circumstance, and what status codes and headers to use, all without having any idea of what the frontend or the API consumer will want to do with their data.
It is a much less stressful environment, one in which we can just actually perform the task and fit the data into a preexistent schema, with types and a structure that we don't have to decide again and again while anticipating, with very incomplete knowledge, the usage of an extraneous person -- i.e., an environment with GraphQL, or something like GraphQL.
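For illustration, this is what consuming such a preexistent schema can look like; the endpoint, fields and query below are hypothetical:

```python
# Ask a GraphQL endpoint for exactly the fields needed, nothing more.
import json
import urllib.request

query = """
{
  user(id: "42") {
    name
    posts(first: 3) { title }
  }
}
"""

req = urllib.request.Request(
    "https://api.example.com/graphql",          # hypothetical endpoint
    data=json.dumps({"query": query}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["data"])
```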
By the way, I know there are some people that say that these HTTP JSON APIs are not the real REST, but that is irrelevant for now.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28nostr - Notes and Other Stuff Transmitted by Relays
The simplest open protocol that is able to create a censorship-resistant global "social" network once and for all.
It doesn't rely on any trusted central server, hence it is resilient; it is based on cryptographic keys and signatures, so it is tamperproof; it does not rely on P2P techniques, therefore it works.
Very short summary of how it works, if you don't plan to read anything else:
Everybody runs a client. It can be a native client, a web client, etc. To publish something, you write a post, sign it with your key and send it to multiple relays (servers hosted by someone else, or yourself). To get updates from other people, you ask multiple relays if they know anything about these other people. Anyone can run a relay. A relay is very simple and dumb. It does nothing besides accepting posts from some people and forwarding to others. Relays don't have to be trusted. Signatures are verified on the client side.
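In code, a post is just a small signed JSON object. Here is a sketch of how a client could build one following NIP-01; the BIP-340 schnorr signing step is left as a placeholder, since it needs an external library:

```python
import hashlib
import json
import time

def make_event(pubkey_hex, kind, content, tags=None):
    tags = tags or []
    created_at = int(time.time())
    # NIP-01: the event id is the sha256 of this exact serialization
    serialized = json.dumps(
        [0, pubkey_hex, created_at, kind, tags, content],
        separators=(",", ":"), ensure_ascii=False)
    event_id = hashlib.sha256(serialized.encode()).hexdigest()
    return {
        "id": event_id,
        "pubkey": pubkey_hex,
        "created_at": created_at,
        "kind": kind,          # e.g. 1 for a short text note
        "tags": tags,
        "content": content,
        "sig": "<BIP-340 schnorr signature of the id goes here>",
    }
```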
This is needed because other solutions are broken:
The problem with Twitter
- Twitter has ads;
- Twitter uses bizarre techniques to keep you addicted;
- Twitter doesn't show an actual historical feed from people you follow;
- Twitter bans people;
- Twitter shadowbans people.
- Twitter has a lot of spam.
The problem with Mastodon and similar programs
- User identities are attached to domain names controlled by third-parties;
- Server owners can ban you, just like Twitter; Server owners can also block other servers;
- Migration between servers is an afterthought and can only be accomplished if servers cooperate. It doesn't work in an adversarial environment (all followers are lost);
- There are no clear incentives to run servers, therefore they tend to be run by enthusiasts and people who want to have their name attached to a cool domain. Then, users are subject to the despotism of a single person, which is often worse than that of a big company like Twitter, and they can't migrate out;
- Since servers tend to be run amateurishly, they are often abandoned after a while — which is effectively the same as banning everybody;
- It doesn't make sense to have a ton of servers if updates from every server will have to be painfully pushed (and saved!) to a ton of other servers. This point is exacerbated by the fact that servers tend to exist in huge numbers, therefore more data has to be passed to more places more often;
- For the specific example of video sharing, ActivityPub enthusiasts realized it would be completely impossible to transmit video from server to server the way text notes are, so they decided to keep the video hosted only from the single instance where it was posted to, which is similar to the Nostr approach.
The problem with SSB (Secure Scuttlebutt)
- It doesn't have many problems. I think it's great. In fact, I was going to use it as a basis for this, but
- its protocol is too complicated because it wasn't thought of as an open protocol at all. It was just written in JavaScript, probably in a quick way, to solve a specific problem, and it grew from that, therefore it has weird and unnecessary quirks like signing a JSON string which must strictly follow the rules of ECMA-262 6th Edition;
- It insists on having a chain of updates from a single user, which feels unnecessary to me and something that adds bloat and rigidity to the thing — each server/user needs to store all the chain of posts to be sure the new one is valid. Why? (Maybe they have a good reason);
- It is not as simple as Nostr, as it was primarily made for P2P syncing, with "pubs" being an afterthought;
- Still, it may be worth considering using SSB instead of this custom protocol and just adapting it to the client-relay server model, because reusing a standard is always better than trying to get people in a new one.
The problem with other solutions that require everybody to run their own server
- They require everybody to run their own server;
- Sometimes people can still be censored in these because domain names can be censored.
How does Nostr work?
- There are two components: clients and relays. Each user runs a client. Anyone can run a relay.
- Every user is identified by a public key. Every post is signed. Every client validates these signatures.
- Clients fetch data from relays of their choice and publish data to other relays of their choice. A relay doesn't talk to another relay, only directly to users.
- For example, to "follow" someone a user just instructs their client to query the relays it knows for posts from that public key.
- On startup, a client queries data from all relays it knows for all users it follows (for example, all updates from the last day), then displays that data to the user chronologically.
- A "post" can contain any kind of structured data, but the most used ones are going to find their way into the standard so all clients and relays can handle them seamlessly.
How does it solve the problems the networks above can't?
- Users getting banned and servers being closed
- A relay can block a user from publishing anything there, but that has no effect on them as they can still publish to other relays. Since users are identified by a public key, they don't lose their identities and their follower base when they get banned.
- Instead of requiring users to manually type new relay addresses (although this should also be supported), whenever someone you're following posts a server recommendation, the client should automatically add that to the list of relays it will query.
- If someone is using a relay to publish their data but wants to migrate to another one, they can publish a server recommendation to that previous relay and go;
- If someone gets banned from many relays such that they can't get their server recommendations broadcasted, they may still let some close friends know through other means with which relay they are publishing now. Then, these close friends can publish server recommendations to that new server, and slowly, the old follower base of the banned user will begin finding their posts again from the new relay.
-
All of the above is valid too for when a relay ceases its operations.
-
Censorship-resistance
- Each user can publish their updates to any number of relays.
-
A relay can charge a fee (the negotiation of that fee is outside of the protocol for now) from users to publish there, which ensures censorship-resistance (there will always be some Russian server willing to take your money in exchange for serving your posts).
-
Spam
-
If spam is a concern for a relay, it can require payment for publication or some other form of authentication, such as an email address or phone, and associate these internally with a pubkey that then gets to publish to that relay — or other anti-spam techniques, like hashcash or captchas. If a relay is being used as a spam vector, it can easily be unlisted by clients, which can continue to fetch updates from other relays.
- Data storage
- For the network to stay healthy, there is no need for hundreds of active relays. In fact, it can work just fine with just a handful, given that new relays can be created and spread through the network easily in case the existing relays start misbehaving. Therefore, the amount of data storage required is, in general, much lower than for Mastodon or similar software.
- Or consider a different outcome: one in which there exist hundreds of niche relays run by amateurs, each relaying updates from a small group of users. The architecture scales just as well: data is sent from users to a single server, and from that server directly to the users who will consume it. It doesn't have to be stored by anyone else. In this situation, it is not a big burden for any single server to process updates from others, and having amateur servers is not a problem.
- Video and other heavy content
- It's easy for a relay to reject large content, or to charge for accepting and hosting large content. When information and incentives are clear, it's easy for market forces to solve the problem.
- Techniques to trick the user
- Each client can decide how to best show posts to users, so there is always the option of just consuming what you want in the manner you want — from using an AI to decide the order of the updates you'll see to just reading them in chronological order.
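Here is a minimal sketch of the hashcash option mentioned in the spam item above, assuming a relay that requires event ids (hashes of the serialized event) to start with some number of zero bits; Nostr later standardized a scheme along these lines as NIP-13, but the serialization and difficulty here are just illustrative:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"math/bits"
)

// leadingZeroBits counts how many leading zero bits a hash has;
// that count is the "work" the relay demands before accepting an event.
func leadingZeroBits(h [32]byte) int {
	n := 0
	for _, b := range h {
		if b == 0 {
			n += 8
			continue
		}
		n += bits.LeadingZeros8(b)
		break
	}
	return n
}

func main() {
	const difficulty = 16 // bits the relay requires (made up for this sketch)

	// a client would increment a nonce inside the serialized event until
	// the hash clears the bar; the relay only hashes once to verify
	serialized := []byte(`[0,"<pubkey>",1700000000,1,[["nonce","0"]],"hello"]`)
	id := sha256.Sum256(serialized)
	fmt.Printf("id=%x ok=%v\n", id, leadingZeroBits(id) >= difficulty)
}
```

The asymmetry is the point: the client may have to try millions of nonces, while the relay verifies the proof with a single hash.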
FAQ
- This is very simple. Why hasn't anyone done it before?
I don't know, but I imagine it has to do with the fact that people making social networks are either companies wanting to make money or P2P activists who want to make a thing completely without servers. They both fail to see the specific mix of both worlds that Nostr uses.
- How do I find people to follow?
First, you must know them and get their public key somehow, either by asking or by seeing it referenced somewhere. Once you're inside a Nostr social network you'll be able to see them interacting with other people and then you can also start following and interacting with these others.
- How do I find relays? What happens if I'm not connected to the same relays someone else is?
You won't be able to communicate with that person. But there are hints in events that can be used so that your client software (or you, manually) knows how to connect to the other person's relay and interact with them. There are other ideas on how to solve this in the future, but we can't ever promise perfect reachability; no protocol can.
- Can I know how many people are following me?
No, but you can get some estimates if relays cooperate in an extra-protocol way.
- What incentive is there for people to run relays?
The question is misleading. It assumes that relays are free dumb pipes that exist such that people can move data around through them. In this case yes, the incentives would not exist. This in fact could be said of DHT nodes in all other p2p network stacks: what incentive is there for people to run DHT nodes?
- Nostr enables you to move between relays or use multiple relays, but if these relays are just on AWS or Azure, what's the difference?
There are literally thousands of VPS providers scattered all around the globe today; there is not only AWS or Azure. AWS and Azure are exactly the providers used by single centralized service providers that need a lot of scale, and even then not just these two. For smaller relay servers any VPS will do the job very well.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
LessPass remoteStorage
LessPass is a nice idea: a password manager without any state. Just remember one master password and you can generate a different one for every site using the power of hashes.
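A minimal sketch of that stateless idea in Go (this is not LessPass's actual algorithm, which uses PBKDF2 and maps the digest onto a configurable character set; the names and parameters here are illustrative):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// sitePassword derives a deterministic password from the master password
// and the site: same inputs, same output, so nothing needs to be stored.
func sitePassword(master, site, login string, counter int) string {
	mac := hmac.New(sha256.New, []byte(master))
	fmt.Fprintf(mac, "%s:%s:%d", site, login, counter)
	digest := mac.Sum(nil)
	// truncate to 16 characters; real implementations map the digest onto
	// the allowed character set instead of using base64
	return base64.StdEncoding.EncodeToString(digest)[:16]
}

func main() {
	fmt.Println(sitePassword("correct horse battery staple", "example.com", "alice", 1))
}
```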
But it has a very bad issue: some sites require just numbers, others have minimum or maximum character limits, some require non-letter characters, some require uppercase characters, others forbid these, and so on.
The solution: allow the user to specify parameters when generating the password, so a generated password can fit every service.
The problem with the solution: it creates state. Now you must remember what parameters you used when generating a password for each site.
This was a way to store these settings on a remoteStorage bucket. Since it isn't confidential information in any way, that wasn't a problem, and I thought it was a good fit for remoteStorage.
Some time later I realized it would maybe be better to have a centralized repository hosting all the weird password requirements each domain forces on its users, and let LessPass use data from that central place when generating a password. Still stateful, not ideal, not very far from a centralized password manager, but still requiring less trust and fewer cryptographic assumptions.
- https://github.com/fiatjaf/lesspass-remotestorage
- https://addons.mozilla.org/firefox/addon/lesspass-remotestorage/
- https://chrome.google.com/webstore/detail/lesspass-remotestorage/aogdpopejodechblppdkpiimchbmdcmc
- https://lesspass.alhur.es/
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
idea: Custom multi-use database app
Since 2015 I have had this idea of making one app that could be repurposed into a full-fledged app for all kinds of uses, like powering small-business accounting and so on. Hackable and open like an Excel file, but more efficient, without the hassle of making tables, and using ids and indexes under the hood so different kinds of things can be related together in various ways.
It is not a concrete thing, just a generic idea that has taken multiple forms over the years and may take others in the future. I've made quite a few attempts at implementing it, but never finished any.
I used to refer to it as a "multidimensional spreadsheet".
Can also be related to DabbleDB.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
início
"Do you see? Do you see the story? Do you see anything? It seems to me I am trying to tell you a dream -- making a vain attempt, because no relation of a dream can convey the dream-sensation, that mixture of absurdity, surprise and bewilderment in a tremor of struggling revolt, that notion of being captured by the incomprehensible which is of the very essence of dreams..."
He was silent for a while.
"... No, it is impossible; it is impossible to convey the live sensation of any given epoch of one's existence -- that which makes its truth, its meaning, its subtle and penetrating essence. It is impossible. We live, as we dream -- alone..."
- Books mentioned by Olavo de Carvalho
- Olavo de Carvalho's old homepage
- Bitcoin explained in a correct and intelligible way
- Complaints
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
The logical structure of the textbook
All textbooks and courses present their contents based on a prior logical organization, a scheme of all the content they deem relevant, everything very neatly arranged into topics and subtopics according to the logical order that most closely approximates the natural order of things. Think of the table of contents of a manual or textbook.
My experience is that this method serves very well for nobody to understand anything. The perfect logical organization of a field of knowledge is the final result of a study, not its starting point. The people who write these manuals and teach these courses, even when they know what they are talking about (an apparently rare occurrence), do so from their own point of view, attained after a lifetime of dedication to the subject (or else by copying other manuals and textbooks, which I would guess is the most common method).
For the novice, the best way to understand something is through immersion in micro-topics, without much notion of that topic's position in the general hierarchy of the science.
- Revista Educativa, an example of how to teach children nothing.
- Zettelkasten, order emerging from chaos, instead of themes fitting into a preexisting order.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
SummaDB
This was a hierarchical database server similar to the original Firebase. Records were stored on a LevelDB at different paths, like:

```
/fruits/banana/color
:yellow
/fruits/banana/flavor
:sweet
```

And they could be queried by path too, using HTTP. A call to `http://hostname:port/fruits/banana`, for example, would return a JSON document like

```json
{ "color": "yellow", "flavor": "sweet" }
```

While a call to `/fruits` would return

```json
{ "banana": { "color": "yellow", "flavor": "sweet" } }
```

`POST`, `PUT` and `PATCH` requests also worked.

In some cases the values would be under a special `"_val"` property to disambiguate them from paths. (I may be missing some other details that I forgot.)

GraphQL was also supported as a query language, so a query like

```graphql
query { fruits { banana { color } } }
```

would return `{"fruits": {"banana": {"color": "yellow"}}}`.

SummulaDB

SummulaDB was a browser/JavaScript build of SummaDB. It ran the same Go code compiled with GopherJS, using PouchDB as the storage backend, if I remember correctly.

It had replication between browser and server built in, and one could replicate just subtrees of the main tree, so you could have stuff like this in the server:

```json
{ "users": { "bob": {}, "alice": {} } }
```

And then only allow Bob to replicate `/users/bob` and Alice to replicate `/users/alice`. I am sure the required auth stuff was also built in.

There was also a PouchDB plugin to make this process smoother and data access more intuitive (it would hide the `_val` stuff and allow properties to be accessed directly; today I wouldn't waste time working on these hidden magic things).

The computed properties complexity

The next step, which I never managed to get fully working and which caused me to give up because of the complexity, was the ability to automatically and dynamically compute materialized properties based on data in the tree.

The idea was partly inspired by CouchDB computed views and how limited they were. I wanted something super powerful: given

```json
{
  "matches": {
    "1": { "team1": "A", "team2": "B", "score": "2x1", "date": "2020-01-02" },
    "2": { "team1": "D", "team2": "C", "score": "3x2", "date": "2020-01-07" }
  }
}
```

one should be able to add a computed property at `/matches/standings` that computed the standings of all teams after all matches, for example.

I tried to complete this in multiple ways, but they all added much more complexity than I could handle. Maybe it would have worked better in a more flexible, powerful, functional language, or if I had more time and patience, or more people.
Screenshots
This is just a very simple, unfinished admin-frontend view of the hierarchical dataset.
- https://github.com/fiatjaf/summadb
- https://github.com/fiatjaf/summuladb
- https://github.com/fiatjaf/pouch-summa
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Zettelkasten
https://writingcooperative.com/zettelkasten-how-one-german-scholar-was-so-freakishly-productive-997e4e0ca125 (a somewhat stupid article, but useful).
This incredible technique of saving notes without categories, without folders, without a predefined hierarchy, just making references from one note to another and supposedly letting an order (or heterarchy, as they put it) emerge from the chaos, seems to be what was missing for me to manage to write down my thoughts and ideas in a decent way. We shall see.
Oh, and I am going to use that `neuron` thing that also generates websites from the notes(?); I think it will be good.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Boardthreads
This was a very badly done service for turning a Trello list into a helpdesk UI.
Surprisingly, it had more paying users than Websites For Trello, which I was working on simultaneously and dedicating much more time to.
The Neo4j database I used for it was a very poor choice, and probably the cause of all the bugs.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28On "zk-rollups" applied to Bitcoin
ZK rollups make no sense in Bitcoin because there is no "cheap calldata". All data is already ~~cheap~~ expensive calldata.
There could be an onchain zk verification that allows succinct signatures maybe, but never a rollup.
What happens is: you can have one UTXO that contains multiple balances on it, and in each transaction you recreate that UTXO while altering its state, using a zk proof to compress all internal transactions that took place.
The blockchain must be aware of all these new things, so it is in no way "L2".
And you must have an entity responsible for that UTXO and for conjuring the state changes and zk proofs.
But on Bitcoin you also must keep the data necessary to rebuild the proofs somewhere else, and I'm not sure how the third party responsible for that UTXO can ensure that happens.
I think such a construct is similar to a credit card corporation: one central party upon which everybody depends, zero interoperability with external entities, and every vendor having to hold an account with each credit card company to be able to charge customers. So it is not clear that such a thing is more desirable than solutions that are truly open and interoperable, like Lightning, which may have its defects but at least fosters a much better environment, bringing together different conflicting parties, custodians, anyone.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
The problem with ION
ION is a DID method based on a thing called "Sidetree".
I can't say for sure what the problem with ION is, because I don't understand its design, even though I have read all I could and asked everybody I knew. All available information only touches on its high-level aspects (and of course its amazing wonders) and no one has ever bothered to explain the details. I've also asked the main designer of the protocol, Daniel Buchner, but he may have thought I was trolling him on Twitter and refused to answer, instead pointing me to an incomplete spec on the Decentralized Identity Foundation website that I had already read before. I even tried to join the DIF as a member so I could join their closed community calls and hear what they say, maybe eventually ask a question, so I could understand it, but my application was ignored; then, after many months and a nudge from another member, I was told I had to go through a KYC process to be admitted, which I refused.
One thing I know is:
- ION is supposed to provide a way to rotate keys seamlessly and automatically without losing the main identity (and the ION proponents also claim there are no "master" keys because these can also be rotated).
- ION is also not a blockchain, i.e. it doesn't have a deterministic consensus mechanism, and it is decentralized, i.e. anyone can publish data to it and it doesn't have to be a single central server; there may be holes in the available data, and the protocol doesn't treat that as a problem.
- From all we know from years of attempts to scale Bitcoin and develop offchain protocols, it is clear that you can't solve the double-spend problem without a central authority or some kind of blockchain (i.e. a decentralized system with deterministic consensus).
- Key rotation also suffers from the double-spend problem: whenever you rotate a key it is as if it were "spent"; you aren't supposed to be able to use it again.
The logical conclusion of the four assumptions above is that ION is flawed: it can't provide the key rotation it says it can if it is not a blockchain.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Channels without HTLCs
HTLCs below the dust limit are not possible, because they're uneconomical.
So currently, whenever a payment below the dust limit is to be made, Lightning peers adjust their commitment transactions to pay that amount as fees in case the channel is closed. That's a form of reserving that amount and incentivizing peers to resolve the payment, either successfully (in which case it goes to the receiving node's balance) or not (in which case it goes back to the sender's balance).
SOLUTION
I didn't think too much about whether what I have in mind can be done with the current implementation of Lightning channels, but in the context of Eltoo it seems possible.
Eltoo channels have UPDATE transactions that can be published to the blockchain and SETTLEMENT transactions that spend them (after a relative time) to each peer. The barebones script for UPDATE transactions is something like (copied from the paper, because I don't understand these things):
```
OP_IF
    # to spend from a settlement transaction (presigned)
    10 OP_CSV
    2 As,i Bs,i 2 OP_CHECKMULTISIGVERIFY
OP_ELSE
    # to spend from a future update transaction
    <Si+1> OP_CHECKLOCKTIMEVERIFY
    2 Au Bu 2 OP_CHECKMULTISIGVERIFY
OP_ENDIF
```
During a payment of 1 satoshi it could be updated to something like (I'll probably get this thing completely wrong):
```
OP_HASH256 <payment_hash> OP_EQUAL
OP_IF
    # for B to spend from settlement transaction 1 in case the payment went through
    # and they have the preimage
    10 OP_CSV
    2 As,i1 Bs,i1 2 OP_CHECKMULTISIGVERIFY
OP_ELSE
    OP_IF
        # for A to spend from settlement transaction 2 in case the payment didn't go through
        # and the other peer is uncooperative
        <now + 1day> OP_CHECKLOCKTIMEVERIFY
        2 As,i2 Bs,i2 2 OP_CHECKMULTISIGVERIFY
    OP_ELSE
        # to spend from a future update transaction
        <Si+1> OP_CHECKLOCKTIMEVERIFY
        2 Au Bu 2 OP_CHECKMULTISIGVERIFY
    OP_ENDIF
OP_ENDIF
```
Then peers would have two presigned SETTLEMENT transactions, 1 and 2 (with different signature pairs, as badly shown in the script). On SETTLEMENT 1, funds are, say, 999sat for A and 1001sat for B, while on SETTLEMENT 2 funds are 1000sat for A and 1000sat for B.
As soon as B gets the preimage from the next peer in the route, it can hand it to A, and the two can sign a new UPDATE transaction that replaces the above gimmick with something simpler, without hashes involved.
If the preimage doesn't arrive in time, peers can agree to make a new UPDATE transaction anyway. Otherwise A will have to close the channel, which may be bad, but B wasn't a good peer anyway.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
a pertinent comment by Olavo de Carvalho on the undue attribution of events to "spontaneous order"
Here is one example among a thousand others, taken from my class notes, of how one analyzes the relations between deliberate and accidental factors in historical action. Mr. Beltrão is INFINITELY BELOW the possibility of discussing these things, and for that very reason attributes to me a simple-mindedness that is his own and not mine:
I have quoted this paragraph by Georg Jellinek a thousand times and will quote it again: "The phenomena of social life divide into two classes: those that are determined essentially by a directing will and those that exist or can exist without an organization due to acts of will. The former are necessarily subject to a plan, to an order emanating from a conscious will, in opposition to the latter, whose ordering rests on quite different forces."
This distinction is crucial for historians and strategic analysts not because it is clear in every case, but precisely because it is not. The most common error in this order of studies consists in attributing to a conscious intention that which results from an uncontrolled and sometimes uncontrollable combination of forces, or, inversely, in failing to see, behind an apparently fortuitous constellation of circumstances, the intelligence that planned and subtly directed the course of events.
An example of the first error is the Protocols of the Elders of Zion, which see behind practically everything bad that happens in the world the malign premeditation of a small number of people, a Jewish elite secretly gathered in some uncertain and unknown place.
What makes this fantasy especially convincing, some time after its publication, is that some of the events predicted there take place right before our eyes. The hasty reader sees in this a confirmation, jumping imprudently from the observation of the fact to the imputation of authorship. Yes, some of the ideas announced in the Protocols were carried out, but not by a distinctly Jewish elite, much less to the benefit of the Jews, whose role in most cases consisted eminently in taking the blame. Many rich and powerful groups have ambitions of global domination and, once the book was published -- a book that in certain passages shows flashes of genuine Machiavellian strategic genius -- it was practically impossible that they would learn nothing from it and not try to put some of its schemes into practice, with the additional advantage that these already came with a prefabricated scapegoat. It is also impossible that in the middle or at the top of these groups there should be no one of Jewish origin. A little deforming selectivity is therefore enough to swap the cause for the effect and the innocent for the guilty.
But the most common error nowadays is not that one. It is the opposite: the obstinate refusal to see any premeditation, any authorship, even behind remarkably convergent events which, without that, would have to be explained by the magical force of coincidences, by the action of angels and demons, by the "invisible hand" of market forces, or by hypothetical "laws of History" or "sociological constants" never proven, which in the observer's imagination direct everything anonymously and without human intervention.
The causes generating this error are, roughly:
First: Reducing human actions to effects of impersonal and anonymous forces requires the use of abstract generic concepts that automatically give this kind of approach the appearance of something very scientific. Much more scientific, to the lay observer, than the patient and meticulous historical reconstitution of the chains of facts that, under a veil of confusion, sometimes go back to a discreet and almost imperceptible initial authorship. As the study of historical-political phenomena is increasingly an academic occupation whose success depends on funding, sponsorships, backing in the popular media, and good relations with the establishment, it is almost inevitable that, faced with a question of this order, few will resist the temptation to kill the problem right away with two or three elegant generalizations and shine as instant sages, instead of taking on historical tracking that can demand decades of research.
Second: Any group or entity that ventures into long-term historical-political actions must possess not only the means to undertake them but also, necessarily, the means to control their public repercussion, accentuating what suits it and covering up whatever might abort the intended results. This implies vast, deep, and lasting interventions in the mental environment. [Etc. etc. etc.]
(on Facebook, July 17, 2013)
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
A moronic algorithm of evolution
Suppose you want to write the word BANANA starting from OOOOOO and using only random changes to the letters. The changes happen through the multiplication of the original word into several others, each with a different mutation.
In the first period, BOOOOO and OOOOZO appear. And then the environment decides that all words that don't begin with a B are eliminated. Only BOOOOO remains and the algorithm continues.
It is easy to conceive of the evolution of species happening this way, as long as you always control the part where the environment decides who survives.
However, there are only two options:
- If the environment decides things randomly, the chance of arriving at the correct word using this method is so small that it can be considered null.
- If the environment decides things deliberately, we fall into //intelligent design//.
I believe this is a decent statement of the "no free lunch" argument applied to the critique of Darwinism by William Dembski.
The Darwinist response consists in saying that there is no such BANANA as a final goal. The words can keep changing randomly, and whatever remains, remains; we cannot say that a goal was reached or missed. And then the defenders of intelligent design will say that the result we arrived at cannot have been the fruit of a random process. BANANA is qualitatively different from AYZOSO, and there are several ways to "prove" that it is, using mathematical models and such.
I am left with the impression, though, that this matter can only be settled as a yes or a no through a discussion of the premises, and there comes a point at which no further mathematical proofs are possible, only subjectivity.
Then I remember my humble solution to the problem of the dog that randomly presses the keys of a typewriter and writes the complete works of Shakespeare: even if it does so, none of it will have any meaning without a human-type intelligence there to read it and realize that it is not a mess, but a text that makes sense to him. The miracle happens not at the moment the dog stumbles on the keyboard, but at the moment the man looks at the screen.
Whether the algorithm of evolution arrived at the word BANANA or UXJHTR makes no difference to it, but it makes a difference to us, who have a human intelligence and are observing it. The man would also think there is //something// behind that event of the dog typing the works of Shakespeare, and how could anyone in their right mind think otherwise?
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
rosetta.alhur.es
A service that grabs code samples from two chosen languages on RosettaCode and displays them side-by-side.
The code-fetching is done in real time and snippet-by-snippet (there is also a prefetch of which snippets are available in each language, so we only compare apples to apples).
This was my first Golang web application if I remember correctly.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Thoughts on Nostr key management
In Why I don't like NIP-26 as a solution for key management I talked about multiple techniques that could be used to tackle the problem of key management on Nostr.
Here are some ideas that work in tandem:
- NIP-41 (stateless key invalidation)
- NIP-46 (Nostr Connect)
- NIP-07 (signer browser extension)
- Connected hardware signing devices
- other things like musig or frostr keys used in conjunction with a semi-trusted server; or other kinds of trusted software, like a dedicated signer on a mobile device that can sign on behalf of other apps; or even a separate protocol that some people decide to use as the source of truth for their keys, and some clients might decide to use that automatically
- there are probably many other ideas
Some premises I have in mind (which may be flawed) that underlie my thoughts on these matters (and cause me not to worry too much) are:
- For the vast majority of people, Nostr keys aren't a target as valuable as Bitcoin keys, so they will probably be ok even without any solution;
- Even when you lose everything, identity can be recovered -- slowly and painfully, but still --, unlike money;
- Nostr is not trying to replace all other forms of online communication (even though when I think about this I can't imagine one thing that wouldn't be nice to replace with Nostr) or of offline communication, so there will always be ways.
- For the vast majority of people, losing keys and starting fresh isn't a big deal. It is a big deal when you have followers and an online persona and your life depends on that, but how many people are like that? In the real world I see people deleting social media accounts all the time and creating new ones, people losing their phone numbers or other accounts associated with their phone numbers, and not caring very much -- they just find a way to notify friends and family and move on.
We can probably come up with some specs to ease the "manual" recovery process, like social attestation and explicit signaling -- i.e., Alice, Bob and Carol are friends; Alice loses her key; Bob sends a new Nostr event kind to the network saying what Alice's new key is; depending on how much Carol trusts Bob, she can automatically start following that and remove the old key -- or something like that.
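A sketch in Go of what such an attestation event could look like; the kind number and tag layout are entirely made up for illustration, nothing here is a standardized NIP:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical "key migration attestation", loosely following the usual
// Nostr event shape. Kind 30100 is invented for this sketch.
type Event struct {
	PubKey    string     `json:"pubkey"`
	CreatedAt int64      `json:"created_at"`
	Kind      int        `json:"kind"`
	Tags      [][]string `json:"tags"`
	Content   string     `json:"content"`
}

func main() {
	// Bob attests that Alice's old key now maps to her new key.
	attestation := Event{
		PubKey:    "<bob-pubkey>",
		CreatedAt: 1700000000,
		Kind:      30100, // hypothetical kind
		Tags: [][]string{
			{"p", "<alice-old-pubkey>"},
			{"p", "<alice-new-pubkey>"},
		},
		Content: "key rotation attestation",
	}
	b, _ := json.Marshal(attestation)
	fmt.Println(string(b))
	// Carol's client, after seeing this from enough trusted friends,
	// could swap the old key for the new one in her follow list.
}
```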
One nice thing about some of these proposals, like NIP-41, or the social-recovery method, or the external-source-of-truth-method, is that they don't have to be implemented in any client, they can live in standalone single-purpose microapps that users open or visit only every now and then, and these can then automatically update their follow lists with the latest news from keys that have changed according to multiple methods.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
lnurl-auth explained
You may have seen the lnurl-auth spec or heard about it, but might not know how it works or what its relationship with other lnurl protocols is. This document attempts to solve that.
Relationship between lnurl-auth and other lnurl protocols
First, what is the relationship of lnurl-auth with other lnurl protocols? The answer is: none, except for the fact that they all share the lnurl format for specifying `https` URLs.
In fact, lnurl-auth is unique in the sense that it doesn't even need a Lightning wallet to work: it is a standalone authentication protocol that can work anywhere.
How does it work
Now, how does it work? The basic idea is that each wallet has a seed, which is a random value (you may think of the BIP39 seed words, for example). Usually from that seed different keys are derived, each of these yielding a Bitcoin address, and also from that same seed may come the keys used to generate and manage Lightning channels.
What lnurl-auth does is to generate a new key from that seed, and from that a new key for each service (identified by its domain) you try to authenticate with.
That way, you effectively have a new identity for each website. Two different services cannot associate your identities.
The flow goes like this: When you visit a website, the website presents you with a QR code containing a callback URL and a challenge. The challenge should be a random value.
When your wallet scans or opens that QR code, it uses the domain in the callback URL plus the main lnurl-auth key to derive a key specific to that website, uses that key to sign the challenge, and then sends both the website-specific public key and the signed challenge to the specified URL.
When the service receives the public key, it checks the challenge signature against it and starts a session for that user. The user is then identified only by their public key. If the service wants, it can of course request more details from the user, or associate them with an internal id or username; it is free to do anything. lnurl-auth's goals end here: no passwords, maximum possible privacy.
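To make the derivation idea concrete, here is a rough sketch in Go. It is not the actual lnurl-auth algorithm (the spec derives a secp256k1 "linking key" through a specific derivation path); this uses HMAC-SHA256 and the standard library's P-256 curve only to show the shape of the flow:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/hmac"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
	"math/big"
)

// deriveDomainKey deterministically turns (wallet seed, domain) into a
// keypair, so each website sees a different, unlinkable identity.
func deriveDomainKey(seed []byte, domain string) *ecdsa.PrivateKey {
	mac := hmac.New(sha256.New, seed)
	mac.Write([]byte(domain))
	secret := mac.Sum(nil)

	curve := elliptic.P256()
	d := new(big.Int).SetBytes(secret)
	d.Mod(d, curve.Params().N) // keep the scalar inside the group order

	priv := new(ecdsa.PrivateKey)
	priv.Curve = curve
	priv.D = d
	priv.PublicKey.X, priv.PublicKey.Y = curve.ScalarBaseMult(d.Bytes())
	return priv
}

func main() {
	seed := []byte("wallet seed bytes")
	challenge := sha256.Sum256([]byte("k1-challenge-from-the-qr-code"))

	key := deriveDomainKey(seed, "site.example.com")
	r, s, err := ecdsa.Sign(rand.Reader, key, challenge[:])
	if err != nil {
		panic(err)
	}
	fmt.Printf("pubkey: (%x, %x)\nsignature: (%x, %x)\n",
		key.PublicKey.X, key.PublicKey.Y, r, s)
}
```

The service only ever sees the per-domain public key and the signature over its own challenge, which is why two different services cannot correlate the same user.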
FAQ
- What is the advantage of tying this to Bitcoin and Lightning?
One big advantage is that your wallet is already keeping track of one seed, it is already a precious thing. If you had to keep track of a separate auth seed it would be arguably worse, more difficult to bootstrap the protocol, and arguably one of the reasons similar protocols, past and present, weren't successful.
- Just signing in to websites? What else is this good for?
No, it can be used for authenticating to installable apps and physical places, as long as there is a service running an HTTP server somewhere to read the signature sent from the wallet. But yes, signing in to websites is the main problem to solve here.
- Phishing attack! Can a malicious website proxy the QR code from a third website and show it to the user so it can steal the signature and log in to the third website?
No, because the wallet will only talk to the callback URL, and that URL will either be controlled by the third website, so the malicious website won't see anything; or it will have a different domain, in which case the wallet will derive a different key and frustrate the malicious website's plan.
- I heard SQRL had that same idea and it went nowhere.
Indeed. SQRL in its first version was basically the same thing as lnurl-auth, with one big difference: it was vulnerable to phishing attacks (see above). That was basically the only criticism it got anywhere, so the protocol creators decided to solve it by introducing complexity to the protocol. While they were at it they decided to add more complexity for managing accounts and so much more cruft that the spec, which initially was a single page, ended up becoming 136 pages of highly technical gibberish. Then all the initial network effect it had was squandered, libraries and apps were trashed, and nowadays no one can do anything with it (but, see, there are still people who love the protocol, writing in a 90's forum with no clue of anything besides their own Java).
- We don't need this, we need WebAuthn!
WebAuthn is essentially the same thing as lnurl-auth, but instead of being simple it is complex, instead of being open and decentralized it is centralized in big corporations, and instead of relying on a key generated by your own device it requires an expensive hardware HSM you must buy, trusting the manufacturer. If you like WebAuthn and you like Bitcoin, you should like lnurl-auth much more.
- What about BitID?
This is another one that is very similar to lnurl-auth, but without the anti-phishing protection and the extra privacy gained by deriving a different key for each service.
- What about LSAT?
It doesn't compete with lnurl-auth. LSAT, as far as I understand it, is for when you're buying individual resources from a server, not authenticating as a user. Of course, LSAT can be repurposed as a general authentication tool, but then it will lack features that lnurl-auth has, like the property of having keys generated independently by the user from a common seed and a standard way of passing authentication info from one medium to another (like signing in to a website at the desktop from the mobile phone, for example).
-
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Round-robin table generator
I don't remember exactly when I did this, but I think a friend wanted to do software that would give him money over the internet without having to work. He didn't know how to program. He mentioned this idea he had which was some kind of football championship manager solution, but I heard it like this: a website that generated a round-robin championship table for people to print.
It is actually not obvious how to do it: it requires an algorithm people will not stumble upon casually just by thinking, and there was no website doing it in Portuguese at the time, so I made this, and it worked, and it had a couple hundred daily visitors, and it even generated money from Google Ads (not much)! A sketch of the algorithm is below.
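For reference, this is the classic "circle method" for round-robin scheduling, sketched here in Go (a sketch, not the original Python code): one team stays fixed while all the others rotate around it, one position per round.

```go
package main

import "fmt"

// roundRobin returns the rounds of a championship in which every team
// plays every other team exactly once.
func roundRobin(teams []string) [][][2]string {
	if len(teams)%2 == 1 {
		teams = append(teams, "bye") // odd count: add a dummy opponent
	}
	n := len(teams)
	rounds := make([][][2]string, 0, n-1)
	for r := 0; r < n-1; r++ {
		round := make([][2]string, 0, n/2)
		for i := 0; i < n/2; i++ {
			round = append(round, [2]string{teams[i], teams[n-1-i]})
		}
		rounds = append(rounds, round)
		// rotate everyone except the first team one position
		last := teams[n-1]
		copy(teams[2:], teams[1:n-1])
		teams[1] = last
	}
	return rounds
}

func main() {
	for i, round := range roundRobin([]string{"A", "B", "C", "D"}) {
		fmt.Println("round", i+1, round)
	}
}
```

With 4 teams this prints 3 rounds of 2 matches each, which is exactly the table the site generated for printing.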
First it was a Python web app running on Heroku, then Heroku started charging or limiting the amount of free time I could have on their platform, so I migrated it to a static site that ran everything on the client. Since I didn't want to waste my Python code that actually generated the tables I used Brython to run Python on JavaScript, which was an interesting experience.
In hindsight I could have just taken one of the many `round-robin` JavaScript libraries that exist on NPM, and eventually, after a couple more years, I did that.
I also removed Google Ads when Google decided it had so many requirements for sending me the money that it became impossible, and then the money started to vanish.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
The Cause
Menger's Principles of Economics is the only book that emphasizes the CAUSE the whole time. Scientists all seem not to know, or to always forget, that things have causes, and that true knowledge is knowledge of the cause of things.
The cause is a metaphysical category far superior to any correlation or hypothesis-test result; it cannot be discovered by any econometric artifice or reduced to mere statistical temporal precedence. The cause of phenomena cannot be proven scientifically, but it can be known.
Menger's book tells the reader the causes of various economic phenomena and interlinks them in such a way that the chaotic world of the economy seems to acquire an order the moment you read it. It is a magical, indescribable sensation.
When I recommended it to you, what I wanted was to imbue you with the spirit of the search for the causes of things. After reading it, you are able to perceive causal continuity in the most complex phenomena of today's economy, to see the causes running from every government action to its various consequences in human life. I do this every day, and it is the best feeling in the world when the chaos of the news in the newspaper's Economy section -- which for the very journalist who wrote it has no meaning at all (so much so that he gets everything wrong) -- falls into an ordered system of causes and consequences.
I probably always get some or several points wrong, but even so it is wonderful. And it is even more wonderful when I discover an error and reinsert the correction into that beautiful rationalization of the order of the economic world, which is the order of God.
from a scrap to T.P.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Classless Templates
There are way too many hours being wasted on making themes for blogs. And then a new blog framework comes along and requires new themes. Old themes can't be used because they relied on different ways of rendering the website. Everything is a mess.
Classless was an attempt at solving it. It probably didn't work because I wasn't the best person to make themes and showcase the thing.
Basically everybody would agree on a simple HTML template that could fit blogs and simple websites very easily. Then other people would make pure-CSS themes expecting that template to be in place.
No classes were needed, only a fixed structure of `header`, `main`, `article` etc.
With flexbox and grid, CSS was enough to make this happen.
The templates that were available were all ported by me from other templates I saw on the web, and there was a simple one I created for my old website.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
The infinite library
I have now forgotten the name of the Jorge Luis Borges story in which that library is described, and its specific details. I had read the story and never realized that it settles the question of whether randomness is capable of producing valuable things. I actually needed Wikipedia to tell me that.
Some years ago I raised this question with a group of friends without knowing it was such a well-worn, lowly question. In my example it was a dog walking over drawn letters, not a monkey at a typewriter. My conclusion from the discussion was that no matter what the dog wrote, without an intelligence capable of understanding it, none of it would be anything but random letters.
Borges settles everything by imagining a library that contains everything the dog wrote during the whole infinity in which it ran the experiment, and which therefore contains all knowledge about everything and every possible literary work -- but between each very good, or at least readable, page or sentence there are tons of completely random books, and a person can spend a lifetime inside that library, which holds so much important knowledge, and still learn nothing, because they will never find the right books.
Everything would be in its blind volumes. Everything: the detailed history of the future, Aeschylus' The Egyptians, the exact number of times that the waters of the Ganges have reflected the flight of a falcon, the secret and true nature of Rome, the encyclopedia Novalis would have constructed, my dreams and half-dreams at dawn on August 14, 1934, the proof of Pierre Fermat's theorem, the unwritten chapters of Edwin Drood, those same chapters translated into the language spoken by the Garamantes, the paradoxes Berkeley invented concerning Time but didn't publish, Urizen's books of iron, the premature epiphanies of Stephen Dedalus, which would be meaningless before a cycle of a thousand years, the Gnostic Gospel of Basilides, the song the sirens sang, the complete catalog of the Library, the proof of the inaccuracy of that catalog. Everything: but for every sensible line or accurate fact there would be millions of meaningless cacophonies, verbal farragoes, and babblings. Everything: but all the generations of mankind could pass before the dizzying shelves – shelves that obliterate the day and on which chaos lies – ever reward them with a tolerable page.
I have the impression that the gigantic mass of articles, posts, books and everything else being published is turning the world into this library. There is so much to read that it is hard to find what is worthwhile. People need to stop writing.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Money Supply Measurement
What if we measured the money supply by the probability of each asset being spent -- or by how near it is to the point at which it is spent? Bonds could be money if they're treated as such by their owners, but they are likely not as near the spend-point as cash; other assets can also be considered money, but they might be even farther away.
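One way to write this down (my own formalization of the idea above, nothing standard) is as a weighted aggregate:

```latex
M_{\text{spend}} = \sum_i p_i \, V_i
```

where V_i is the outstanding value of asset class i and p_i is its estimated probability of being spent within some chosen horizon: near 1 for cash, much lower for long-term bonds.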
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
IPFS problems: Shitcoinery
IPFS was advertised to the Ethereum community since the beginning as a way to "store" data for their "dApps". I don't think this is harmful in any way, but for some reason it may have led IPFS developers to focus too much on Ethereum stuff. Once I watched a talk showing libp2p developers -- despite being ignored by the Ethereum team (which ended up creating their own agnostic p2p library) -- dedicating an enormous amount of work to getting a libp2p app running in the browser talking to a normal Ethereum node.
The always somewhat-abandoned "Awesome IPFS" site is a big repository of "dApps", some of which don't even have their landing pages up anymore: useless Ethereum smart contracts that for some reason use IPFS to store whatever useless data their users produce.
Again, per se it isn't a problem that Ethereum people are using IPFS, but it is at least confusing, maybe misleading, that when you search for IPFS most of the use-cases are actually Ethereum useless-cases.
See also
- Bitcoin, the only non-shitcoin
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
How being "flexible" can bloat a protocol
(A somewhat absurd example, but you'll get the idea)
Imagine some client decides to add support for a variant of nip05 that checks for values at /.well-known/nostr.yaml besides /.well-known/nostr.json. "Why not?", they think, "I like YAML more than JSON, this can't hurt anyone".
Then some user makes a nip05 file in YAML, sees that it works on that client, and concludes their file is good. When the user notices that other clients are not recognizing their YAML file, they will complain to the other client developers: "Hey, your client is broken, it is not supporting my YAML file!".
The developer of the other client, astonished, replies: "Oh, I am sorry, I didn't know that was part of the nip05 spec!"
The user, thinking it is doing a good thing, replies: "I don't know, but it works on this other client here, see?"
Now the other client adds support. The cycle repeats now with more users making YAML files, more and more clients adding YAML support, for fear of providing a client that is incomplete or provides bad user experience.
The end result of this is that now nip05 extra-officially requires support for both JSON and YAML files. Every client must now check /.well-known/nostr.yaml besides /.well-known/nostr.json, because a user's key could be in either of them. A lot of work was wasted for nothing. And, going forward, any new client will require double the work to implement.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Token-Curated Registries
So you want to build a TCR?
TCRs (Token Curated Registries) are a construct for maintaining registries on Ethereum. Imagine you have lots of scissor brands and you want a list with only the good scissors. You want to make sure only the good scissors make it into that list and the bad scissors stay out. For that, people will tell you, you can just create a TCR of the best scissors!
It works like this: some people hold the token, let's call it Scissor Token. Some other person, let's say a scissor manufacturer, wants to put his scissors on the list; this guy must acquire some Scissor Tokens and "stake" them. Holders of Scissor Tokens are allowed to vote "yes" or "no". If "no", the manufacturer loses his tokens to the holders; if "yes", his tokens are kept in deposit, and his scissor brand gets accepted into the registry.
Such a simple process, they say, has strong incentives for being the best possible way of curating a registry of scissors: consumers have an incentive to consult the list because of its high quality; manufacturers have an incentive to buy tokens and apply to join the list because the list is so well curated and consumers always consult it; token holders want the registry to accept good scissors and reject bad ones because good decisions will make the list useful for consumers and thus their tokens more valuable, while bad decisions will do the contrary. It doesn't make sense to reject everybody just to grab their tokens, because that would create an incentive against people trying to enter the list.
Amazing! How come such a simple voting system has such enormous features? Now we can have well-curated lists of everything, and all we need are Ethereum tokens!
Now let's imagine a different proposal, of my own creation: SPCR, Single-person curated registries.
Single-person Curated Registries are equal to TCR, except they don't use Ethereum tokens, it's just a list in a text file kept by a single person. People can apply to join, and they will have to give the single person some amount of money, the single person can reject or accept the proposal and so on.
Now let's look at the incentives of SPCR: people will want to consult the registry because it is so well curated; vendors will want to enter the registry because people are consulting it; the single person will want to accept the good and reject the bad applicants because these good decisions are what will make the list valuable.
Amazing! How come such a simple proposal has such enormous features! SPCRs are going to take over the internet!
What do TCR enthusiasts get wrong?
TCR people think they can just list a set of incentives for something to work and assume that it will work. Mix that with Ethereum hype and they think they've found something unique and revolutionary, while in fact they're just making a poor implementation of "democracy" systems that fail almost everywhere.
Life is not about listing a set of "incentives" and then considering the problems solved. Almost everybody on Earth has an incentive to be rich: being rich has a lot of advantages over being poor; however, not all people get rich! Why are the incentives failing?
Curating lists is a hard problem. It involves a lot of knowledge about the subject that just holding a token won't give you; it involves personal preferences and politics; it involves knowing where the real limit between "good" and "bad" lies. The single-person list may have a good result if the single person doing the curation is knowledgeable and honest (yes, you can game the system to accept your uncle's scissors and reject a much better competitor, for example, without the entire list losing its reputation); the same goes for TCRs. But it can also fail miserably, and it can appear to be good while in fact not being so good. In all cases, the list entries will reflect the preferences of the people choosing and other things that aren't taken into account in the TCR enthusiasts' incentive equations.
We don't need lists
The most important point to be made, although unrelated to the incentives story, is that we don't need lists. Imagine you're looking for a scissor. You don't want someone to tell you whether scissors A or B are "good" or "bad", or whether A is "better" than B. You want to know if, for your specific situation or for a class of situations, A will serve you well, taking into account A's price and whether A is being sold near you, and all that.
Scissors are the worst example ever to make this point, but I hope you get it. If you don't, try imagining the same example with schools, doctors, plumbers, food, whatever.
Recommendation systems are badly needed in our world, and TCRs don't solve these at all.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Game characters and symbols
The feeling of "being" a character in a game or in a make-believe play is perhaps the closest I have managed to get to understanding a religious symbol.
The consecrated host is, according to the religion, the body of Christ, but our modern mind can only conceive of it as a representation of the body of Christ. In the same way, other cultures and other religions have similar symbols, including ones in which the participant in the ritual himself plays the role of a god or something of the sort.
"Plays the role" is, again, the modern mind's interpretation. The person there is the thing, while at the same time also knowing that he is not, that he remains himself.
In video games and in children's games where one takes on a character, the player is the character. Among the players, nobody says someone is "acting"; he simply is, period. There is no other name for it, no other verb. At most "embodying", but that is already journalistic vocabulary, made to ease comprehension for those outside the game.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Why IPFS cannot work, again
Imagine someone comes up with a solution for P2P content-addressed data-sharing that involves storing all the files' contents on all computers of the network. That wouldn't work, right? Too much data. If you think this can work, then you're a BSV enthusiast.
Then someone comes up with the idea of not storing everything on all computers, but only some things on some computers, based on some algorithm that determines what data a node should store given its pubkey or something like that. Still wouldn't work, right? Still too much data no matter how much you spread it, and, worse, the incentives are not aligned; it would implode on day one.
Now imagine someone says they will do the same thing, but instead of storing the full contents, each node will only store a pointer to where each piece of data is actually available. Does that make it better? Hardly. You're just moving the problem.
This is IPFS.
Now you have less data on each computer, but on a global scale that is still a lot of data.
No incentives.
And now you have the problem of finding the data. First, if you have some data you want the world to access, you have to broadcast information about it, flooding the network -- and everybody has to keep doing this continuously for every single file (or shard of a file) that is available.
And then whenever someone wants some data they must find the people who know about that, which means they will flood the network with requests that get passed from peer to peer until they get to the correct peer.
The more you force each peer to store, the worse it becomes to run a node and to store data on behalf of others -- but the less you force each peer to store, the more flooding you'll have on the global network, and the slower it will be for anyone to actually get any file.
But if everybody just saves everything to Infura or Cloudflare then it works, magic decentralized technology.
Related
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Who will build the roads?
Who will build the roads? In Lagoa Santa, the newest and best streets -- which in fact end up forming enormous webs of interconnected neighborhoods -- are built by the land developers who want the streets so their lots will be worth more -- and who want other people to use the streets too. It is also these same developers who put up the light poles and lay the water and sewage pipes, though not before having to submit to the customary extortions practiced by COPASA and CEMIG.
If, when opening a subdivision, a condominium, or a building, an individual or a company can without much trouble put in a street, electricity, water, and sewage, why couldn't there be free competition in these markets? Even that old story that it is inefficient to run duplicate power cables so that electric companies can compete now sounds like nonsense to me.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
idea: Per-paragraph paywalls
Using the lnurl-allowance protocol, a website could, instead of putting a paywall over the entire site, charge a reader only for the paragraphs they actually read. Of course this requires the reader to trust the website, but that is normal. The website could simply hide the rest of the article until an invoice for the paragraph just read was paid.
This idea came from Colin from the Unhashed Podcast.
Could also work with podcasts and videos.
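A minimal sketch of the server side in Go, assuming some Lightning backend behind it; every endpoint and name here is made up for illustration, since lnurl-allowance itself was never finalized:

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"sync"
)

var paragraphs = []string{"First paragraph...", "Second paragraph...", "Third..."}

var (
	mu   sync.Mutex
	paid = map[string]int{} // session id -> paragraphs paid for so far
)

// paragraphHandler serves paragraph n only if the previous ones were paid.
// The first paragraph is free; a payment webhook (not shown) would bump
// paid[session] each time an invoice settles.
func paragraphHandler(w http.ResponseWriter, r *http.Request) {
	session := r.URL.Query().Get("session")
	n, _ := strconv.Atoi(r.URL.Query().Get("n"))

	mu.Lock()
	ok := n < len(paragraphs) && n <= paid[session]
	mu.Unlock()

	if !ok {
		// a real implementation would return a Lightning invoice here,
		// to be paid from the reader's pre-approved allowance
		http.Error(w, "payment required", http.StatusPaymentRequired)
		return
	}
	fmt.Fprint(w, paragraphs[n])
}

func main() {
	http.HandleFunc("/paragraph", paragraphHandler)
	http.ListenAndServe(":8080", nil)
}
```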
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
How IPFS is broken
I once fell for this talk about "content-addressing". It sounds very nice. You know a certain file exists, you know there are probably people who have it, but you don't know where or if it is hosted on a domain somewhere. With content-addressing you can just say "start" and the download will start. You don't have to care.
Other magic properties that address common frustrations: webpages don't go offline, links don't break, valuable content always finds its way, other people will distribute your website for you, any content can be transmitted easily to people near you without anyone having to rely on third-party centralized servers.
But you know what? Saying a thing is good doesn't automatically make it possible and working. For example: saying stuff is addressed by its content doesn't change the fact that the internet is "location-addressed", and you still have to know where the peers that have the data you want are, and connect to them.
And what is the solution for that? A DHT!
DHT?
Turns out DHTs have a terrible incentive structure (as you would expect: no one wants to hold and serve data they don't care about to others, for free), and the IPFS experience proves it doesn't work even in a network as small as the IPFS of today.
If you have run an IPFS client you'll notice how much it clogs your computer. Or maybe you don't, if you are very rich and have a really powerful computer, but still, it's not something suitable to be run on the entire world, and on web pages, and servers, and mobile devices. I imagine there may be a lot of unoptimized code and technical debt responsible for these and other problems, but the DHT is certainly the biggest part of it. IPFS can open up to 1000 connections by default and suck up all your bandwidth -- and that's just for exchanging keys with other DHT peers.
Even if you're in the "client" mode and limit your connections you'll still get overwhelmed by connections that do stuff I don't understand -- and it makes no sense to run an IPFS node as a client, that defeats the entire purpose of making every person host files they have and content-addressability in general, centralizes the network and brings back the dichotomy client/server that IPFS was created to replace.
Connections?
So, DHTs are a fatal flaw for a network that plans to be big and interplanetary. But that's not the only problem.
Finding content on IPFS is the slowest experience ever, and, for some reason I don't understand, downloading is even slower. Even if you are on the same LAN as another machine that has the content you need, it will still take hours to download some small file you would get in seconds with `scp` -- and that's assuming IPFS managed to find the other machine; otherwise your command will just be stuck for days.
Now, even if you ignore that IPFS objects should be content-addressable and not location-addressable, and, knowing which peer has the content you want, you go there and explicitly tell IPFS to connect to that peer directly, maybe you can get some seconds of (slow) download, but then IPFS will drop the connection and the download will stop. Sometimes -- but not always -- it helps to add the peer address to your bootstrap nodes list (but notice this isn't something you should be doing at all).
IPFS Apps?
Now consider the kind of marketing IPFS does: it tells people to build "apps" on IPFS. It sponsors "databases" on top of IPFS. It basically advertises itself as a place where developers can just connect their apps to and all users will automatically be connected to each other, data will be saved somewhere between them all and immediately available, everything will work in a peer-to-peer manner.
Except it doesn't work that way at all. "libp2p", the IPFS library for connecting people, is broken and is rewritten every 6 months, but they keep their beautiful landing pages that say everything works magically and you can just plug it in. I'm not saying they should have everything perfect, but at least they should be honest about what they truly have in place.
It's impossible to connect to other people, after years there's no js-ipfs and go-ipfs interoperability (and yet they advertise there will be python-ipfs, haskell-ipfs, whoknowswhat-ipfs), connections get dropped and many other problems.
So basically all IPFS "apps" out there are just apps that want to connect two peers but can't do it manually because browsers and the IPv4/NAT network don't provide easy ways to do it and WebRTC is hard and requires servers. They have nothing to do with "content-addressing" anything, they are not trying to build "a forest of merkle trees" nor to distribute or archive content so it can be accessed by all. I don't understand why IPFS has changed its core message to this "full-stack p2p network" thing instead of the basic content-addressable idea.
IPNS?
And what about the database stuff? How can you "content-address" a database with values that are supposed to change? Their approach is to just save all values, past and present, and then use new DHT entries to communicate what the newest value is. This is the IPNS thing.
Apparently, just after coming up with the idea of content-addressability, the IPFS folks realized it would never be able to replace the normal internet, as no one would even know what content existed or when some content was updated -- and they didn't want to coexist with the normal internet, they wanted to replace it all, because that message is bolder and gets more funding, maybe?
So they invented IPNS, the name system that introduces location-addressability back into the system that was supposed to be only content-addressable.
And how do they manage to do it? Again, DHTs. And does it work? Not really. It's limited and slow, much slower than normal content-addressed fetches; most of the time it doesn't even work after hours. But still, although developers will tell you it isn't working yet, the IPFS marketing talks about it as if it were a thing.
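Conceptually, IPNS is just a mutable, signed pointer stored in the DHT: a stable name (derived from a key) pointing at the latest immutable content hash. A rough sketch of the record logic (in Python, with an HMAC standing in for the real signature scheme -- the actual IPNS record format is more involved):

```python
import hashlib, hmac, json, time

SECRET = b"toy-private-key"  # stands in for a real keypair

def content_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()  # the immutable content address

def make_ipns_record(latest: str, seq: int) -> dict:
    # The stable name is derived from the key, not the content, so it
    # never changes; the record it points at does.
    record = {"value": latest, "sequence": seq, "validity": time.time() + 3600}
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return record

def resolve(records: list[dict]) -> str:
    # Peers keep whichever valid record has the highest sequence number.
    return max(records, key=lambda r: r["sequence"])["value"]

v1 = make_ipns_record(content_hash(b"article, first draft"), seq=1)
v2 = make_ipns_record(content_hash(b"article, revised"), seq=2)
assert resolve([v1, v2]) == content_hash(b"article, revised")
```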
Archiving content?
The main use case I had for IPFS was to store content that I personally cared about and that other people might care about too: old articles from dead websites, videos, sometimes entire websites before they're taken down.
So I did that. Over many months I archived stuff on IPFS. The IPFS API and CLI don't make it easy to track where things are. The `pin` command doesn't help, as it just throws your pinned hash into a sea of hashes and subhashes, and you're never able to find what you have pinned again.

The IPFS daemon has a fake filesystem that is half-baked in functionality but allows you to locally address things by name in a tree structure. It's very hard to update it or add new things to it, but still doable. It allows you to give names to hashes, basically. I even began to write a wrapper for it, but suddenly, after many weeks of careful content curation and distribution, all my entries in the fake filesystem were gone.
Despite not having lost any of the files, I had effectively lost everything, as I couldn't find them in the sea of hashes I had on my own computer. After some digging, and with help from IPFS developers, I managed to recover part of it, but it involved hacks. My things had vanished because of a bug in the fake filesystem. The bug was fixed, but soon after I experienced a similar (new) bug. After that I even tried to build a service for hash archival and discovery, but as all the problems listed above began to pile up, I eventually gave up. There were also problems of content canonicalization, of the code the IPFS daemon uses to serve default HTML content over HTTP, problems with the IPFS browser extension, and others.
Future-proof?
One of the core advertised features of IPFS was that it made content future-proof. I'm not sure they used this exact expression, but basically: you have content, you hash it, you get an address for that content that never expires, and now everybody can refer to the same thing by the same name. Actually, it's better: content is split and hashed into a merkle tree, so there's fine-grained deduplication; people can store only chunks of files, and when a file is to be downloaded lots of people can serve it at the same time, like torrents.
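A minimal sketch of that idea (in Python, with fixed-size chunks and a flat hash list instead of IPFS's actual DAG format, which is more elaborate):

```python
import hashlib

CHUNK_SIZE = 4  # absurdly small, just to make the example visible

def chunk_hashes(data: bytes) -> list[str]:
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return [hashlib.sha256(c).hexdigest() for c in chunks]

def root_address(data: bytes) -> str:
    # The file's address commits to the list of chunk hashes, so identical
    # chunks in different files deduplicate and can be served by anyone.
    return hashlib.sha256("".join(chunk_hashes(data)).encode()).hexdigest()

a = root_address(b"hello world!")
b = root_address(b"hello world!")
assert a == b  # same content, same name, no matter who computes it
```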
But then come the protocol upgrades. IPFS has used different kinds of hashing algorithms and different ways of formatting the hashes, and it will change the default algorithm for building the merkle trees, so the same content now has a gigantic number of possible names/addresses, which defeats the entire purpose -- and yes, files hashed using different strategies aren't automagically compatible.
Actually, the merkle algorithm could have been changed by each person on a file-by-file basis since the beginning (you could, for example, split a book file by chapter or page instead of by chunks of bytes) -- although probably no one ever did that. I know it's not easy to come up with the perfect hashing strategy on the first try, but the way these matters are being approached makes me wonder whether IPFS promoters really care about future-proofing, or whether we're just in beta phase forever.
Ethereum?
This is also a big problem. IPFS is built by Ethereum enthusiasts. I can't read the minds of the people behind IPFS, but I would imagine they have a poor understanding of incentives, like the Ethereum people, and they tend towards scammer-like behavior: getting a ton of funds from investors in exchange for promises they don't know if they can fulfill (like Filecoin and IPFS itself) based on half-truths, changing stuff in the middle of the road because some top managers decided they wanted a change (move fast and break things), and squatting fancy names like "distributed web".
The way they market IPFS (which is not the main thing IPFS was initially designed to do) as a "peer-to-peer cloud" is very seductive for Ethereum developers, just as Ethereum itself is: a place somewhere that will run your code for you, so you don't have to host a server or bear any responsibility, and then Infura will serve the content to everybody. In the same vein, Infura is also hosting and serving IPFS content for Ethereum developers these days, for free. Ironically, just like Ethereum's hoax of peer-to-peer money, the IPFS peer-to-peer network may begin to work better for end users as things get more and more centralized.
More about IPFS problems:
- IPFS problems: Too much immutability
- IPFS problems: General confusion
- IPFS problems: Shitcoinery
- IPFS problems: Community
- IPFS problems: Pinning
- IPFS problems: Conceit
- IPFS problems: Inefficiency
- IPFS problems: Dynamic links
See also
- A crappy course on torrents, on the protocol that has done most things right
- The Tragedy of IPFS in a series of links, an ongoing Twitter thread.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Lagoa Santa: how to get there -- starting from the Belo Horizonte bus station
When you get off your bus at the Belo Horizonte bus station at around 4 in the morning, you will be facing a cowboy drinking beer in his typical outfit at a bar right in the arrivals area. Go up the staircase to the right, which leads to the station's parking lot. Turn left and walk for roughly 400 meters, crossing an area where suspicious people -- though probably asleep standing up -- watch you, and then a little square occupied by a clan of beggars. When you spot a huge obelisk in the middle of an intersection of two avenues, turn left and walk another 400 meters. You will see an enormous, old and beautiful station with a square in front of it, with lovely water fountains. Run away from there and head to a stretch of street to the right of that square. An old stage from carnivals past will be set up more or less in the middle of the charming little cobblestone street: that's where you will catch your next bus.
To enter the station you need a card with rechargeable credits. A prudent traveler always leaves some credits on his card to avoid queues and other availability problems when arriving tired from a trip, in a hurry, or at odd hours. That kind of person will realize he has been thoroughly swindled when he notices that the credits on his card, topped up on his last visit to Belo Horizonte three months ago, have expired and been absorbed by the public coffers. He will therefore have to buy more credits. The booth where cards are topped up opens at 5 a.m., but don't be surprised if it still hasn't opened when the first bus arrives, at 5:10.
With some luck, a young man in a hoodie, authorized by the two or three bus-system inspectors chatting cheerfully nearby, will be operating the turnstile. He lets the drunks, the hustlers and the street kids in without paying. Quite empathetic and perceptive of other people's despair, this good fellow will probably let you in without paying too.
Once inside the bus, don't be intimidated by the loudmouths and bullies who, deeply offended at the driver for stopping at the stations after the previous buses ignored these illustrious passengers waiting there, come shouting to demand an explanation.
The bus's final stop, 40 minutes later, is the Morro Alto terminal. There you will see, if you search well among several buses and people who arouse your most honest suspicion, a dark, switched-off vehicle, numbered 5882, sheltering inside a driver and a fare collector sleeping the sleep of the just.
Wait by the door for another twenty minutes or so until, suddenly awake, the driver starts the bus, opens the doors and already begins, gently, to pull away. Run in, but then wait a while longer, while the people with loaded cards pass through and take the best seats, until the fare collector wakes up and decides to charge you the fare in that old payment medium, once the most liquid of all: cash.
This last bus should finally take you to Lagoa Santa.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
WelcomeBot
The first bot ever created for Trello.
It automatically invited to a public board anyone who commented on a card it was added to.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Sun and Earth
The Earth does not revolve around the Sun. It all depends on the reference point, and there is no absolute reference point. It is only better to say that the Earth revolves around the Sun because there are other planets making analogous movements, so it becomes easier for everyone to understand the movements by taking the Sun as the reference point.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Jofer
Jofer was a different kind of player. At first glance, no, he looked the same: a combative defensive midfielder, chasing opposing attackers relentlessly, a good player. But that wasn't what set Jofer apart. Jofer was, let's say, a shooter.

It started in the semifinal of a youth tournament. Jofer's team needed a draw and was under enormous pressure from the opponent, but the game was 1-1 and looked like it would stay that way, in that footballing way in which things seem, really seem, settled. Except that in the 46th minute of the second half they conceded a freak goal: Ruizinho from the other team came running down the left and, despite being left-footed, kept cutting inside; the defenders sort of assumed it was all over anyway, there should be just that one last play, the referee had given two minutes of stoppage time. Ruizinho shot, scored, and the goalkeeper, who only jumped after he had already seen there was no way, stood there cursing.

The ball went back to the center circle and was passed to Jofer; nobody even came to mark him, the other team was already celebrating, and rightly so, the referee was taking the mickey by letting the game continue, it was all over anyway. But no, he was right: one more minute of stoppage time, fair enough. You can score a goal in one minute. But how? Jofer thought of those NBA games in which, with a few hundredths of a second left, the point guard hurls the ball at the basket any old way and sometimes scores. From behind the halfway line, maybe? I won't even have the strength to make it reach the goal. I'll become a joke; better to pass it to Fumaça over there and we lose without this humiliation at the end. But, hey, so what? I'll try anyway; if anything I'll say it was meant as a long pass and in a few days everyone will forget. He looked at his own foot, turned it sideways, outward and then inward (well, if I strike it from here, just right, who knows?), pushed the ball aside and hit it. The ball went up scandalously, really high, it must have climbed some 200 meters; Jofer had no way of having the slightest idea. Then it started coming down, the big goalkeeper running back under the crossbar and watching the ball, arriving and jumping just to follow it, to see, hanging from the crossbar, the ball come down still quite high, hit the inside of the side netting before hitting the ground, bounce violently and bulge the net high up on the right side from the viewer's perspective.

But all of that was Jofer's dream. He dreamed it awake, on a night when he took a long time to fall asleep, lying in his bed. He kept wondering whether it wouldn't be easy, if he trained a lot, to hit the goal from very far away, like in the dream, and whether you couldn't score goals that way. The next day he asked Brunildinho, the goalkeepers' coach. It was hard to defend those balls, even more so if they went up really high: the goalkeeper loses perspective, the wind alters the trajectory at every instant, there's spin, the ball drops fast. But of course it wasn't worth training for it, the chance of hitting the goal was minuscule. But Jofer would only try it after training a lot and confirming what in his imagination seemed like an excellent idea.

He began training every day. At first in secret, out of embarrassment before his teammates: he would arrive a bit early and stay there, shooting from the center circle. At the slightest sign of people approaching, he would stop and go gather the balls. Later, when he started hitting the target, he lost the embarrassment. Everyone at the club found it funny when they saw Jofer training and then heard the explanation from someone's mouth; nobody took it very seriously, but nobody found it entirely ridiculous either. People laughed, but deep down they were rooting for it to work, really.

It so happened that in a game that wasn't worth much, an ugly little draw, in the 40th minute of the second half, with the opponents' marking no longer pressing and everybody content with the draw and already wanting to stop playing, Henrique, the left midfielder, humble, but still somewhat intimidating to Jofer (he played too well), passed him the ball. Go on, try that madness of yours. He took on the responsibility of our introspective midfielder. It would be more believable if Jofer had missed, the first time he tried; there was still plenty of time for him to get the chance to be a hero, nobody hits it on the first try -- but he did. Almost like in the dream, Lucas, the goalkeeper, wasn't expecting it; when he saw the shot he laughed, stepped forward to catch the ball he judged would bounce in the box, but it went further, further and further, so Lucas was already running, except he began to think it was going out, and he would just hang from the crossbar and play his part of being on the ball. It ended up that because of that goal they finished second in the group of that little tournament instead of third, and it made no difference whatsoever.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Parallel Chains
We want merged-mined blockchains. We want them because it is possible to do things in them that aren't doable in the normal Bitcoin blockchain, because it is rightfully too expensive -- but there are other things besides the world money that could benefit from a "distributed ledger" -- just like people believed in 2013 --, like issued assets and domain names (just the most obvious examples).
On the other hand we can't have -- like people believed in 2013 -- a copy of Bitcoin for every little idea, each with its own native token that is mined by proof-of-work and must get off the ground from being completely valueless into having some value by way of a miracle that has operated only once, with Bitcoin.
It's also not a good idea to have blockchains with custom merged-mining protocols (like Namecoin and Rootstock) that require Bitcoin miners to run their software and be active participants and miners for that other network besides Bitcoin, because it's too cumbersome for everybody.
Luckily Ruben Somsen invented this protocol for blind merged-mining that solves the issue above. It doesn't solve the fact that each parallel chain still needs some form of "native" token to pay miners -- unless it uses another method that doesn't require a native token, such as trusted payments outside the chain.
How does it work
With the `SIGHASH_NOINPUT`/`SIGHASH_ANYPREVOUT` soft-fork[^eltoo] it becomes possible to create presigned transactions that aren't related to any previous UTXO.

Then you create a long sequence of transactions (sufficient to last for many many years), each with an `nLockTime` of 1 and each spending the next (you create them from the last to the first). Since their `scriptSig` (the unlocking script) will use `SIGHASH_ANYPREVOUT`, you can obtain a transaction id/hash that doesn't include the previous TXO. You can, for example, in a sequence of transactions `A0-->B` (B spends output 0 from A), include the signature for "spending A0 on B" inside the `scriptPubKey` (the locking script) of "A0".

With the contraption described above it is possible to make a long string of transactions that everybody will know (and know how to generate), where each transaction can only be spent by the next, previously decided, transaction, no matter what anyone does, and there must always be at least one block of difference between them.
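A conceptual sketch of that construction (in Python, modeling transactions as plain dicts and with strings standing in for real signatures -- actual Bitcoin serialization and signing are obviously much more involved):

```python
import hashlib

def txid(tx: dict) -> str:
    # With ANYPREVOUT the signature doesn't commit to the previous txid,
    # so ids and spending signatures can all be computed upfront.
    return hashlib.sha256(f"{tx['nlocktime']}:{tx['lock']}".encode()).hexdigest()[:16]

def build_chain(n: int) -> list[dict]:
    txs: list[dict] = []
    spender_sig = "none"  # the final transaction's output is left open here
    for _ in range(n):  # built from the last transaction back to the first
        tx = {
            "nlocktime": 1,  # forces at least one block between each spend
            "lock": f"only-spendable-with:{spender_sig}",
        }
        spender_sig = f"anyprevout-sig-over:{txid(tx)}"
        txs.append(tx)
    return list(reversed(txs))

chain = build_chain(5)  # anyone can regenerate this exact same chain
```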
Then you combine it with `RBF`, `SIGHASH_SINGLE` and `SIGHASH_ANYONECANPAY` so parallel chain miners can add inputs and outputs in order to compete on fees, including their own outputs and getting change back while writing a hash of the parallel block into the change output -- and you get everything working perfectly: everybody tries to spend the same output from the long string, each with a different parallel block hash; only the highest bidder gets its transaction included on the Bitcoin chain, and thus only one parallel block is mined.

See also
[^eltoo]: The same thing used in Eltoo.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Zettelkasten
https://writingcooperative.com/zettelkasten-how-one-german-scholar-was-so-freakishly-productive-997e4e0ca125 (a somewhat stupid article, but useful).
This incredible technique of saving notes without categories, without folders, without a predefined hierarchy, just making references from one note to another and supposedly letting an order (or heterarchy, as they said) emerge from the chaos, seems to be what was missing for me to manage to write down my thoughts and ideas decently. We'll see.
Oh, and I'm going to use this `neuron` thing, which also generates sites from the notes; I think it will be good.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
hledger-web
A Haskell app that uses Miso and hledger's Haskell libraries, compiled to a web page with ghcjs, and then adds optional remoteStorage so you can store your ledger data somewhere else.
This was my introduction to Haskell. It was also built at a time when I thought remoteStorage was a good idea that solved many problems, and that it could use some help in the form of yet another somewhat-useless-but-cool project using it that could be added to their wiki.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
A list of things artificial intelligence is not doing
If AI is so good why can't it:
- write good glue code that wraps a documented HTTP API?
- make good translations using available books and respective published translations?
- extract meaningful and relevant numbers from news articles?
- write mathematical models that fit perfectly to available data better than any human?
- play videogames without cheating (i.e. simulating human vision, attention and click speed)?
- turn pure HTML pages into pretty designs by generating CSS
- predict the weather
- calculate building foundations
- determine stock values of companies from publicly available numbers
- smartly and automatically test software to uncover bugs before releases
- predict sports matches from the ball and the players' movement on the screen
- continuously improve niche/local search indexes based on user input and reaction to results
- control traffic lights
- predict sports matches from news articles, and teams and players' history
This was posted first on Twitter.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
`OP_CHECKTEMPLATEVERIFY` and the "covenants" drama

There are many ideas for "covenants" (I don't think this concept helps in the specific case of examining proposals, but fine). Some people think "we" (it's not obvious who is included in this group) should somehow examine them and come up with the perfect synthesis.
It is not clear what form this magic gathering of ideas will take and who (or which ideas) will be allowed to speak, but suppose it happens and there is intense research and conversations and people (ideas) really enjoy themselves in the process.
What are we left with at the end? Someone has to actually commit the time, put in the effort, and come up with a concrete proposal to be implemented on Bitcoin, and whatever the result is, it will have trade-offs. Some great features will not make it into this proposal, others will make it in a worsened form, and some will be contemplated very nicely; there will be some extra costs related to maintenance or code complexity that will have to be accepted. Someone, a concrete person, will decide upon these things using their own personal preferences and biases, and many people will not be pleased with their choices.
That has already happened. Jeremy Rubin has already conjured all the covenant ideas in a magic gathering that lasted more than 3 years and came up with a synthesis that has the best trade-offs he could find. CTV is the result of that operation.
The fate of CTV in popular opinion, illustrated by the thoughtless responses it has evoked, such as "can we do better?" and "we need more review and research and more consideration of other ideas for covenants", is a preview of what would probably happen if these suggestions were followed: someone would again spend the next 3 years considering ideas and talking to other researchers, and would come up with a new synthesis. Again, that person would be faced with "can we do better?" responses from people who were not happy enough with the choices.
And unless some famous Bitcoin Core developers, current or retired, were personally attracted by this synthesis, they would take a long time to review it and give their blessing to it.
To summarize the argument of this article: the actual question in the current CTV drama is that there exist hidden criteria for proposals to be accepted by the general community into Bitcoin, and no one has these criteria clear in their minds. It is not as simple or as straightforward as "do research", nor is it as humanly impossible as "get consensus"; it has a much bigger social element to it, but I also do not know the exact form of these hidden criteria.
This is said not to blame anyone -- except the ignorant people who are unaware of the existence of these things, keep repeating completely false and unhelpful advice to Jeremy Rubin, and are not self-aware enough to ever realize what they're doing.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Rede Relâmpago
When referring to the Lightning Network of O que é Bitcoin?, we Brazilians and Portuguese should use the term "Relâmpago" or "Rede Relâmpago". "Relâmpago" ("lightning") is a beautiful and fitting word, and one that all our compatriots can easily pronounce. Enough with unnecessary anglicisms.
An example of a hypothetical conversation in Brazil using this nomenclature:
– Posso pagar com Relâmpago? ("Can I pay with Relâmpago?") – Opa, claro! Vou gerar um boleto aqui pra você. ("Oh, sure! I'll generate an invoice for you right here.")
Notice how much more natural and easier that is than the alternative:
– Posso pagar com láitenim? ("Can I pay with lightning?", as pronounced in Portuguese) – Leite ninho? (a milk powder brand that sounds similar)
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
lnurl-auth explained
You may have seen the lnurl-auth spec or heard about it, but might not know how it works or what its relationship with the other lnurl protocols is. This document attempts to solve that.
Relationship between lnurl-auth and other lnurl protocols
First, what is the relationship of lnurl-auth with the other lnurl protocols? The answer is none, except for the fact that they all share the lnurl format for specifying `https` URLs.

In fact, lnurl-auth is unique in the sense that it doesn't even need a Lightning wallet to work: it is a standalone authentication protocol that can work anywhere.
How does it work
Now, how does it work? The basic idea is that each wallet has a seed, which is a random value (you may think of the BIP39 seed words, for example). Usually from that seed different keys are derived, each of these yielding a Bitcoin address, and also from that same seed may come the keys used to generate and manage Lightning channels.
What lnurl-auth does is to generate a new key from that seed, and from that a new key for each service (identified by its domain) you try to authenticate with.
That way, you effectively have a new identity for each website. Two different services cannot associate your identities.
The flow goes like this: When you visit a website, the website presents you with a QR code containing a callback URL and a challenge. The challenge should be a random value.
When your wallet scans or opens that QR code, it uses the domain in the callback URL plus the main lnurl-auth key to derive a key specific to that website, uses that key to sign the challenge, and then sends both the public key specific to that website and the signed challenge to the specified URL.
When the service receives the public key, it checks it against the challenge signature and starts a session for that user. The user is then identified only by their public key. If the service wants, it can of course request more details from the user, associate it with an internal id or username; it is free to do anything. lnurl-auth's goals end here: no passwords, maximum possible privacy.
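A minimal sketch of the key-derivation and signing flow (in Python, using HMAC-based derivation and the `ecdsa` package; the actual spec derives keys through a specific BIP32 path, so treat this as an illustration, not the real derivation):

```python
import hashlib, hmac
from ecdsa import SigningKey, SECP256k1  # pip install ecdsa

SEED = b"the wallet's one precious seed"

def key_for_domain(domain: str) -> SigningKey:
    # One independent key per domain: no two services can link your identities.
    secret = hmac.new(SEED, b"lnurl-auth/" + domain.encode(), hashlib.sha256).digest()
    return SigningKey.from_string(secret, curve=SECP256k1)

def login(domain: str, challenge_hex: str) -> tuple[str, str]:
    sk = key_for_domain(domain)
    sig = sk.sign_digest(bytes.fromhex(challenge_hex))
    # The wallet sends back (pubkey, signature); the service verifies the
    # signature and starts a session for that pubkey. No password anywhere.
    return sk.get_verifying_key().to_string("compressed").hex(), sig.hex()

pk1, _ = login("site-a.com", "11" * 32)
pk2, _ = login("site-b.com", "11" * 32)
assert pk1 != pk2  # different domain, different identity
```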
FAQ
-
What is the advantage of tying this to Bitcoin and Lightning?
One big advantage is that your wallet is already keeping track of one seed, it is already a precious thing. If you had to keep track of a separate auth seed it would be arguably worse, more difficult to bootstrap the protocol, and arguably one of the reasons similar protocols, past and present, weren't successful.
-
Just signing in to websites? What else is this good for?
No, it can be used for authenticating to installable apps and physical places, as long as there is a service running an HTTP server somewhere to read the signature sent from the wallet. But yes, signing in to websites is the main problem to solve here.
-
Phishing attack! Can a malicious website proxy the QR from a third website and show it to the user, so it will steal the signature and be able to log in on the third website?
No, because the wallet will only talk to the callback URL, and either it will be controlled by the third website, so the malicious website won't see anything; or it will have a different domain, so the wallet will derive a different key and frustrate the malicious website's plan.
-
I heard SQRL had that same idea and it went nowhere.
Indeed. SQRL in its first version was basically the same thing as lnurl-auth, with one big difference: it was vulnerable to phishing attacks (see above). That was basically the only criticism it got everywhere, so the protocol creators decided to solve it by introducing complexity to the protocol. While they were at it they decided to add more complexity for managing accounts and so much more crap that the spec, which initially was a single page, ended up becoming 136 pages of highly technical gibberish. Then all the initial network effect it had, the libraries and apps, was trashed, and nowadays no one can do anything with it (but, see, there are still people who love the protocol, writing in a 90's forum with no clue of anything besides their own Java).
-
We don't need this, we need WebAuthn!
WebAuthn is essentially the same thing as lnurl-auth, but instead of being simple it is complex, instead of being open and decentralized it is centralized in big corporations, and instead of relying on a key generated by your own device it requires an expensive hardware HSM that you must buy, trusting the manufacturer. If you like WebAuthn and you like Bitcoin, you should like lnurl-auth much more.
-
What about BitID?
This is another one that is very similar to lnurl-auth, but without the anti-phishing protection and without the extra privacy gained by deriving a different key for each service.
-
What about LSAT?
It doesn't compete with lnurl-auth. LSAT, as far as I understand it, is for when you're buying individual resources from a server, not authenticating as a user. Of course, LSAT can be repurposed as a general authentication tool, but then it will lack features that lnurl-auth has, like the property of having keys generated independently by the user from a common seed and a standard way of passing authentication info from one medium to another (like signing in to a website at the desktop from the mobile phone, for example).
-
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Reclamações
- Como não houve resposta, estou enviando de novo
- Democracia na América
- A "política" é a arena da vitória do estatismo
- A biblioteca infinita
- Família e propriedade
- Memórias de quando eu aprendi a ler
- A chatura Kelsen
- O VAR é o grande equalizador
- Não tem solução
- A estrutura lógica do livro didático
- "House" dos economistas e o Estado
- Revista Educativa
- Cultura Inglesa e aprendizado extra-escolar
- Veterano não é dono de bixete
- Personagens de jogos e símbolos
- Músicas grudentas e conversas
- Obra aqui do lado
- Propaganda
- Ver Jesus com os olhos da carne
- Processos Antifrágeis
- Cadeias, crimes e cidadãos de bem
- Castas hindus em nova chave
- Método científico
- Xampu
- Thafne venceu o Soletrando 2008.
- Empreendendorismo de boteco
- Problemas com Russell Kirk
- Pequenos problemas que o Estado cria para a sociedade e que não são sempre lembrados
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Criteria for activating Drivechain on Bitcoin
Drivechain is, in essence, just a way to give Bitcoin users the option to deposit their coins in a hashrate escrow. If Bitcoin is about coin ownership, in theory there should be no objection from anyone to users having that option: my keys, my coins, etc. In other words: even if you think hashrate escrows are a terrible idea and miners will steal all the coins deposited in them, you shouldn't care about what other people do with their own money.
There are only two reasonable objections that could be raised by normal Bitcoin users against Drivechain:
1. Drivechain adds code complexity to `bitcoind`
2. Drivechain perverts miner incentives of the Bitcoin chain
If these two objections can be reasonably answered there remains no reason for not activating the Drivechain soft-fork.
1
To address 1 we can just take a look at the code once it's done (which I haven't) but from my understanding the extra validation steps needed for ensuring hashrate escrows work are very minimal and self-contained, they shouldn't affect anything else and the risks of introducing some catastrophic bug are roughly zero (or the same as the risks of any of the dozens of refactors that happen every week on Bitcoin Core).
For the BMM/BIP-301 part, again the surface is very small, but arguably we do not need it at all, since anyprevout (once it is merged) enables blind merge-mining in a way that is probably better than BIP-301, and that soft-fork is also very simple, already loved and accepted by most of the Bitcoin community, implemented and reviewed on Bitcoin Inquisition, and live on the official Bitcoin Core signet.
2
To address 2 we need only point out that BMM ensures Bitcoin miners don't have to do any extra work to earn basically all the fees that would come from the sidechain, as competition for mining sidechain blocks would bid the fee paid to Bitcoin miners up to the maximum economical amount. It is irrelevant whether there is MEV on the sidechain or not: everything that reaches the Bitcoin chain does so in the form of fees paid in a single high-fee transaction to any Bitcoin miner, regardless of whether they know about the sidechain or not. Therefore, there is no centralization pressure or perverse mining incentive that can affect Bitcoin land.
Sometimes it's argued that Drivechain may facilitate the occurrence of a transaction paying a fee so high it would create incentives for reorging the Bitcoin chain. There is no reason to believe Drivechain would make this more likely than an actual attack anyone can already perform today or, as has happened, some rich person typing numbers wrong in his wallet. In fact, if a drivechain is consistently paying high fees in its BMM transactions, that is an incentive for Bitcoin miners to keep mining those transactions one after the other and not harm the users of the sidechain by reorging Bitcoin.
Moreover, there are many factors that exist today that can be seen as centralization vectors for Bitcoin mining: arguably one of them is non-blind merge mining, of which we have a (very convoluted) example on the Stacks shitcoin, and introducing the possibility of blind merge-mining on Bitcoin would basically remove any reasonable argument for having such schemes, therefore reducing the centralizing factor of them.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
The Lightning Network solves the problem of the decentralized commit
Before reading this, see Ripple and the problem of the decentralized commit.
The Bitcoin Lightning Network can be thought of as a system similar to Ripple: there are conditional IOUs (HTLCs) that are sent in "prepare"-like messages across a route, and a secret `p` that must travel from the final receiver backwards through the route until it reaches the initial sender; possession of that secret serves to prove the payment as well as to make the IOU hold true.

The difference is that if one of the parties doesn't send the "acknowledge" in time, the other has a trusted third party with its own clock (the clock that is valid for everybody involved) to complain to immediately at the timeout: the Bitcoin blockchain. If C has `p` and B isn't acknowledging it, C tells the Bitcoin blockchain and it will force the transfer of the amount from B to C.
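A tiny sketch of the hash-lock mechanism at the core of an HTLC (in Python; real HTLCs are Bitcoin scripts that also carry a refund-after-timeout branch, omitted here):

```python
import hashlib

def htlc(hash_lock: bytes, amount: int):
    # Funds are committed upfront; whoever shows the preimage before the
    # timeout gets them, otherwise they go back to the sender.
    def claim(preimage: bytes) -> int:
        assert hashlib.sha256(preimage).digest() == hash_lock, "wrong secret"
        return amount
    return claim

p = b"the receiver's secret"
offer = htlc(hashlib.sha256(p).digest(), amount=1000)
assert offer(p) == 1000  # revealing p both claims the money and proves payment
```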
Differences (or 1 upside and 3 downsides)
-
The Lightning Network differs from a "pure" Ripple network in that when we send a "prepare" message on the Lightning Network, unlike on a pure Ripple network we're not just promising we will owe something -- instead we are putting the money on the table already for the other to get if we are not responsive.
-
The feature above removes the trust element from the equation. We can now have relationships with people we don't trust, as the Bitcoin blockchain will serve as an automated escrow for our conditional payments and no one will be harmed. Therefore it is much easier to build networks and route payments if you don't always require trust relationships.
-
However, it introduces the cost of capital. A ton of capital must be made available in channels and locked in HTLCs so payments can be routed. This leads to potential issues like the ones described in https://twitter.com/joostjgr/status/1308414364911841281.
-
Another issue that comes with the necessity of using the Bitcoin blockchain as an arbiter is that it may cost a lot in fees -- much more than the value of the payment that is being disputed -- to enforce it on the blockchain.[^closing-channels-for-nothing]
Solutions
Because the downsides listed above are so real and problematic -- and much more so when attacks from malicious peers are taken into account --, some have argued that the Lightning Network must rely on at least some trust between peers, which partly negates the benefit.
The introduction of purely trust-backed channels is the next step in the reasoning: if we are trusting already, why not make channels that don't touch the blockchain and don't require peers to commit large amounts of capital?
The reason is, again, the ambiguity that comes from the problem of the decentralized commit. Hosted channels can therefore be good when trust is required from only one side, as in the final hops of payments, but they cannot work in the middle of routes without eroding trust relationships between peers (they can be useful, however, when employed as channels between two nodes run by the same person).
The next solution is a revamped pure Ripple network, one that solves the problem of the decentralized commit in a different way.
[^closing-channels-for-nothing]: That is true even when the payment is so small that it doesn't even deserve an actual HTLC that can be enforced on the chain (as per the protocol): even then, the channel between the two nodes will be closed, only to make it very clear that there was a disagreement. Leaving it open would be harmful, as one of the peers could repeat the attack again and again. This is proof that ambiguity, in the case of the pure Ripple network, is a very important issue.
-
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Deodorant pill
In episode who-knows-which of Aleixo FM, Bruno Aleixo says that drunks always have the best ideas, and then tells of an idea he had while drunk: a pill that works as a deodorant. Instead of applying spray or roll-on deodorant, a person can just take the pill and be done with it. It's much more practical, and in cold weather one can get dressed faster, without having to stand there bare-chested applying things. When Busto asks him whether something like that could be manufactured, he says he doesn't know, he's not a scientist, he just has the ideas.

This silly passage from a comedy show hides a truth about the scientistic doctrine that permeates society: the doctrine according to which innovations, technological and of all other kinds, come from science, and therefore the State must take money from working people and give it to scientists. At this point no one knows what a scientist even is anymore; all concreteness is gone, only the name remains: "scientist". So they go looking for this scientist: he's a guy who graduated from a university and is doing a master's degree. Done, just give money to this guy and everything will be fine.

Beyond the problem of the disconnect between reality and the thesis, there is also, of course, the problem of the thesis itself: it makes no sense for a scientist to go searching for ways to realize an idea -- one that no one knows is either possible or desirable -- that he or someone else had; quite the contrary (but I won't say here what the scientist was supposed to do instead, because that would be contradictory, and I don't think scientists should even exist).

What I really wanted to say was: the entire scientific apparatus of our society -- all the departments, universities, budgets, grants and journals -- boils down to a bunch of people trying to figure out how to make a deodorant pill.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Flowi.es
At the time I thought Workflowy had the ideal UI for everything. I wanted to implement my custom app maker on it, but ended up doing this: a platform for enhancing Workflowy with extra features:
- An email reminder based on dates input in items
- A website generator, similar to Websites For Trello, also based on Classless Templates
Also, I had forgotten that this was based on CouchDB too and had some couchapp functionality.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Reasons for miners to not steal
See Drivechain for an introduction. Here we'll just have a list of reasons why miners would not steal:
- they will lose future fees from that specific drivechain: you can discount all future fees and condense them into a single present number in order to do some mathematical calculation (see the sketch after this list).
- they may lose future fees from all other Drivechains, if the users assume they will steal from those too.
- Bitcoin will be devalued if they steal, because:
- Bitcoin is worth more if it has Drivechains working, because it is more useful, has more use-cases, more users. Without Drivechains it necessarily has to be worth less.
- Bitcoin has more fee revenue if it has Drivechains working, which means it has a bigger chance of surviving going forward and of being more censorship-resistant and resistant to State attacks; therefore it has to be worth more if Drivechains work and less if they don't.
- Bitcoin is worth more if the public perception is that Bitcoin miners are friendly and doing their work peacefully, instead of being a band of revolted peons constantly threatening to use their 75% hashrate to do evil things such as:
- double-spending attacks;
- censoring of transactions for a certain group of people;
- selfish mining.
- if Bitcoin is devalued its price is bound to fall, meaning that miners will lose on
- their future mining rewards;
- their ASIC investment;
- the same coins they are trying to steal from the drivechain.
- if a mining pool tries to steal, they will risk losing their individual miners to other pools that don't.
- whenever a steal attempt begins, the coins in the drivechain will lose value (if the steal attempt is credible, their price will drop quite substantially), which means that if a coalition of miners really tries to steal, there is an incentive for another coalition of miners to buy some devalued coins and then stop the steal.
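For the first point, a quick sketch of how one might condense future fees into a present value (in Python, with made-up numbers; the point is only that recurring fee revenue is worth a large single sum today):

```python
def present_value(fee_per_block: float, discount_per_block: float, blocks: int) -> float:
    # Standard discounted sum: each future fee is worth a bit less today.
    return sum(fee_per_block / (1 + discount_per_block) ** t for t in range(1, blocks + 1))

# Say a drivechain pays 0.5 BTC in fees per Bitcoin block on average, and we
# discount at 0.0001% per block (roughly 5% a year) over 4 years of blocks:
future_fees = present_value(0.5, 0.000001, 4 * 52560)
print(f"~{future_fees:,.0f} BTC of present value at risk if miners steal")
```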
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
idea: a website for feedback exchange
I thought a community of people sharing feedback on mutual interests would be a good thing, so, as always, I broadened and generalized the idea, mixed it with my old criticue-inspired idea-feedback project, and turned it into a "token". You give feedback on other people's things, they give you a "point". You can then use that point to request feedback from others.
This could be made as an Etleneum contract so these points were exchanged for satoshis using the shitswap contract (yet to be written).
In this case all the Bitcoin/Lightning side of the website must be hidden until the user has properly gone through the usage flow and earned points.
If it was to be built on Etleneum then it needs to emphasize the login/password login method instead of the lnurl-auth method. And then maybe it could be used to push lnurl-auth to normal people, but with a different name.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
On the state of programs and browsers
There are basically (not exhaustively) 2 kinds of programs one can run on a computer nowadays:
1.1. A program that is installed, permanent, has direct access to the Operating System, can draw whatever it wants, modify files, interact with other programs and so on; 1.2. A program that is transient, fetched from someone else's server at run time, interpreted, rendered and executed by another program that bridges the access of that transient program to the OS and other things.
Meanwhile, web browsers have basically (not exhaustively) two use cases:
2.1. Display text, pictures, videos hosted on someone else's computer; 2.2. Execute incredibly complex programs that are fetched at run time, executed and so on -- you get it, it's the same 1.2.
These two use cases for browsers are at big odds with one another. While stretching itself to become more and more a platform for programs that can do basically anything (in the 1.1 sense), the browser is still restricted to being a 1.2 platform. At the same time, websites that were supposed to be 2.1 sometimes get confused and start acting as if they were 2.2 -- and other confusing, mixed-up stuff.
I could go hours in philosophical inquiries on the nature of browsers, how rewriting everything in JavaScript is not healthy or where everything went wrong, but I think other people have done this already.
One thing that bothers me a lot, though, is that computers can do a lot of things, and with the internet and the current state of technology it's fairly easy to implement tools that would help in many aspects of human existence and provide high-quality, useful programs; with the help of a server to coordinate access, store data, authenticate users and so on, many things are possible. However, due to the nature of UI in the browser, it's very hard to get any useful tool to users.
Writing a UI, even the most basic UI imaginable (some text input boxes and some buttons, or a table), can take a long time -- always more than the time necessary to code the actual core features of whatever program is being developed -- and that is assuming the person capable of writing interesting programs that implement the functionality in the backend is also capable of dealing with JavaScript and the giant amount of frameworks, transpilers, styling stuff, CSS, the fact that all this is built on top of HTML, and so on.
This is not good.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
Module Linker
A browser extension that reads source code on GitHub and tries to find links to imported dependencies so you can click on them and navigate through either GitHub or package repositories or base language documentation. Works for many languages at different levels of completeness.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
bolt12 problems
- clients can't programmatically build new offers by changing a path or query params (services like zbd.gg or lnurl-pay.me won't work) -- see the sketch after this list
- impossible to use in a load-balanced custodial setup, since offers would have to be pregenerated and tied to a specific lightning node.
- the existence of fiat currency fields makes it so wallets have to fetch exchange rates from somewhere on the internet (or offer a bad user experience), using HTTP which hurts user privacy.
- the vendor field is misleading, can be phished very easily, not as safe as a domain name.
- onion messages are an improvement over fake HTLC-based payments as a way of transmitting data, for sure. but we must decide if they are (i) suitable for transmitting all kinds of data over the internet, a replacement for tor; or (ii) not something that will scale well or on which we can count on for the future. if there was proper incentivization for data transmission it could end up being (i), the holy grail of p2p communication over the internet, but that is a very hard problem to solve and not guaranteed to yield the desired scalability results. since not even hints of attempting to solve that are being made, it's safer to conclude it is (ii).
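to illustrate the first point, a sketch (in python; the endpoint and the offer string are made up for illustration) of what HTTP-based lnurl flows allow and static offers don't:

```python
from urllib.parse import urlencode

def lnurl_pay_endpoint(user: str, amount_msat: int, note: str) -> str:
    # with lnurl-pay, a client can synthesize endpoints on the fly just by
    # editing the path and query string (hypothetical service and params):
    return f"https://lnurl-pay.me/{user}?" + urlencode(
        {"amount": amount_msat, "comment": note}
    )

print(lnurl_pay_endpoint("alice", 21000, "thanks!"))

# a bolt12 offer, by contrast, is an opaque signed blob tied to one node's
# keys; a client can only use it verbatim, it cannot derive a new one:
offer = "lno1pg257enxv4e..."  # pregenerated, immutable (truncated example)
```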
bolt12 limitations
- not flexible enough. there are some interesting fields defined in the spec, but who gets to add more fields later if necessary? very unclear.
- services can't return any actionable data to the users who paid for something. it's unclear how business can be conducted without an extra communication channel.
bolt12 illusions
- recurring payments is not really solved, it is just a spec that defines intervals. the actual implementation must still be done by each wallet and service. the recurring payment cannot be enforced, the wallet must still initiate the payment. even if the wallet is evil and is willing to initiate a payment without the user knowing it still needs to have funds, channels, be online, connected etc., so it's not as if the services could rely on the payments being delivered in time.
- people seem to think it will enable pushing payments to mobile wallets, which it does not and cannot.
- there is a confusion of contexts: it looks like offers are superior to lnurl-pay, for example, because they don't require domain names. domain names, though, are common and well-established among internet services and stores, because these services have websites, so this is not really an issue. it is an issue, though, for people who want to receive payments in their homes. for these, indeed, bolt12 offers a superior solution -- but at the same time bolt12 seems to be selling itself as a tool for merchants and service providers when it includes and highlights features such as recurring payments and refunds.
- the privacy gains for the receiver that are promoted as part of bolt12 in fact come from a separate proposal, blinded paths, which should work for all normal lightning payments and is indeed a very nice solution. it is (or at least was, and should be) independent from the bolt12 proposal. a separate proposal that can be (and already is being) used right now also improves privacy for the receiver very much anyway: it's called trampoline routing.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
A big Ethereum problem that is fixed by Drivechain
While reading the following paragraphs, assume Drivechain itself will be a "smart contract platform", like Ethereum. And that it won't be used to launch an Ethereum blockchain copy, but instead each different Ethereum contract could be turned into a different sidechain under BIP300 rules.
A big Ethereum problem
Anyone can publish any "contract" to Ethereum. Often people will come up with somewhat interesting ideas and publish them. Since they want money, they will add an unnecessary token and use that to bring revenue to themselves, gamify the usage of their contract somehow, and keep some control over the supposedly open protocol they've created by keeping a majority of the tokens. They will spend the profits on marketing and branding, have a visual identity, a central website and a forum with support personnel, and so on: their somewhat interesting idea has become a full-fledged company.
If they have success then another company will appear in the space and copy the idea, launch it using exactly the same strategy with a tweak, then try to capture the customers of the first company and new people. And then another, and another, and another. Very often these contracts require some network effect to work, i.e., they require people to be using them so others will use them. The fact that the market is now split into multiple companies offering roughly the same product hurts that, such that none of these protocols ever gets enough usage to become really useful in the way it was first conceived. At this point it doesn't matter, though: they get some usage, and they use that in their marketing material. It becomes a race to pump the value of the tokens, and the current usage is just another data point used for that purpose. The company will even start giving out money to attract new users, and make other weird moves that have no relationship with the initial somewhat interesting idea.
Once in a lifetime it happens that the first implementer of these things is not a company seeking profits, but some altruistic developer or company that believes in Ethereum and wants to see it grow -- or, more likely, someone financed by the Ethereum Foundation, which allegedly doesn't like these token schemes and would prefer everybody to use the token they issued first, the ETH --, but that's a fruitless enterprise, because someone else will copy the idea anyway and turn it into a company as described above.
How Drivechain fixes it
In the Drivechain world, if someone had an idea, they would -- as it happens all the time with Bitcoin things -- publish it in a public forum. Other members of the community would evaluate that idea, add or remove things, all interested parties would contribute to make it the best possible incarnation of that idea. Once the design was settled, someone would volunteer to start writing the code to turn that idea into a sidechain. Maybe some company would fund those efforts and then more people would join. It's not a perfect process and one that often involves altruism, but Bitcoin inspires people to do these things.
Slowly, the thing would get built, tested, activated as a sidechain on testnet, tested more, and at this point luckily the entire community of interested Bitcoin users and miners would have grown to like that idea and see its benefits. It could then be proposed to be activated according to BIP300 rules.
Once it was activated, the entire pool of interested users would join it. And it would be impossible for someone else to create a copy of that because everybody would instantly notice it was a copy. There would be no token, no one profiting directly from the operations of that "smart contract". And everybody would be incentivized to join and tell others to join that same sidechain since the network effect was already the biggest there, they will know more network effect would only be good for everybody involved, and there would be no competing marketing and free token giveaways from competing entities.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
A response to Achim Warner's "Drivechain brings politics to miners" article
I mean this article: https://achimwarner.medium.com/thoughts-on-drivechain-i-miners-can-do-things-about-which-we-will-argue-whether-it-is-actually-a5c3c022dbd2
There are basically two claims here:
1. Some corporate interests might want to secure sidechains for themselves and thus they will bribe miners to have these activated
First, it's hard to imagine why they would want such a thing. Are they going to make a proprietary KYC chain only for their users? They could do that in a corporate way, or with a federation, like Facebook tried to do, and that would provide more value to their users than a cumbersome pseudo-decentralized system in which they don't even have the power to issue currency. Also, if Facebook couldn't get away with their federated shitcoin because the government was mad, what says the government won't be mad at a sidechain? And finally, why would Facebook want to give custody of their proprietary closed-garden Bitcoin-backed ecosystem coins to a random, open and always-changing set of miners?
But even if they do succeed in making their sidechain, and it is so popular that it pays miners fees and people love it -- well, then why not? Let them have it. It's not going to hurt anyone more than a proprietary shitcoin would anyway. If Facebook really wants a closed ecosystem backed by Bitcoin, that probably means we are winning big.
2. Miners will be required to vote on the validity of debatable things
He cites the examples of a PoS sidechain, an assassination market, a sidechain full of nazis, a sidechain deemed illegal by the US government, and so on.
There is a simple solution to all of this: just kill these sidechains. Either miners can take the money from these to themselves, or they can just refuse to engage and freeze the coins there forever, or they can even give the coins to governments, if they want. It is an entirely good thing that evil sidechains or sidechains that use horrible technology that doesn't even let us know who owns each coin get annihilated. And it was the responsibility of people who put money in there to evaluate beforehand and know that PoS is not deterministic, for example.
About government censoring and wanting to steal money, or criminals using sidechains, I think the argument is very weak because these same things can happen today and may even be happening already: i.e., governments ordering mining pools to not mine such and such transactions from such and such people, or forcing them to reorg to steal money from criminals and whatnot. All this is expected to happen in normal Bitcoin. But both in normal Bitcoin and in Drivechain decentralization fixes that problem by making it so governments cannot catch all miners required to control the chain like that -- and in fact fixing that problem is the only reason we need decentralization.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
New songs and familiar songs
When listening to background music, choose songs you know well. To get to know new songs, set aside some time and listen to them with full attention.

Something similar happens with driving along familiar routes versus driving in new places: the first option lets you do other things while driving "in the background"; the second requires full attention.

With music, I have constantly made the mistake of thinking I can get to know new songs while dedicating myself to other tasks.
See also:
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
neuron.vim
I started using this neuron thing to create and update this same zettelkasten, but the existing vim plugin had too many problems, so I forked it and ended up changing almost everything.
Since the upstream repository was somewhat abandoned, most users and people who were trying to contribute upstream migrated to my fork too.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
IPFS problems: Community
I was an avid IPFS user until yesterday. Many, many times I asked, in the #ipfs IRC channel on Freenode, simple questions for which I couldn't find an answer on the internet. Most of the time I didn't get an answer, and even when I did it was rarely from someone who knew IPFS deeply. I've had issues go unanswered on js-ipfs repositories for years -- one of these was raising awareness of a problem that then got fixed some months later by a complete rewrite; I closed my own issue after realizing that by myself a couple of months later, and I don't think the people responsible for the rewrite ever acknowledged that they had fixed my issue.
Some days ago I asked some questions about how the IPFS protocol worked internally, sincerely trying to understand the inefficiencies in finding and fetching content over IPFS. I pointed out that it would be a good idea to have a drawing showing that, so people would understand the difficulties (which I didn't) and wouldn't be pissed off by the slowness. I was told to read the whitepaper. I had already read the whitepaper, but I read the relevant parts again. The whitepaper doesn't explain anything about the DHT or how IPFS finds content. I said that in the room, and was told to read it again.
Before anyone misread this section, I want to say I understand it's a pain to keep answering people on IRC if you're busy developing stuff of interplanetary importance, and that I'm not paying anyone nor I have the right to be answered. On the other hand, if you're developing a super-important protocol, financed by many millions of dollars and a lot of people are hitting their heads against your software and there's no one to help them; you're always busy but never delivers anything that brings joy to your users, something is very wrong. I sincerely don't know what IPFS developers are working on, I wouldn't doubt they're working on important things if they said that, but what I see – and what many other users see (take a look at the IPFS Discourse forum) is bugs, bugs all over the place, confusing UX, and almost no help.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28IPFS problems: Inefficiency
Imagine you have two IPFS nodes, with unique content, created by you, on the first one. From the second you can connect to the first, and everything looks right. You then try to fetch that content. After some seconds it starts coming and the progress bar begins to move -- but it's slow, very slow; doing an rsync would have been 20 times faster.
The progress bar halts. You investigate: the second node is not connected to the first anymore. Why, if that was the only source for the file we're trying to fetch? It remains a mystery to this day. You reconnect manually, the progress bar moves again, halts, you're disconnected again. Instead of reconnecting, you decide to add the second node to the first node's "Bootstrap" list.
I once tried to run an IPFS node on a VPS and store content on S3. There are two S3 datastore plugins available. After fixing some issues in one of them, recompiling go-ipfs, figuring out how to read settings from the IPFS config file, creating an init profile and recompiling again, I got the node running. It worked. My idea was to host a bunch of data on that node. Data would be fetched from S3 on demand, so there would be cheap and fast access to it from any IPFS node or gateway.
IPFS started making hundreds of calls to S3 per minute -- something I wouldn't have known about if I hadn't inserted some log statements in the plugin code, I mean, before the huge AWS bill arrived. Apparently that was part of participating in the DHT. Adjusting some settings turned my node into a listen-only thing, as I intended, but I'm not 100% sure it would still have worked as an efficient content provider -- and I'll never know, as the memory and CPU usage got too high for my humble VPS and I had to shut it down.
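(For reference: in current go-ipfs -- now kubo -- the kind of adjustment that does this is the `Routing.Type` setting: `ipfs config Routing.Type dhtclient` makes the node query the DHT without serving it, and `ipfs config Routing.Type none` opts out of DHT routing entirely. The option name here is taken from today's documentation and may not match what existed at the time of this story.)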
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Module Linker
A browser extension that reads source code on GitHub and tries to find links to imported dependencies so you can click through to them, either on GitHub, on package repositories, or in the base language documentation. Works for many languages, at different levels of completeness.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28ijq
An interactive REPL for `jq` with smart helpers (for example, it automatically assigns each line of input to a variable so you can reference it later, and it automatically references the previous line).
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Trelew
A CLI tool for navigating Trello boards. It used vorpal for an "immersive" experience and was pretty good.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Using Spacechains and Fedimint to solve scaling
What if, instead of trying to create complicated "layer 2" setups involving nouveau cryptographic techniques, we just did the following:
- we take the Fedimint source code, remove the "mint" stuff, and just use their federation machinery to secure coins with multisig;
- then we make a spacechain;
- and we make the federations issue multisig-btc tokens on it;
- and then we put some uniswap-like thing in there to allow these tokens to be exchanged freely (a sketch of that part follows at the end of this note).
Why?
The recent spike in fees caused by Ordinals and BRC-20 shitcoinery has shown that Lightning isn't a silver bullet. Channels are too fragile, it costs a lot to open a channel in a high-fee environment, it costs a lot to run a routing node, and so on.
People who want to keep using Lightning are instead flocking to the big custodial Lightning providers: WalletofSatoshi, ZEBEDEE, OpenNode and so on. We could leverage the trust people already have in these companies (and in the individuals operating shadow Lightning providers) and turn each of them into a btc-token issuer. Each issues its own token, transactions flow freely, and each person holds only the assets of the issuers they trust most.
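To make that last step concrete, here is a minimal sketch of the "uniswap-like thing": a constant-product pool that would let people swap one issuer's btc-token for another's. Everything here is illustrative -- the token names, the fee, and Python itself as a stand-in for however such a contract would actually be expressed on a spacechain:

```python
# Minimal constant-product AMM sketch for swapping two issuer tokens,
# e.g. walletofsatoshi-btc ("A") for zebedee-btc ("B"). Illustrative only.

class Pool:
    def __init__(self, reserve_a: int, reserve_b: int, fee_ppm: int = 3000):
        self.reserve_a = reserve_a  # pool's stock of token A, in sats
        self.reserve_b = reserve_b  # pool's stock of token B, in sats
        self.fee_ppm = fee_ppm      # swap fee, in parts per million (0.3%)

    def swap_a_for_b(self, amount_in: int) -> int:
        # take the fee on the way in
        effective_in = amount_in * (1_000_000 - self.fee_ppm) // 1_000_000
        # the constant-product rule: reserve_a * reserve_b stays (roughly) fixed
        k = self.reserve_a * self.reserve_b
        new_reserve_a = self.reserve_a + effective_in
        amount_out = self.reserve_b - k // new_reserve_a
        self.reserve_a += amount_in
        self.reserve_b -= amount_out
        return amount_out

pool = Pool(reserve_a=10_000_000, reserve_b=10_000_000)
print(pool.swap_a_for_b(100_000))  # ~98,700 sats of B; the price moves slightly
```

The nice property is that the pool needs no oracle and no operator: the exchange rate between two issuers' tokens is just the ratio of the reserves, and arbitrage keeps it honest.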
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Just malinvestment
Traditionally the Austrian Theory of Business Cycles has been explained and reworked in many ways, but the most widely accepted version (or the one closest to the views of Mises or Hayek) is that banks (or the central bank) cause the general interest rate to decline through the creation of new money, and that prompts entrepreneurs to invest in projects of longer duration. This can be confusing, because sometimes entrepreneurs embark on very short-term projects during one of these bubbles and still contribute to the overall cycle.
The solution to this "longer term" confusion is to think of the entire economy going long-term, not of individual entrepreneurs. So if one entrepreneur makes an investment in something that looks simple, he may actually, knowingly or not, be inserting himself into a bigger machine that is involved in producing longer-term things. Incidentally, this thinking also solves the biggest criticism of the Austrian Business Cycle Theory, the one from the rational-expectations people who say: "oh, but can't entrepreneurs know that the interest rate is artificially low and decide not to make long-term investments?" ("and if they don't know, shouldn't they lose money and be replaced, like in a normal economy?"). Well, the answer is that they are not really relying on the interest rate, they are only looking for profit opportunities -- and this is the key to another confusion that has always followed my thinking on this topic.
If a guy opens a bar in an area of town where many new buildings are being built during a "housing bubble", he may not know it, but he is inserting himself right into the eye of that business cycle. He expects all these building projects to continue, and all the people involved in them to keep getting paid and to be able to spend more at his bar, and so on. That is a bet that may or may not end up paying off.
Now what does that bar investment have to do with the interest rate? Nothing. It is just a guy who saw a business opportunity in a place where hungry people with money had no bar to buy things in, so he opened a bar. Additionally, the guy has made some calculations about all the ending, ongoing, and future building projects in the area, and about the people who would live or work there afterwards (after all, the buildings were being built with the expectation of being used), and so on -- there are no interest rate calculations involved. And yet it may be a malinvestment, because some building projects will end up being canceled and the actual usage of the finished ones will turn out to be smaller than predicted.
This bubble may have been caused by a decline in interest rates that prompted some people to start buying houses they wouldn't have bought otherwise, but that is just a small detail. The bubble can only be kept going by a constant influx of new money into the economy; the focus on the interest rate is wrong. If new money is printed and used by the government to buy ships, there will be a boom and a bubble in the ship market, involving every part of the ship production process -- and also the bars opened near the areas of town where ships are built and where new people are being hired at higher salaries to do things that will eventually contribute to the production of ships that will then be sold to the government.
It's not interest rates or the length of the production process that matters, it's just printed money and malinvestment.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28idea: Custom multi-use database app
Since 2015 I have had this idea of making one app that could be repurposed into a full-fledged app for all kinds of uses, like powering small business accounting and so on. Hackable and open like an Excel file, but more efficient, without the hassle of making tables, and with ids and indexes under the hood so different kinds of things can be related together in various ways.
It is not a concrete thing, just a generic idea that has taken multiple forms over the years and may take others in the future. I've made quite a few attempts at implementing it, but never finished any.
I used to refer to it as a "multidimensional spreadsheet".
Can also be related to DabbleDB.
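To give the idea a slightly more concrete shape, here is a minimal sketch of the kind of data model I mean -- all names and fields are made up for illustration: every row is just a record with an id and free-form fields, and relations are fields that point to other ids, so the same store can power contacts, invoices, inventory, whatever:

```python
# Minimal sketch: generic records related by ids, like a hackable
# spreadsheet with relations. Names and fields are invented here.
import itertools

_next_id = itertools.count(1)
records: dict[int, dict] = {}

def add(**fields) -> int:
    """Store a record of any shape; return its id."""
    rid = next(_next_id)
    records[rid] = fields
    return rid

def follow(rid: int, field: str) -> dict:
    """Follow a field that holds another record's id."""
    return records[records[rid][field]]

customer = add(kind="customer", name="Dona Maria")
sale = add(kind="sale", customer=customer, amount=150, note="10kg of flour")

print(follow(sale, "customer")["name"])  # => Dona Maria
```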
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28hledger-web
A Haskell app that uses Miso and hledger's Haskell libraries, plus ghcjs, to compile to a web page, and then adds optional remoteStorage support so you can store your ledger data somewhere else.
This was my introduction to Haskell. It was also built at a time when I thought remoteStorage was a good idea that solved many problems, and that it could use some help in the form of yet another somewhat-useless-but-cool project using it that could be added to their wiki.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28idea: clarity.fm on Lightning
Getting money from clients very easily, and dispatching that money to "world class experts" (what a silly way to market things, but I guess it works) just as easily -- these are jobs for Bitcoin and the Lightning Network.
EDIT 2020-09-04
My idea was that people would advertise themselves, so you would book an hour with people you already knew of, but it seems that clarity.fm has gone down the route of offering a "catalog of experts" to potential clients, probably full of verification processes and marketing. So I guess this is not a thing I can do.
Actually I did build https://s4a.etleneum.com/ (on Etleneum), which is somewhat similar, but of course it doesn't have the glamour, the network effect, or the marketing -- also it's just text, while on Clarity it's fancy calls.
Thinking about it, this is just a simple and obvious idea: copy things from the fiat world and redo them on Lightning. But maybe it is still worth pointing these out, given that there are hundreds of developers out there trying to make yet another lottery game with Lightning.
It may also be a good idea not just to copy fiat-business models but to change them, experimenting with new paradigms, like idea: Patreon, but simple, and without subscription.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Carl R. Rogers on science
I believe the primary goal of science is to provide a more secure hypothesis, conviction, and faith, one that better satisfies the investigator himself. Insofar as the scientist seeks to prove anything to someone else -- a mistake I have made more than once -- I believe he is using science to remedy a personal insecurity, diverting it from its true creative role in the service of the individual.
Tornar-se Pessoa (On Becoming a Person), random page
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28contratos.alhur.es
A website that allowed people to fill in a form and get a standard Contrato de Locação (a Brazilian rental agreement).
Better than all the other "templates" that float around the internet, which are badly formatted `.doc` files.
It was fully programmable, so other templates could be added later, but I never did. The website made maybe one dollar in Google Ads (and Google has probably stolen that, as they did with so many other dollars, through their bizarre requirements).
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Scientific method
The scientific method can only really be applied to half a dozen cases, and yet here we are, invoking it for everything.
"Formulate hypotheses and test them independently", "obtain a statistically significant amount of data": test, collect data, measure.
It's not that everyone suddenly decided to compute standard deviations; it's that it has become common for the more educated, Freakonomics-level crowd to think they must test and collect data, and never, ever trust their "intuition" or, worse, a line of reasoning that may seem right but is in fact hugely misleading.
Yes, it is true that reasonings with apparently sensible explanations are presented to us every day -- for an easy example, just picture a newspaper columnist, or even an innocent news story; better yet, think of a GloboNews commentator --, and yes, it is true that most of these explanations are false.
What is wrong is thinking that testing hypotheses is all that counts. You cannot test the apparently sensible explanation the taxi driver gives you about the Brazilian crisis -- should you then write it down to test later? Keep it forever in the stock of theories still waiting to be tested?
And the explanation that those little mesh screens save water when installed on a faucet? That one can be tested. So will you buy a water meter and leave the faucet running for 5 hours with the mesh, then 5 hours without it? Obviously that won't work if you open the tap the same amount; you will need a better criterion: the satisfaction of the person washing their hands with the end result versus the amount of water spent. Then you would need many people, but satisfaction is an immeasurable thing -- there is no point in trying to interview people before and after. So what is the right thing to do? Look for a scientific study published in a quality journal (because there are journals that accept computer-generated studies, so better be careful) that deals with these meshes? How do you know the mesh in the study is the same one you bought? And now that you have already bought it, does the result of the experiment even matter? (Of course it might: maybe the mesh makes you spend more water -- you will never know until you run the experiment.)
Why not, instead of condemning all reasoning as misleading and ordering people to run scientific experiments, teach them to reason correctly?
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28A pertinent comment by Olavo de Carvalho on events being improperly attributed to "spontaneous order"
Here is one example among a thousand others, taken from my class handouts, of how to analyze the relations between deliberate and accidental factors in historical action. Mr. Beltrão is INFINITELY BELOW the possibility of discussing these things, and for that very reason attributes to me a simple-mindedness that is his own, not mine:
I have quoted this paragraph from Georg Jellinek a thousand times and I will quote it again: "The phenomena of social life are divided into two classes: those that are determined essentially by a directing will and those that exist, or can exist, without any organization owed to acts of will. The former are necessarily subject to a plan, to an order emanating from a conscious will, in opposition to the latter, whose ordering rests on very different forces."
This distinction is crucial for historians and strategic analysts, not because it is clear in every case, but precisely because it is not. The most common error in this field of study consists in attributing to a conscious intention what results from an uncontrolled, and sometimes uncontrollable, combination of forces -- or, inversely, in failing to see, behind an apparently fortuitous constellation of circumstances, the intelligence that planned and subtly directed the course of events.
An example of the first error is the Protocols of the Elders of Zion, which sees behind practically everything bad that happens in the world the malign premeditation of a small number of people, a Jewish elite secretly gathered in some uncertain, unknown place.
What makes this fantasy especially convincing, some time after its publication, is that some of the events foreseen in it take place right before our eyes. The hasty reader sees in this a confirmation, jumping imprudently from the observation of the fact to the imputation of authorship. Yes, some of the ideas announced in the Protocols were carried out -- but not by a distinctly Jewish elite, much less for the benefit of the Jews, whose role in most cases consisted mainly in taking the blame. Many rich and powerful groups have ambitions of global domination, and once the book was published -- a book that in certain passages shows flashes of genuine Machiavellian strategic genius -- it was practically impossible for them to learn nothing from it and not try to put some of its schemes into practice, with the additional advantage that these schemes already came with a pre-fabricated scapegoat. It is likewise impossible that there should be no one of Jewish origin in the middle or at the top of these groups. A little deforming selectivity is therefore enough to swap cause for effect and the innocent for the guilty.
But the most common error these days is not that one. It is the opposite: the obstinate refusal to see any premeditation, any authorship, even behind remarkably convergent events that would otherwise have to be explained by the magical force of coincidence, by the action of angels and demons, by the "invisible hand" of market forces, or by hypothetical "laws of History" or "sociological constants" never proven, which in the observer's imagination direct everything anonymously and without human intervention.
The causes that generate this error are, roughly:
First: reducing human actions to the effects of impersonal, anonymous forces requires the use of abstract generic concepts that automatically lend this kind of approach the appearance of something very scientific. Much more scientific, to the lay observer, than the patient, meticulous historical reconstruction of the chains of facts that, under a veil of confusion, sometimes trace back to a discreet and almost imperceptible initial authorship. Since the study of historical-political phenomena is increasingly an academic occupation whose success depends on funding, sponsorships, backing in the popular media, and good relations with the establishment, it is almost inevitable that, faced with a question of this order, few will resist the temptation to kill the problem off with two or three elegant generalizations and shine as instant sages, rather than taking on the work of historical tracing that can demand decades of research.
Second: any group or entity that ventures into long-term historical-political action must possess not only the means to undertake it, but also, necessarily, the means to control its public repercussions, accentuating what suits it and covering up whatever might abort the intended results. This implies vast, deep, and lasting interventions in the mental environment. [Etc. etc. etc.]
(on facebook, July 17, 2013)
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28idea: An open log-based HTTP database for any use case
A single, read/write open database for everything in the world.
- A hosted database that accepts anything you put on it and stores it in order.
- Anyone can update it by adding new stuff.
- To make sense of the data you can read only the records that interest you, in order, and reconstruct a local state.
- Each updater pays a fee (anonymously, in satoshis) to store their piece of data.
- It's a single store for everything in the world.
Cost and price estimates
Prices for guaranteed storage for 3 years: 20 satoshis per KB, i.e. 20,000,000 satoshis per GB.
For comparison, https://www.elephantsql.com/ charges $10/mo for 1GB of data, which comes to 3,600,000 satoshis over 3 years.
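(That conversion implies a bitcoin price of roughly $10,000: $10/month over 36 months is $360, and 3,600,000 satoshis is 0.036 BTC, so $360 / 0.036 BTC = $10,000/BTC.)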
If 3 years is not enough, people can move their stuff elsewhere when the time comes, or pay to keep specific log entries around for longer.
Other considerations
- People provide a unique id when adding a log entry, so entries can be prefix-matched by it, like `myapp.something.random`.
- When fetching, instead of just getting raw data, add a (paid?) option to fetch and apply a `jq` map-reduce transformation to the matched entries.
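A minimal sketch of how the whole thing could look from a client's point of view. The endpoints, the invoice-based payment flow, and the response format are all assumptions invented for this sketch, not a real API:

```python
# Hypothetical client for the log database sketched above.
# Assumed API: POST /append returns a Lightning invoice to be paid;
# GET /log?prefix=... returns matching entries, one JSON object per line.
import json
import requests

BASE = "https://logdb.example.com"  # hypothetical host

def append(entry_id: str, data: dict) -> str:
    """Submit an entry under a prefix-matchable id; returns an invoice."""
    r = requests.post(f"{BASE}/append", json={"id": entry_id, "data": data})
    return r.json()["invoice"]  # pay this (e.g. 20 sat/KB) to confirm storage

def read(prefix: str) -> list[dict]:
    """Fetch, in insertion order, only the entries whose id matches prefix."""
    r = requests.get(f"{BASE}/log", params={"prefix": prefix})
    return [json.loads(line) for line in r.text.splitlines()]

# write: each app just picks its own id prefix
invoice = append("myapp.todos.1a2b", {"item": "buy flour", "done": False})
# ...pay `invoice` over Lightning, then the entry is stored...

# read: reconstruct local state from the relevant slice of the log
state = {}
for entry in read("myapp.todos."):
    state[entry["data"]["item"]] = entry["data"]["done"]
```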