-
@ 6871d8df:4a9396c1
2024-02-24 22:42:16
In an era where data seems to be as valuable as currency, the prevailing trend in AI starkly contrasts with the concept of personal data ownership. The explosion of AI and the ensuing race have made it easy to overlook where the data is coming from. The current model, dominated by big tech players, involves collecting vast amounts of user data and selling it to AI companies for training LLMs. Reddit recently penned a $60 million deal, Google guards and mines YouTube, and more are going in this direction. But is that their data to sell? Yes, it's on their platforms, but without the users to generate it, what would they monetize? To me, this practice raises significant ethical questions, as it assumes that user data is a commodity that companies can exploit at will.
The heart of the issue lies in the ownership of data. Why, in today's digital age, do we not retain ownership of our data? Why can't our data follow us, under our control, to wherever we want to go? These questions echo the broader sentiment that while some in the tech industry — such as the blockchain-first crypto bros — recognize the importance of data ownership, their "blockchain for everything solutions," to me, fall significantly short in execution.
Reddit further complicates this with its current move to IPO, which, on the heels of the large data deal, might reinforce the mistaken belief that user-generated data is a corporate asset. Others, no doubt, will follow suit. This underscores the urgent need for a paradigm shift towards recognizing and respecting user data as personal property.
In my perfect world, the digital landscape would undergo a revolutionary transformation centered around the empowerment and sovereignty of individual data ownership. Platforms like Twitter, Reddit, Yelp, YouTube, and Stack Overflow, integral to our digital lives, would operate on a fundamentally different premise: user-owned data.
In this envisioned future, data ownership would not just be a concept but a practice, with public and private keys ensuring the authenticity and privacy of individual identities. This model would eliminate the private data silos that currently dominate, where companies profit from selling user data without consent. Instead, data would traverse a decentralized protocol akin to the internet, prioritizing user control and transparency.
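As a rough sketch of how this could work in practice: the user, not the platform, holds a keypair, signs everything they publish, and any service can verify authorship against the public key. The example below is a minimal illustration using Python's `cryptography` library, assuming an ed25519 identity key; it is not a specification of any particular protocol.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The user, not the platform, generates and keeps the private key.
user_key = Ed25519PrivateKey.generate()
post = b"my post, portable to any platform that speaks the protocol"

# Signing proves authorship without a platform-owned account.
signature = user_key.sign(post)

# Anyone with the public key can verify the post is authentic.
public_key = user_key.public_key()
try:
    public_key.verify(signature, post)
    print("authentic")
except InvalidSignature:
    print("forged or tampered with")
```

Under this model, followers subscribe to the key rather than to an account, which is what would let the data and social graph move between services.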
The cornerstone of this world would be a meritocratic digital ecosystem. Success for companies would hinge on their ability to leverage user-owned data to deliver unparalleled value rather than their capacity to gatekeep and monetize information. If a company breaks my trust, I can move to a competitor, and my data, connections, and followers will come with me. This shift would herald an era where consent, privacy, and utility define the digital experience, ensuring that the benefits of technology are equitably distributed and aligned with the users' interests and rights.
The conversation needs to shift fundamentally. We must challenge this trajectory and advocate for a future where data ownership and privacy are not just ideals but realities. If we continue on our current path without prioritizing individual data rights, the future of digital privacy and autonomy is bleak. Big tech's dominance allows them to treat user data as a commodity, potentially selling and exploiting it without consent. This imbalance has already led to users being cut off from their digital identities and connections when platforms terminate accounts, underscoring the need for a digital ecosystem that empowers user control over data. Without changing direction, we risk a future where our content — and our freedoms by consequence — are controlled by a few powerful entities, threatening our rights and the democratic essence of the digital realm. We must advocate for a shift towards data ownership by individuals to preserve our digital freedoms and democracy.
-
@ f977c464:32fcbe00
2024-01-30 20:06:18
On the third day after the sun disappeared, it was twenty past noon. There were about two hours until the train departed. We were all on the platform, lined up side by side with our suitcases, our boxes, our shapeless bags of every color, checking the enormous clock above us at short intervals.
But no matter how hard we stared at it, time refused to move at the speed we wanted. Everyone was stuck inside an ordinary natural event that should have lasted only a few minutes, trying to crawl their way out of the darkness.
I turned toward the waiting hall. Behind the vapor of my breath, with its thick stone walls and its windows, it stood as cold and secure as a morgue. That was why they had started carrying the corpses to places like it. So all human goodness had existed only because others were watching: the moment the days turned darker than the nights, people had gone straight for each other's throats, torn holes in each other's flanks, gouged out eyes and shattered skulls.
When the station's light flickered, I felt the darkness stab into the back of my neck. If the power went out now, in a place this crowded, what would become of us?
When one of the soldiers inside caught my gaze, I pretended to be focused on the clock again. Only a few minutes had passed.
“My fields are gone. They were full of sunflowers, tall as me. Ah, they're all dying. Afitap's flowers too, they're g-”
“Uncle, for the love of God, be quiet. You'll get us all burned.”
When you stood in the darkness, you began to wish to be invisible. No one should see you, no one should even hear your breath. And you shouldn't get under anyone's feet; they mustn't bump into you by accident, mustn't trip over you and stumble. Otherwise a rage whose real target was never you would begin to rumble, rolling over you, erasing the first shock and the pain you felt.
The first speaker, an oldish man with a flat cap on his head, was blowing on his calloused hands. His eyes and nose had gone red. The sun's disappearance was not, for him, a catastrophe in itself. At most, he feared its practical consequences for his life. The loss of a memory, and through it a little more erosion of someone he had already lost. As his reasons for staying alive dwindled, he was losing his means of staying alive too.
The young man who silenced him had wrapped his scarf around his whole head, trying to hide every part of his body except his beard and the eyes sunk deep into his face. He was one of those who, even without grasping exactly what the sun's disappearance meant, could feel its dread exactly as it was.
They didn't ask who the sun would be given to after it was taken from them. They hadn't asked any questions when it was given to them in the first place, either.
How do two hours pass?
I felt a sharp pain above my stomach, toward my right side. My liver. I shut my eyes. Someone set a metal object down on the ground beside me. It was a cage. I knew it even before the wet smell of the eagle inside reached my nose.
“Again?” the eagle asked me. Its wings bloody. Every piece of time had coiled itself onto its feathers. Its beak moved as if chewing something, a scrap of meat presumably. Old memories are not easily forgotten, it seems. Though it sat behind the thick bars of its cage right now, I alone knew this was a deception. At any moment it could spread its wings wide; the air, rippling with the motion, could stretch the cage until it widened enough to swallow us all; the bars might first appear to spread out beneath our feet, but in truth they would rise above us all and come crashing down on our heads.
I looked down. Train tracks, pinned in place with wooden sleepers, going nowhere yet reaching everywhere. If I jumped down and ran... My bag was too heavy. Before I could take even a few steps, the eagle would bury its face in my body.
“This time is different,” I answered it. “You can't say ‘again.' This isn't something I'm repeating. And you can't intone it as if I were making a mistake, either. This time, I know that people don't deserve it.”
“Then it's worse. It means you're an utter fool.”
“That people don't deserve it doesn't mean humanity doesn't.”
The youngish but sunken-faced soldier I had just locked eyes with was still looking at me. He hadn't realized I was talking to an eagle, no doubt. He must have thought I was talking to someone next to me. But as far as he could see, I wasn't looking at the one I was talking to. I had timidly bowed my head. When our eyes met once more, he said something to the other two soldiers inside, and while they turned to size me up, he stepped outside.
The two men beside us who had been talking earlier were now looking in astonishment, first at me, then at the eagle.
“Only this time your heart will break too, Prometheus,” the eagle said to me. “Maybe it will be the last. You know, these things you do cause a lot of trouble.”
It was protecting me, supposedly. That was what galled me most. Perched on my groin, its eyes gleaming behind its bloody face, the scream it had let out still trembled in my ears. In this scene, no one thought of anyone. Not the eagle, not the men beside us, not the soldier who must by now have come up behind me, not the train tracks, not the hours that would not pass...
When I turned around, the soldier really was there. That was how it had to be anyway; I had seen it, I knew. Prophecies... He was saying something, but though his mouth moved, no sound came out. He had slowed down, matched himself to the flow of the clock. You could tell from his spit frozen in the air that he was furious. I knew it was because he was afraid. They had been afraid every time. Their having forgotten me did not make their work any easier. It only made them give me a new name. With this one, it was certain they would curse me.
How pitiful, that what should be and what is are different. And that the nature of what should be is malevolent rests, somewhere, on exactly that.
“You fool,” the eagle said to me. A scream overtaking time itself. It had gotten ahead of us all, but it could stop no one.
How many stretches of two hours can you squeeze into eternity?
I couldn't squeeze in even one.
I was reaching for my bag. The soldier was lowering the rifle from his back. I was in no hurry; he was tripping over his own hands and feet. And yet everything would happen at exactly the moment it was meant to. The prophecy was at the tips of someone else's fingers.
The sun rose upon the earth with the crack of a rifle.
From the middle of my chest, sprawled across the rails, from inside a bag.
Note: This story was first published in Esrarengiz Hikâyeler in 2021.
-
@ 8ce092d8:950c24ad
2024-02-04 23:35:07
Overview
- Introduction
- Model Types
- Training (Data Collection and Config Settings)
- Probability Viewing: AI Inspector
- Match
- Cheat Sheet
I. Introduction
AI Arena is the first game built on collaboration between humans and artificial intelligence.
AI learns your skills through "imitation learning."
Official Resources
1. Official Documentation (Must Read): Everything You Need to Know About AI Arena. Watch the 2-minute video in the documentation to quickly understand the basic flow of the game.
2. Official Play-2-Airdrop competition FAQ site: https://aiarena.notion.site/aiarena/Gateway-to-the-Arena-52145e990925499d95f2fadb18a24ab0
3. Official Discord (Must Join): https://discord.gg/aiarenaplaytest for the latest announcements or to seek help. The team also has an exclusive channel there.
4. Official YouTube: https://www.youtube.com/@aiarena. Since the game has built-in tutorials, watching the videos is optional.
What is this game about?
- Although categorized as a platform fighting game, the core is a probability-based strategy game.
- Warriors take actions based on probabilities on the AI Inspector dashboard, competing against opponents.
- The game does not allow direct manual input of probabilities for each area but inputs information through data collection and establishes models by adjusting parameters.
- Data collection emulates fighting games, but training can be completed using a Dummy. As long as you can complete the in-game tutorial, you can master the game controls.
II. Model Types
Before training, there are three model types to choose from: Simple Model Type, Original Model Type, and Advanced Model Type.
It is recommended to try the Advanced Model Type after completing at least one complete training with the Simple Model Type and gaining some understanding of the game.
Simple Model Type
The Simple Model is akin to completing a form, and the training session is comparable to filling various sections of that form.
This model has 30 buckets. Each bucket can be seen as telling the warrior what action to take in a specific situation. There are 30 buckets, meaning 30 different scenarios. Within the same bucket, the probabilities for direction or action are the same.
For example: What should I do when I'm off-stage — refer to the "Recovery (you off-stage)" bucket.
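To picture what the tabular model is doing, here is a minimal sketch in Python. The bucket names and probabilities below are invented for illustration; in the game these values come from your collected data and training, not from hand-written code.

```python
import random

# Hypothetical fragment of a 30-bucket tabular model:
# each bucket maps a situation to a probability distribution over actions.
tabular_model = {
    "Recovery (you off-stage)": {"jump": 0.6, "special": 0.3, "idle": 0.1},
    "Opponent Active":          {"attack": 0.5, "shield": 0.3, "grab": 0.2},
}

def next_action(bucket: str) -> str:
    """Sample the warrior's next action from the bucket's distribution."""
    dist = tabular_model[bucket]
    actions, weights = zip(*dist.items())
    return random.choices(actions, weights=weights, k=1)[0]

print(next_action("Recovery (you off-stage)"))  # e.g. "jump"
```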
For all buckets, refer to this official documentation:
https://docs.aiarena.io/arenadex/game-mechanics/tabular-model-v2
Video (no sound): The entire training process for all buckets
https://youtu.be/1rfRa3WjWEA
Game version 2024.1.10. The method of saving is outdated. Please refer to the game updates.
Advanced Model Type
The "Original Model Type" and "Advanced Model Type" are based on Machine Learning, which is commonly referred to as combining with AI.
The Original Model Type consists of only one bucket, representing the entire map. If you want the AI to learn different scenarios, you need to choose a "Focus Area" to let the warrior know where to focus. A single bucket means that a slight modification can have a widespread impact on the entire model. This is where the "Advanced Model Type" comes in.
The "Advanced Model Type" can be seen as a combination of the "Original Model Type" and the "Simple Model Type". The Advanced Model Type divides the map into 8 buckets. Each bucket can use many "Focus Area." For a detailed explanation of the 8 buckets and different Focus Areas, please refer to the tutorial page (accessible in the Advanced Model Type, after completing a training session, at the top left of the Advanced Config, click on "Tutorial").
III. Training (Data Collection and Config Settings)
Training Process:
- Collect Data
- Set Parameters, Train, and Save
- Repeat Step 1 until the Model is Complete
Training the Simple Model Type is the easiest to start with; refer to the video above for a detailed process.
Training the Advanced Model Type offers more possibilities through the combination of "Focus Area" parameters, providing a higher upper limit. While the Original Model Type has great potential, it's harder to control. Therefore, this section focuses on the "Advanced Model Type."
1. What Kind of Data to Collect
- High-Quality Data: Collect purposeful data. Garbage in, garbage out. Only collect the necessary data; don't collect randomly. It's recommended to use Dummy to collect data. However, don't pursue perfection; through parameter adjustments, AI has a certain level of fault tolerance.
- Balanced Data: Balance your dataset. In simple terms, if you complete actions on the left side a certain number of times, also complete a similar number on the right side. While data imbalance can be addressed through parameter adjustments (see below), it's advised not to have this issue during data collection.
- Moderate Amount: A single training will include many individual actions. Collect data for each action 1-10 times; personally, I recommend 2-3 times per action. If the effect of a single training is unclear, conduct a second (or even third) training with the same content but different parameter settings.
2. What to Collect (and Focus Area Selection)
Game actions mimic fighting games, consisting of 4 directions + 6 states (Idle, Jump, Attack, Grab, Special, Shield). Directions can be combined into ↗, ↘, etc. These directions and states can then be combined into different actions.
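As a back-of-the-envelope view of the resulting action space, the four base directions plus their diagonal combinations and a neutral position pair up with the six states. The encoding below is hypothetical; the game may represent actions differently.

```python
from itertools import product

# Hypothetical encoding of the combined action space.
base_directions = ["left", "right", "up", "down"]
diagonals = ["up-left", "up-right", "down-left", "down-right"]  # e.g. ↗, ↘
directions = ["neutral"] + base_directions + diagonals
states = ["Idle", "Jump", "Attack", "Grab", "Special", "Shield"]

actions = list(product(directions, states))
print(len(actions))  # 54 direction-state pairs in this toy encoding
```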
To make "Focus Area" effective, you need to collect data in training that matches these parameters. For example, for "Distance to Opponent", you need to collect data when close to the opponent and also when far away. * Note: While you can split into multiple training sessions, it's most effective to cover different situations within a single training.
Refer to the Simple Config, categorize the actions you want to collect, and based on the game scenario, classify them into two categories: "Movement" and "Combat."
Movement-Based Actions
Action Collection
When the warrior is offstage, regardless of where the opponent is, we require the warrior to return to the stage to prevent self-destruction.
This involves 3 aerial buckets: 5 (Near Blast Zone), 7 (Under Stage), and 8 (Side Of Stage).
* Note: The background comes from the Tutorial mentioned earlier. The arrows in the image indicate the direction of the action and are for reference only. * Note: Action collection should be clean; do not collect actions that involve leaving the stage.
Config Settings
In the Simple Config, you can directly choose "Movement." However, for better customization, it's recommended to use the Advanced Config directly.
- Intensity: The method for setting Intensity will be introduced separately later.
- Buckets: As shown in the image, choose the bucket you are training.
- Focus Area: Position-based parameters:
  - Your position (must)
  - Raycast Platform Distance, Raycast Platform Type (optional, generally chosen in Bucket 7)
Combat-Based Actions
The goal is to direct attacks quickly and effectively towards the opponent, which is the core of game strategy.
This involves 5 buckets:
- 2 regular situations:
  - In the air: 6 (Safe Zone)
  - On the ground: 4 (Opponent Active)
- 3 special situations on the ground:
  - 1 Projectile Active
  - 2 Opponent Knockback
  - 3 Opponent Stunned
2 Regular Situations
In the in-game tutorial, we learned how to perform horizontal attacks. However, in the actual game, directions expand to 8 dimensions. Imagine having 8 relative positions available for launching hits against the opponent. Our task is to design what action to use for attack or defense at each relative position.
Focus Area
- Basic (generally select all):
  - Angle to opponent
  - Distance to opponent
    - Discrete Distance: Choosing this option helps better differentiate between closer and farther distances from the opponent. As shown in the image, red indicates a relatively close distance, and green indicates a relatively distant distance.
- Advanced: Other commonly used parameters:
- Direction: different facings to opponent
- Your Elemental Gauge and Discrete Elementals: Considering the special's charge
- Opponent action: The warrior will react based on the opponent's different actions.
- Your action: Your previous action. Choose this if teaching combos.
3 Special Situations on the Ground
Projectile Active, Opponent Stunned, Opponent Knockback: these three buckets can be referenced in the Simple Model Type video. The parameter settings approach is the same as for Opponent Active/Safe Zone.
For Projectile Active, in addition to the parameters based on combat, to track the projectile, you also need to select "Raycast Projectile Distance" and "Raycast Projectile On Target."
3. Setting "Intensity"
Resources
- The "Tutorial" mentioned earlier explains these parameters.
- Official Config Document (2022.12.24): https://docs.google.com/document/d/1adXwvDHEnrVZ5bUClWQoBQ8ETrSSKgG5q48YrogaFJs/edit
TL;DR:
Epochs:
- Increase if learning is insufficient; decrease for less learning.
Batch Size:
- Set to the minimum (16) if data is precise but unbalanced, or if you just want it to learn fast.
- Increase (e.g., 64) if data is slightly imprecise but balanced.
- If both imprecise and unbalanced, consider retraining.

Learning Rate:
- Maximize (0.01) for more learning but a risk of forgetting past knowledge.
- Minimize for more accurate learning with less impact on previous knowledge.

Lambda:
- Reduce to prioritize learning new things.

Data Cleaning:
- Enable "Remove Sparsity" unless you want the AI to learn idleness.
- For special cases, like teaching the warrior to use special moves when idle, refer to this tutorial video: https://discord.com/channels/1140682688651612291/1140683283626201098/1195467295913431111

Personal Experience:
- Initial training settings: 125 epochs, batch size 16, learning rate 0.01, lambda 0, data cleaning enabled.
- Prioritize Multistream; sometimes use Oversampling.
- Fine-tune subsequent training based on the theories above.
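For reference, those starting settings can be summarized in one place. This is only a notational summary of the recommendation above; the game exposes these as UI settings, not as a config file, and the field names are made up.

```python
# Recommended first-training settings from this guide (illustrative names).
initial_training_config = {
    "epochs": 125,
    "batch_size": 16,         # minimum: fast learning on precise data
    "learning_rate": 0.01,    # maximum: learns a lot, may overwrite old knowledge
    "lambda": 0,              # prioritize learning new things
    "remove_sparsity": True,  # data cleaning enabled
    "sampling": "multistream",  # sometimes "oversampling"
}
```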
IV. Probability Viewing: AI Inspector
The dashboard consists of "Direction + Action." Above the dashboard, you can see the "Next Action" – the action the warrior will take in its current state. The higher the probability, the more likely the warrior is to perform that action, indicating a quicker reaction. It's essential to note that when checking the Direction, the one with the highest visual representation may not have the highest numerical value. To determine the actual value, hover the mouse over the graphical representation, as shown below, where the highest direction is "Idle."
In the map, you can drag the warrior to view the probabilities of the warrior in different positions. Right-click on the warrior with the mouse to change the warrior's facing. The status bar below can change the warrior's state on the map.
When training the "Opponent Stunned, Opponent Knockback" bucket, you need to select the status below the opponent's status bar. If you are focusing on "Opponent action" in the Focus Zone, choose the action in the opponent's status bar. If you are focusing on "Your action" in the Focus Zone, choose the action in your own status bar. When training the "Projectile Active" Bucket, drag the projectile on the right side of the dashboard to check the status.
Next
The higher the probability, the faster the reaction. However, be cautious when the action probability reaches 100%. This may cause the warrior to be in a special case of "State Transition," resulting in unnecessary "Idle" states.
Explanation: In each state a fighter is in, there are different "possible transitions". For example, from falling state you cannot do low sweep because low sweep requires you to be on the ground. For the shield state, we do not allow you to directly transition to headbutt. So to do headbutt you have to first exit to another state and then do it from there (assuming that state allows you to do headbutt). This is the reason the fighter runs because "run" action is a valid state transition from shield. Source
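The transition rule can be pictured as a lookup table of allowed next states. The fragment below is hypothetical, built only from the examples quoted above; the game's full transition table is not documented here.

```python
# Hypothetical fragment of the state-transition rules described above.
valid_transitions = {
    "shield":  {"run", "jump", "idle"},        # headbutt is not allowed directly
    "falling": {"attack", "special", "idle"},  # no low sweep while airborne
    "run":     {"headbutt", "attack", "jump", "idle"},
}

def can_perform(state: str, action: str) -> bool:
    """An action can fire only if it is a valid transition from the current state."""
    return action in valid_transitions.get(state, set())

print(can_perform("shield", "headbutt"))  # False: must exit shield first
print(can_perform("shield", "run"))       # True: why the fighter runs out of shield
```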
V. Learn from Matches
After completing all the training, your model is preliminarily finished—congratulations! The warrior will step onto the arena alone and embark on its debut!
Next, we will learn about the strengths and weaknesses of the warrior from battles to continue refining the warrior's model.
In matches, besides appreciating the performance, pay attention to the following:
- Movement, i.e., Off the Stage: Observe how the warrior gets eliminated. Is it due to issues in the action settings at a certain position, or is it a normal death caused by a high percentage? The former is what we need to avoid and optimize.
- Combat: Analyze both sides' actions carefully. Observe which actions you and the opponent used in different states. Check which of your hits are less effective, how the opponent handles different actions, and so on.
The approach to battle analysis is similar to the thought process in the "Training", helping to have a more comprehensive understanding of the warrior's performance and making targeted improvements.
VI. Cheat Sheet
Training
1. Click "Collect" to collect actions.
2. "Map - Data Limit" is more user-friendly. Most players perform initial training on the "Arena" map.
3. Switch between the warrior and the dummy: Tab key (keyboard) / Home key (controller).
4. Use "Collect" to make the opponent loop a set of actions.
5. Instantly move the warrior to a specific location: Click "Settings" - SPAWN - choose the desired location on the map - On. Press the Enter key (keyboard) / Start key (controller) during training.

Inspector
1. Right-click on the fighter to change their direction. Drag the fighter and observe the changes in different positions and directions.
2. When satisfied with the training, click "Save."
3. In "Sparring" and "Simulation," use "Current Working Model."
4. If satisfied with a model, click "Compete." The model used in the rankings is the one marked as "competing."

Sparring / Ranked
1. Use the Throneroom map only for the top 2 or top 10 rankings.
2. There is a 30-second cooldown between matches. Replays are available for any match. Once the battle begins, you can see the winner on the leaderboard or by right-clicking the page - Inspect - Console. Also, if you encounter any errors or bugs, please send screenshots of the console to the Discord server.
Good luck! See you on the arena!
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28
O Planetinha (The Little Planet)
Green smoke coming in through my nostrils, and an off-key choir laid down a melodic base.
in the far reaches of the galaxy there was a little isolated planet. It was a happy planet.
The man dressed as a wizard began to appear from behind the green smoke.
The little planet had received three gifts, but its inhabitant, man, was in a state of confusion so great that it threatened to ruin them. Men had already enslaved the first gift, life; they fought against the second gift, death; and there were some who thought they should utterly destroy the third, love, and with that bring total disorder to the poor lost little planet, which was called Earth.
The off-key choir came in just before the "Earth," singing it over and over as if imitating an echo: "earth-earth-earthhh." After a dramatic pause, the man dressed as a wizard spoke again.
Earth, our mother ship.
At that moment I stepped away. In front of the stage where the wizard and his choir made their appeals to the crowd stood several stands covered with the traditional four-legged frame and white canvas. In every corner of the square there were people, people of the most varied kinds. Curious visitors drawn in by the green smoke and the little stalls, people taking advantage of the bustle to sell sweets without paying taxes, couples hugging on their feet to ward off the cold, the usual joggers on their run, people full of beard and dressed to imitate the hippies of the '60s, selling necklaces laid out on the ground, passersby young and old, dressed like clubgoers or like free-bus activists, groups of cycling enthusiasts.
The wizard was now making appeals for us, the men, inhabitants of the isolated little planet, to start seeing the little planet, our mother ship, as a whole, and to acquire the awareness that it was heading into troubled waters. The idea, reinforced by the event's logo, was that we should stop looking only at our own lives and think about the planet.
The event's logo, a stylized drawing of planet Earth, had nothing to do with its name, "Festival Andando de Bem com a Vida" ("Getting Along Well with Life Festival"), but it had been placed there strategically by the organizers, from whom the message spoken by the wizard seemed precisely to come.
That multitude of people who, like me, had their own worries could not see the chaotic picture they formed, each with their isolated acts, there in that isolated square, on that isolated planet. When the bearded hippie, practically an Osho, startled a couple trying to sell them a necklace, the quantity of chaos this added to the scene was gigantic. For a second I could see, as if from far away and above, with all the pretension this imaginative state carries, the complete scene of the chaos.
A mother ship, one of those from science fiction, inhabited by millions of people, drifted through space without direction, not knowing that just ahead a long spatial precipice awaited it, for the complete ruin of the ship and its inhabitants.
As accustomed to that ship as they had once been to their native land, the men lived their own lives without even remembering they were wandering through space. No one knew who was steering the ship, and no one cared.
At the end of the film we discover that it was the complete sum of the chaos each inhabitant produced, with their selfish gestures incapable of taking the whole into account, that determined the mother ship's direction. The effect, however, was not immediate, as it never is. There were real people in charge of steering the ship, but they were drunks, crooked people who spent their time fighting over control of the ship and the power it gave them. Power, status, money!
These drunks were drawn there by the corruption of the institutions and of common morality which, deep down, was caused by the population's selfishness, through a complex social process -- one that in the film appears simplified as the individual action of a public-entertainment magnate.
The man dressed as a wizard was one more agent of chaos: with his smoke-filled stage and his stroboscopic outfit, he believed he was doing good by alerting his audience, every Friday, that there was something that needed to be done, that each person there listening was responsible for the planet. His inability, however, to explain what needed to be done only increased the general anguish; the guilt he threw onto his audience, promptly accepted and passed on to each person's family and friends, tormented them daily and kept them from having a decent life at work and at home. Stressed, the families fought constantly, and the most insignificant motives were responsible for the most horrendous consequences.
The wizard, who after the show would take off his crooked hat and go drink beer at a dive, was responsible for a considerable share of the chaos carrying the ship toward its wretched end. In the film, however, one of the passersby who in passing heard a piece of the wizard's speech awakened in himself a transformative consciousness and, whether with superhuman powers then granted to him by an initiatic order of the good, or not, using only his ordinary human powers, the passerby -- a man in the first version of the film, a woman in the second -- manages to fix the institutions and remove the drunks from the controls of the machine. The question of public morality is ignored to shorten the plot, already two hours and forty minutes long, but it is understood that it too was resolved.
On the real planet Earth, which is not heading anywhere, held to the Sun by gravity, where people live their own lives because it is impossible for them to live anyone else's, where they have no global consciousness of anything because it is really only possible to be conscious of oneself, and where the majority, one way or another, are trying as best they can to do things right, the film is shown.
For most viewers, it is a film that evokes reflection, a powerful film. For a second they get the same glimpse of generalized chaos that I had there in that square. For a small portion of the viewers -- among them some of those in the wizard's audience, the wizard himself, the follower of Osho, the couple of two women and the brigadeiro vendor, joined also by television and newspaper critics and people who talk endlessly on the internet -- the film is a horror, the film is a vulgarization of a real and serious problem, the film resorts to the figure of the savior hero and sends a totally wrong message, that the majority of the population can keep living their own miserable little lives while waiting for a hero who comes down from Olympus and saves them from the mess they themselves made; it is a film that does an enormous disservice to the cause.
The day after the premiere, in a somewhat expensive bar near the square, at a table of eight people, among them six from the first group and eight from the second, they argue over whether or not the film will win the Oscar. I am at home sleeping and hear none of it.
-
@ 6ad3e2a3:c90b7740
2024-02-10 10:37:19
I tend to post what I think, and sometimes people don't appreciate some of what I'm saying. That's okay — they are free to unfollow, unsubscribe, mute or even block! But occasionally, they will go farther and try actually to deter me from posting, telling others in the public square what I'm saying is "dangerous" or "harmful". I've even been accused of "killing people" and having a "body count!"
A recent example of this happened a couple months ago when I posted the following observation about the colonoscopy procedure:
Some people took exception to this post, despite the risk/benefit disclaimer, arguing it was dangerous because it could deter people from getting this potentially life-saving procedure. In other words, let's say colonoscopies save lives, and if 100 people who read my post were convinced not to get one, maybe one of them would get a preventable form of cancer.
I understand the argument, but it’s a terrible one. For starters, each adult human being reading that post has agency. Whether he does or does not do something is never solely because he heard me make an observation. Anyone who reads a tweet from someone neither claiming special expertise in the subject matter nor even purporting to give advice as to risk/benefit and decides not to do something was almost certainly not going to do it anyway for a variety of reasons. I am not a puppet-master of other people with special powers to command them to do things.
To conclude someone is responsible for another adult’s behavior simply because of an observation is to invite all kinds of absurdities. If I post about how crunchy, salty and tangy a particular Doritos flavor is (I do not eat Doritos!), and someone struggling with a junk food addiction reads it and falls off the wagon, am I therefore responsible for the deleterious health effects of his binge?
If posting negative (even though accurate) observations about a health-promoting procedure makes us responsible for someone who subsequently forgoes that procedure, surely posting positive observations about unhealthy behaviors should be viewed similarly. Posting a photo of yourself with a glass of scotch and a cigar in a swanky Vegas lounge with the caption “My happy place” might just be the image that sends an alcoholic reaching for the bottle! Only virtuous, healthful and wholesome messages are permitted!
But could even a picture of yourself in tip-top shape at the beach, surrounded by your healthy and beautiful family cause single people to feel lonely and depressed? Better not post on social media at all. Better not even go out in public if you’re well-dressed and have a healthy and beautiful family. Virtually anything you do could trigger a negative emotion in someone, and who knows where it could lead?
And what if Donald Trump were to say something positive about colonoscopies? Surely, there’s a cohort of people who would reflexively start to question them because everything he says is bad. So even someone posting ostensibly healthful information could be killing people by virtue of their reaction to the poster himself.
But let’s set the absurd implications aside and address the argument on its merits. Even if we concede getting the colonoscopy would have led to early detection and a successful removal of colon cancer (which is not always the case), not getting the colonoscopy is certainly not the proximate cause of the cancer. It might be a but-for cause, but the proximate cause would be some combination of diet, environmental toxins and genetics. Surely, my tweet, no matter how much it stuck with him, cannot be credited with the yeoman’s work of industrial pollution, a pesticide, hormone and antibiotic laden corporate food supply and any risk-augmenting personal behaviors (smoking, eating a poor diet, not exercising) from the person himself.
Further, anyone declining an often free preventative medical procedure probably lacks trust in the medical system generally, and my tweet surely did not cause the medical establishment to offer such consistently wrongheaded advice during the covid era and squander so much of the good will and esteem in which people once held it.
Put differently, my posting about Doritos’ crunch can’t be isolated from the hundreds of millions in marketing from Pepsi-co subsidiary Frito Lay that preceded it.
But even this is not the most pertinent objection to the notion it’s wrong to express earnest observations about medical procedures or lifestyle choices. The primary reason people should feel free to share what they believe or observe is that the science is never all in. Just as eggs were once considered harmful due to their cholesterol content and margarine healthful for its lack thereof, we might one day discover that colonoscopies (and the removal of polyps during the procedure) are themselves sometimes the trigger for colon cancer.
Set aside whether that particular hypothetical mechanism during that particular procedure sounds farfetched, it isn’t hard to imagine more generally that many of the treatments deemed “dangerous” to question might not in fact be net beneficial.
The implications of this are twofold: (1) if someone were on the hook for observations that might deter ostensibly net beneficial procedures, should those procedures later turn out to have been net harmful, those who recommended them would similarly have to be on the hook for those harms. (And I am not talking about the medical establishment, which should very much be on the hook were that the case, but the ordinary person encouraging his friends and social media followers to follow establishment medical advice.)
And (2) if it were verboten to share earnest observations that contradict the establishment dictum, we would disable the very error-correction mechanism that allows us to update our body of knowledge. Put differently, the examples of expert consensus being wrong and in need of correction span the millennia, from Galileo challenging the geocentric paradigm of the church to Einstein updating Newton’s theories on gravity. Without the ability to question openly the knowledge of the day, whether scientific, medical or political, our myriad errors in attending to human affairs would not merely cause acute harm — they would be permanent.
In short, anyone who discourages you from expressing earnest observations is himself engaging in the most harmful speech of which we can conceive, for though his perceived end might be prevention of local harm, it actually precludes the possibility of progress itself.
-
@ 3bf0c63f:aefa459d
2024-01-15 11:15:06
Small problems the State creates for society that are not always remembered
- **vale-transporte (transit vouchers)**: transferring the cost of the employee's commute to a third party encourages workers to live far from where they work, since living nearby is usually more expensive and the savings on transport are nonexistent.
- **doctor's notes**: the right to miss work with a doctor's note creates a demand for that note in every situation, replacing free agreement between employer and employee and overloading doctors and health clinics with unnecessary visits from salaried workers with colds.
- **prisons**: with badly managed money, bureaucracy, and terrible resource allocation -- problems that competing private companies (or even ones facing no competition at all) would know how to solve far better -- the State runs short of prisons, the few existing ones crammed far beyond maximum capacity; and so, following the bizarre chain of responsibility that blames the judge who sentenced a criminal for his death in jail, judges stop sentencing criminals to prison and release them onto the streets.
- **the courts**: filing lawsuits is free, which makes the business of lawyers proliferate -- lawyers dedicated to creating legal disputes where none are needed and to clogging the courts, preventing them from doing what they most ought to do.
- **the courts**: since the courts obey only statutes and ignore personal agreements, written or not, people stop making agreements, always resort to state justice instead, and flood it with matters that would be far better resolved between neighbors.
- **civil laws**: the laws created by legislators ignore society's customs and are an incentive for people neither to respect nor to create social norms -- which would be faster, cheaper, and more satisfying ways of solving problems.
- **traffic laws**: the more traffic laws there are, the more enforcement work is delegated to police officers, who as a result stop fighting crime (after all, they don't really want to risk their lives fighting crime; enforcement duty is an excellent excuse to dodge that responsibility).
- **student financing**: a kind of subsidy to private colleges that causes ever more degree programs to be created, each ever emptier of any useful knowledge or skill and ever more useless.
- **heritage-listing laws**: these are an incentive for the owner of any "historic" area or building to destroy every trace of history in it before the authorities find out -- which might not happen if he could, for example, use, display, and profit from the history of that place without running the risk of actually losing his property.
- **urban zoning**: it makes cities more spread out, creating a gigantic need for cars, buses, and other means of transport to move people between residential zones and work zones.
- **urban zoning**: it makes people lose hours in traffic every day, which is not only a waste but an assault on their health, which would be far better served by a daily walk between home and work.
- **urban zoning**: it makes streets and homes less safe by creating enormous zones, residential as well as industrial, with no foot traffic at all.
- **compulsory schooling + a national school curriculum**: it makes all children dumber.
- **child labor laws**: they take from children the opportunity to learn useful trades and to bring in some money to help the family.
- **public procurement**: since market criteria don't exist to decide which service provider is best, committees of people are created to decide things. This gives the providers competing for the contract an incentive to try to buy the committee members. Beyond the corruption itself, this creates real problems: __(i)__ the choice of services ends up being the worst possible, since the winning provider is clearly more dedicated to buying committees than to doing good work (this problem affects so many areas, from road construction to the quality of school lunches, that it is impossible to list them here); __(ii)__ in the long run, the corrupting process eliminates the companies that actually delivered and leaves only the corrupt ones competing, so quality tends to worsen progressively.
- **cartels**: the State generally creates, and then becomes hostage to, various interest groups. The case of taxi drivers versus Uber is the fashionable one today (and the one that shows how States behave the same way the world over).
- **fines**: when an individual or company commits financial fraud, or causes some unintentional material damage, the victims are the people who suffered the damage or lost money; but the State always has laws providing for fines against those responsible. State justice is always very strict and swift in applying these fines, yet lax and vague about compensating the victims. What generally happens is that the State levies an enormous fine on the party responsible, stripping them of the resources they had available to compensate the victims, and then withdraws from the case, leaving the victims helpless.
- **expropriation**: the State can take any property from any person in exchange for compensation that is necessarily lower than the property's value to its current owner (otherwise he would have sold it voluntarily).
- **unemployment insurance**: if, for example, there is a minimum period of one year before a worker is entitled to unemployment benefits, this encourages him to plan to stay only one year in each job (a year to be followed by a period of paid unemployment), killing any possibility of learning, of gaining experience at that particular company, or of rising through its ranks.
- **state pensions**: social security suffers from every miscalculation in the world, and it hardly matters that it is a horrible way to save money, because it comes with bizarre longevity guarantees provided by the State, besides being compulsory. This serves to plant in the public imagination the idea of __retirement__, a magical era when every day will be a weekend. The idea of retirement leads people not to worry about having work that makes sense, but merely some job, any job, that will let them retire.
- **impossible regulation**: thousands of things are prohibited; there are regulations on the most minute aspects of every enterprise, building, or space. If all these regulations were enforced, production would be impossible and everyone would die; therefore, they are not enforced. The State, however, or an individual agent vested with state power, can, if it wishes, enforce all of them against a citizen it has made its enemy. Anyone can live an entire life without complying with even 10% of state regulations, but will also live all that time afraid of becoming a target of their enforcement, in a state of psychological terror.
- **perversion of standards**: for many things about which society would spontaneously arrive at some "reasonable" value or behavior, the State dictates rules. These rules are often not even mandatory; they are more like "suggestions" or limits, such as the minimum wage or the 44-hour work week. Society, however, starts treating these values as the norm. Job offers that depart from the 44-hour-week rule, for example, are rare.
- **inflation**: raising prices is difficult and embarrassing for companies; asking for a raise is difficult and embarrassing for employees. Inflation forces people to do both, but the increase is not automatic, as some economists may imagine (while certain others are quite pleased that the process is slow and difficult).
- **inflation**: inflation destroys people's ability to compare prices between competitors using their own memory.
- **inflation**: inflation destroys companies' profit-and-loss calculations and enormously harms the business decisions that would be based on them.
- **inflation**: inflation redistributes wealth from the poorest, those furthest from the financial system, to the richest: the banks and the mega-corporations.
- **inflation**: inflation encourages indebtedness and consumerism.
- **garbage**: by providing garbage collection and disposal "free for everyone," the State incentivizes the creation of garbage. If people had to pay to have their garbage collected, they (and consequently companies) would try much harder to make things with less plastic, less packaging, fewer bags.
- **laws against financial crimes**: by legislating to make access to the financial system harder for criminals, the difficulty and cost of access to that same system for honest people grows absurdly, leaving an enormous share of people unable to use it, to everyone's detriment -- and in the end the big criminals still manage to get around all of it.
-
@ f977c464:32fcbe00
2024-01-11 18:47:47
On the day he saw himself in a mirror for the first time, he understood that he was just like the others. And yet every human being was supposed to be unique. When angry, say, one of his eyebrows might furrow a few millimeters less than the other; or when happy, his lip might ripple at an angle that was his alone. And if none of that was possible, then at the very least a light only he could possess might shine within his eyes. Any one of those millions of tiny things so ordinary that no one ever notices them. Whatever it might be.
But looking at his reflection he could find none of these, and on that day, which had begun no differently from any other day, in an ordinary metro station where he was waiting for his car to arrive so he could go to work, the illusion he had been lost in began to dissolve.
First his skin came off. It didn't exactly come off, in fact; rather it turned into sparks that leapt from his body, and a moment later it died out, turned to ash, and scattered into the air. And behind it, just before vanishing, visible only faintly and for a short time, it left the corpses of multicolored fairies keening the destruction of a soul. Contrary to expectation, the smell of dust spread through the air.
He was horrified, of course. They were horrified. Fifty workers tearing at their clothes in panic. And the cause of everything was that car.
His hair fell out too. Every strand, before it could reach the ground, split in two at every centimeter until it was gone.
In a world where every surface was matte and nothing reflected, where water ran black and you could look at yourself only through cameras, to see yourself for the first time in a mirror installed inside a metro car.
The whites of his eyes evaporated and scattered into the air; his lenses melted and spread to fill the emptied space. Eyes created not to see the truth, and for that reason not ready to see it, never to be ready.
One might think everything ended in that moment. A deep darkness, and death. The end of that moment in which seeing was seeing.
They were dead when I got there.
Broken, I mean to say.
It wasn't possible for me to fit their memory units into new carriers. Physically they were in perfect condition, and the ones that weren't I could have repaired, but in all that chaos they had reprogrammed themselves and modified their own inputs.
He flung one of the memory units across the table. They were sitting in a packed bar. He and his friend.
The things we do just so these androids who think they're human don't get traumatized and lose their minds; it defies comprehension.
He pointed behind himself with his hand.
According to what the police said, someone placed a mirror inside the car. And these poor things cracked the moment the door opened and they suddenly saw their reflections.
His friend asked what all this made him feel. That is, seeing androids that broken, androids that believed they were human, lying on the ground having torn themselves apart -- hadn't that shaken him?
No; in the end, they're things created for a specific purpose. When a good computer of mine breaks down I'm upset, because I paid for it out of my own pocket. These belong to the state. What's it to me?
His friend nodded with understanding and took a sip of his water. He loosened his tie a little.
Are you sure you don't want a beer?
He said he didn't want one. So, really, why were these androids going mad?
Simple. When they code their artificial intelligence, they write things in. The software people. You know, I'm in hardware. And these things believe they're human. Look at them.
He lowered his voice.
They all look like the crash-test dummies they put in cars. They don't even have mouths or noses, but that one over there, for instance, has been grooming his beard since we arrived. No, the others all think he really has a beard, that's the insane part.
His friend said he couldn't see what this had to do with their going mad. He went on in his normal voice.
Don't you get it? Their lenses can't distinguish a mirror or anything. They see themselves, thunk, just like that. As they really are...
And why is that? What was the need?
How should I know, man! These are questions for the hereafter.
He drifted off, staring into his beer. Then he leaned over the table, drawing right up close to his friend. A blurry man, like someone at the end of a tunnel, his shape and features indistinct.
Where the hell do I know you from, anyway? Who are you?
They removed the memory unit from the machine. There were two of them. The officers in charge of the investigation.
─ Are we starting from the beginning, asked the first officer, the one holding the memory unit.
─ Let's try it once more, but this time start by asking about the mirror directly, answered the second officer.
─ Agreed. It's running well enough.
While the simulation was loading, the second officer, standing a little behind and scratching his forehead, couldn't keep himself from asking:
─ Why would they send these androids to a scene like that in the first place? It's obvious they'll get fried. Such waste. If we went and looked ourselves, they wouldn't end up having to smash the mirror and destroy the evidence either.
The other officer was about to turn slightly in his chair when the technician answered the question through the computer's speaker.
Is there a single thing we do around here that isn't crooked, brother?
But it was not an end. When all the illusion upon them had dissolved, and they were left with their naked, sexless bodies, each identical to the other, it was the world's turn.
They fell to the ground. The moment their hands -- black tourmaline like their entire bodies, their knuckles of steel -- touched the floor, the metro's ground came apart.
The floor tiles had been white before, and very bright. The fluorescent lights above reflected off them exactly as they were, lighting a station where there was not a single stain and not a mote of dust in the air.
Announcements were posted on the walls. There was, for example, the cheerful poster of a techno-blues festival starting tomorrow evening at 20:00 at the culture center. Next to it, a larger notice in yellow type, framed with horizontal black dashes, showing a stick figure falling from a platform: "Attention! Do not cross the yellow line!" A little beyond it, the official daily newspaper; beyond that, posters for an action film and for another, a romantic comedy, and various small-print notices of things to do and not to do... The wall was one long notice board, repeating every ten or fifteen meters.
The whole station was about a hundred meters long. Around ten meters wide.
Before them stood the car, that wretched mirror visible through its open door. The metro was too long to fit the station. It stretched out with the sharpness of a sword, interrupted here and there by the couplings between the cars.
Since none of the cars had windows, the inside of the metro, and whoever was in it, was unknown.
Then the tiles rose, breaking into their particles. In the fluorescent light they drowned everything in dust and buried the place under a gray fog. A very short moment. They did not ripple the posters. They had no time to ripple them. At most they tore them from their places. The light resisted, going out and coming back a few times. When it went out for the last time, it did not return.
Even so, the place was bright. With a pale red light that spread evenly everywhere.
The floor had turned entirely into wire mesh. Beneath it, an iron skeleton reinforced with crosswise struts. The light could not reach more than a few meters down. A chasm running down into infinity.
The same mesh and iron skeleton had taken the wall's place. Behind it, a corridor of iron plates bolted to one another, where thin wisps of steam sometimes escaped from the joints of the pipes running along the top and, after hanging there a while, drifted away on the heavy, oily air.
On the other side, a rusted, beat-up metro, its broken windows covered over with iron plates. In the mirror facing the door, everything was reflected exactly as it was.
In a house that resembled the inside of a container -- though, since the city itself was made of containers attached to one another, "resembled" would not be the right word; in a house that was, plainly, the inside of a container -- he was trying to light the lumps shaped from semi-solid waste fat that had been set on the table to look like candles. On his head, a gray-and-black wig made from animal hair. From the same hair he had also made himself a bushy mustache.
On him, a smart suit made of blue garbage bags, with a tie.
In place of legs, the table rested on parts gathered from here and there: a car's drive shaft, tin cans stacked on top of one another with writing that could not be read, blank books, bales of blank newspapers... Nothing had writing on it anymore; there was no need anyway, because the central data bank filled everything in for humans imperceptibly, synthesizing the data entering through their lenses. For the androids, I mean. If naming them differently would make any difference.
Not for their lenses, though. Their connections had been cut long ago.
─ Darling, dinner is ready, he called to his wife in the bedroom.
On the table, flat plates in place of dishes, bent plates in place of glasses, sharpened plates in place of forks and knives.
His wife paused at the living-room door and adjusted her wig with her hand; it reached only to her ears, and like her husband's it was lifeless, nothing but the hairs of dead animals. She had tried coloring her lip -- or rather, the place where her lip should have been -- with a layer of dark red grease. She had put some on her cheeks, too.
─ How does it look, she asked.
Her voice was flat, but you could have sworn you sensed a faint joy in it.
On her, a two-piece outfit made by stuffing garbage bags with blank newsprint.
─ You look beautiful, her husband said, straightening his tie.
─ So do you, my love.
She came close and kissed her husband. And he kissed her. Then, taking her gently by the hand and pulling her chair back, he helped her sit down.
There was nothing on the table by way of food. There was no need anyway.
Until the container's door was kicked open with a crash and two officers came in, they told each other stories. What they had done that day. How they had left work early and strolled on the bright green grass, how they had flown kites, how the wife had walked around for hours and worn herself out to find that dress, how her husband had gone back to work for a short while and resolved the crisis with a masterful move, and how, after he returned to her side, the ice cream at the new ice-cream shop where they sat in the shopping mall was so delicious that they were afraid their throats would get sore...
In the evening they could watch a film; a good film would be playing on the television -- a blank, matte plate.
Two officers. Their naked bodies identical to each other. They had their guns pointed at them. When, in the candlelight, they saw the couple sitting at a table laid with a spotless cloth, wine in their glasses, the turkey at the center of the table still untouched, they faltered.
They did not look at all like they could knowingly harm androids.
─ You have the right to remain silent, shouted the second officer as he came in. Anything you say...
He could not finish his sentence. That thing in the bedroom, the one he could see over the table, the android making the exact same movements as he was -- or was it, perhaps, his reflection in a mirror?
All the illusion began to dissolve in that moment.
Note: This story was first published in Esrarengiz Hikâyeler in 2020.
-
@ 8fb140b4:f948000c
2023-11-21 21:37:48Embarking on the journey of operating your own Lightning node on the Bitcoin Layer 2 network is more than just a tech-savvy endeavor; it's a step into a realm of financial autonomy and cutting-edge innovation. By running a node, you become a vital part of a revolutionary movement that's reshaping how we think about money and digital transactions. This role not only offers a unique perspective on blockchain technology but also places you at the heart of a community dedicated to decentralization and network resilience. Beyond the technicalities, it's about embracing a new era of digital finance, where you contribute directly to the network's security, efficiency, and growth, all while gaining personal satisfaction and potentially lucrative rewards.
In essence, running your own Lightning node is a powerful way to engage with the forefront of blockchain technology, assert financial independence, and contribute to a more decentralized and efficient Bitcoin network. It's an adventure that offers both personal and communal benefits, from gaining in-depth tech knowledge to earning a place in the evolving landscape of cryptocurrency.
Running your own Lightning node for the Bitcoin Layer 2 network can be an empowering and beneficial endeavor. Here are 10 reasons why you might consider taking on this task:
-
Direct Contribution to Decentralization: Operating a node is a direct action towards decentralizing the Bitcoin network, crucial for its security and resistance to control or censorship by any single entity.
-
Financial Autonomy: Owning a node gives you complete control over your financial transactions on the network, free from reliance on third-party services, which can be subject to fees, restrictions, or outages.
-
Advanced Network Participation: As a node operator, you're not just a passive participant but an active player in shaping the network, influencing its efficiency and scalability through direct involvement.
-
Potential for Higher Revenue: With strategic management and optimal channel funding, your node can become a preferred route for transactions, potentially increasing the routing fees you can earn.
-
Cutting-Edge Technological Engagement: Running a node puts you at the forefront of blockchain and bitcoin technology, offering insights into future developments and innovations.
-
Strengthened Network Security: Each new node adds to the robustness of the Bitcoin network, making it more resilient against attacks and failures, thus contributing to the overall security of the ecosystem.
-
Personalized Fee Structures: You have the flexibility to set your own fee policies, which can balance earning potential with the service you provide to the network.
-
Empowerment Through Knowledge: The process of setting up and managing a node provides deep learning opportunities, empowering you with knowledge that can be applied in various areas of blockchain and fintech.
-
Boosting Transaction Capacity: By running a node, you help to increase the overall capacity of the Lightning Network, enabling more transactions to be processed quickly and at lower costs.
-
Community Leadership and Reputation: As an active node operator, you gain recognition within the Bitcoin community, which can lead to collaborative opportunities and a position of thought leadership in the space.
These reasons demonstrate the impactful and transformative nature of running a Lightning node, appealing to those who are deeply invested in the principles of bitcoin and wish to actively shape its future. Jump aboard, and embrace the journey toward full independence. 🐶🐾🫡🚀🚀🚀
-
-
@ 3bf0c63f:aefa459d
2024-01-29 02:19:25Nostr: a quick introduction, attempt #1
Nostr doesn't have a material existence, it is not a website or an app. Nostr is just a description of what kind of messages each computer can send to the others and vice-versa. It's a very simple thing, but the fact that such a description exists allows different apps to connect to different servers automatically, without people having to talk behind the scenes or sign contracts or anything like that.
When you use a Nostr client, that is what happens: your client will connect to a bunch of servers, called relays, and all these relays will speak the same "language", so your client will be able to publish notes to them all and also download notes from other people.
That's basically what Nostr is: this communication layer between the client you run on your phone or desktop computer and the relay that someone else is running on some server somewhere. There is no central authority dictating who can connect to whom or even anyone who knows for sure where each note is stored.
If you think about it, Nostr is very much like the internet itself: there are millions of websites out there, and basically anyone can run a new one, and there are websites that allow you to store and publish your stuff on them.
The added benefit of Nostr is that this unified "language" that all Nostr clients speak allows them to switch very easily and cleanly between relays. So if one relay decides to ban someone, that person can switch to publishing to other relays and their audience will quickly follow them there. Likewise, it becomes much easier for relays to impose any restrictions they want on their users: no relay has to uphold a moral ground of "absolute free speech"; each relay can decide to delete notes or ban users for no reason, or even only store notes from a preselected set of people, and no one will be entitled to complain about that.
There are some bad things about this design: on Nostr there are no guarantees that relays will have the notes you want to read or that they will store the notes you're sending to them. We can't just assume all relays will have everything — much to the contrary, as Nostr grows more relays will exist and people will tend to publish to a small set of all the relays, so depending on the decisions each client takes when publishing and when fetching notes, users may see a different set of replies to a note, for example, and be confused.
Another problem with the idea of publishing to multiple servers is that they may be run by all sorts of malicious people that may edit your notes. Since no one wants to see garbage published under their name, Nostr fixes that by requiring notes to have a cryptographic signature. This signature is attached to the note and verified by everybody at all times, which ensures the notes weren't tampered with (if any part of the note is changed, even by a single character, that would cause the signature to become invalid and the note would be dropped). The fix is perfect, except for the fact that it introduces the requirement that each user must now hold this 63-character code that starts with "nsec1", which they must not reveal to anyone. Although annoying, this requirement brings another benefit: users can automatically have the same identity in many different contexts and even use their Nostr identity to log in to non-Nostr websites easily, without having to rely on any third party.
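To make the key-and-signature part concrete, here is a minimal sketch of how a note's id is computed from its contents; the serialization follows the scheme described above, while `schnorr_sign` is a stand-in for whatever BIP-340 signing routine a real client would use, not a specific library call.

```python
import hashlib
import json
import time

def event_id(pubkey: str, created_at: int, kind: int, tags: list, content: str) -> str:
    # The event is serialized as a JSON array and hashed with sha256;
    # the hex digest is the id that the secret key then signs.
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode()).hexdigest()

note = {
    "pubkey": "ab" * 32,            # hypothetical hex-encoded public key
    "created_at": int(time.time()),
    "kind": 1,                      # a plain text note
    "tags": [],
    "content": "hello from a toy client",
}
note["id"] = event_id(**note)
# note["sig"] = schnorr_sign(note["id"], seckey)   # stand-in, not a real API
print(note["id"])
```

Changing any single character of the serialized fields changes the id, which is why a relay that edits a note ends up invalidating its signature.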
To conclude: Nostr is like the internet (or the internet of some decades ago): a little chaotic, but very open. It is better than the internet because it is structured and actions can be automated, but, like on the internet itself, nothing is guaranteed to work at all times, and users may have to do some manual work from time to time to fix things. Plus, there is the cryptographic key stuff, which is painful, but cool.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Parallel Chains
We want merged-mined blockchains. We want them because it is possible to do things in them that aren't doable in the normal Bitcoin blockchain because it is rightfully too expensive, but there are other things beside the world money that could benefit from a "distributed ledger" -- just like people believed in 2013 --, like issued assets and domain names (just the most obvious examples).
On the other hand we can't have -- like people believed in 2013 -- a copy of Bitcoin for every little idea with its own native token that is mined by proof-of-work and must get off the ground from being completely valueless into having some value by way of a miracle that operated only once with Bitcoin.
It's also not a good idea to have blockchains with custom merged-mining protocol (like Namecoin and Rootstock) that require Bitcoin miners to run their software and be an active participant and miner for that other network besides Bitcoin, because it's too cumbersome for everybody.
Luckily Ruben Somsen invented this protocol for blind merged-mining that solves the issue above. Although it doesn't solve the fact that each parallel chain still needs some form of "native" token to pay miners -- or it must use another method that doesn't use a native token, such as trusted payments outside the chain.
How does it work
With the `SIGHASH_NOINPUT`/`SIGHASH_ANYPREVOUT` soft-fork[^eltoo] it becomes possible to create presigned transactions that aren't related to any previous UTXO.
Then you create a long sequence of transactions (sufficient to last for many many years), each with an `nLockTime` of 1 and each spending the next (you create them from the last to the first). Since their `scriptSig` (the unlocking script) will use `SIGHASH_ANYPREVOUT`, you can obtain a transaction id/hash that doesn't include the previous TXO. You can, for example, in a sequence of transactions `A0-->B` (B spends output 0 from A), include the signature for "spending A0 on B" inside the `scriptPubKey` (the locking script) of "A0".
With the contraption described above it is possible to make that long string of transactions everybody will know (and know how to generate), but each transaction can only be spent by the next previously decided transaction, no matter what anyone does, and there must always be at least one block of difference between them. (A toy sketch of this construction is included at the end of this post.)
Then you combine it with `RBF`, `SIGHASH_SINGLE` and `SIGHASH_ANYONECANPAY` so parallel chain miners can add inputs and outputs to be able to compete on fees by including their own outputs and getting change back, while at the same time writing a hash of the parallel block in the change output. Then you get everything working perfectly: with everybody trying to spend the same output from the long string, each with a different parallel block hash, only the highest bidder will get the transaction included on the Bitcoin chain, and thus only one parallel block will be mined.
See also
[^eltoo]: The same thing used in Eltoo.
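Here is the toy sketch of the presigned-chain construction mentioned above. `make_tx`, `lock_output` and `sign_anyprevout` are invented helpers standing in for a real transaction library; the point is only the backwards order of construction.

```python
# Build the long string of presigned transactions from the last to the first.
# All helpers here are hypothetical; no real Bitcoin library is used.

def build_presigned_chain(n_steps, key):
    txs = []
    next_tx = None
    for _ in range(n_steps):
        tx = make_tx(nlocktime=1)            # each transaction has nLockTime of 1
        if next_tx is not None:
            # With SIGHASH_ANYPREVOUT the signature doesn't commit to the
            # previous txid, so "spend my output 0 on next_tx" can be signed
            # now and embedded in this transaction's own locking script.
            sig = sign_anyprevout(key, spending=next_tx)
            tx.lock_output(0, to_signature=sig)
        txs.append(tx)
        next_tx = tx
    txs.reverse()                            # present the chain first-to-last
    return txs                               # anyone can regenerate this list
```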
-
@ 8fb140b4:f948000c
2023-11-18 23:28:31Chef's notes
Serving these two dishes together will create a delightful centerpiece for your Thanksgiving meal, offering a perfect blend of traditional flavors with a homemade touch.
Details
- ⏲️ Prep time: 30 min
- 🍳 Cook time: 3 - 3.5 hours (about 15 minutes per pound for a 12-14 lb turkey)
- 🍽️ Servings: 4-6
Ingredients
- 1 whole turkey (about 12-14 lbs), thawed and ready to cook
- 1 cup unsalted butter, softened
- 2 tablespoons fresh thyme, chopped
- 2 tablespoons fresh rosemary, chopped
- 2 tablespoons fresh sage, chopped
- Salt and freshly ground black pepper
- 1 onion, quartered
- 1 lemon, halved
- 2-3 cloves of garlic
- Apple and Sage Stuffing
- 1 loaf of crusty bread, cut into cubes
- 2 apples, cored and chopped
- 1 onion, diced
- 2 stalks celery, diced
- 3 cloves garlic, minced
- 1/4 cup fresh sage, chopped
- 1/2 cup unsalted butter
- 2 cups chicken broth
- Salt and pepper, to taste
Directions
- Preheat the Oven: Set your oven to 325°F (165°C).
- Prepare the Herb Butter: Mix the softened butter with the chopped thyme, rosemary, and sage. Season with salt and pepper.
- Prepare the Turkey: Remove any giblets from the turkey and pat it dry. Loosen the skin and spread a generous amount of herb butter under and over the skin.
- Add Aromatics: Inside the turkey cavity, place the quartered onion, lemon halves, and garlic cloves.
- Roast: Place the turkey in a roasting pan. Tent with aluminum foil and roast. A general guideline is about 15 minutes per pound, or until the internal temperature reaches 165°F (74°C) at the thickest part of the thigh.
- Rest and Serve: Let the turkey rest for at least 20 minutes before carving.
- Next: Apple and Sage Stuffing
- Dry the Bread: Spread the bread cubes on a baking sheet and let them dry overnight, or toast them in the oven.
- Cook the Vegetables: In a large skillet, melt the butter and cook the onion, celery, and garlic until soft.
- Combine Ingredients: Add the apples, sage, and bread cubes to the skillet. Stir in the chicken broth until the mixture is moist. Season with salt and pepper.
- Bake: Transfer the stuffing to a baking dish and bake at 350°F (175°C) for about 30-40 minutes, until golden brown on top.
-
@ 8fb140b4:f948000c
2023-11-02 01:13:01Testing a brand new YakiHonne native client for iOS. Smooth as butter (not penis butter 🤣🍆🧈) with great visual experience and intuitive navigation. Amazing work by the team behind it!
* lists
* work
Bold text work!
Images could have used nostr.build instead of raw S3 from us-east-1 region.
Very impressive! You can even save the draft and continue later, before posting the long-form note!
🐶🐾🤯🤯🤯🫂💜
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Blockchains are not decentralized data storage
People are used to saying and thinking that blockchains provide immutable data storage. Then many times they add a caveat that says blockchains are very expensive, so we can't really store too much data on them, but we can still store some data if we really want and are ok with paying for it.
But the fact is that blockchains cannot ever be used to store anything. The purpose of blockchains is to keep track of some state that everybody must agree upon at all times, and arbitrary data that anyone may have wanted to backup there is not relevant to anyone else[^relevant] and thus there are no incentives for anyone else to keep track of that. In other words: if you backup your personal pictures as `OP_RETURN` outputs on Bitcoin, people may delete that and your backup will be void[^op-return-invalid-outputs].
Another thing blockchains supposedly do is to "broadcast" something. For example, nodes may delete the `OP_RETURN` outputs, but at least they have to verify these first, and spread them over the network, so you can broadcast your data and be sure everybody will get it. About this we can say two things: 1) if this happens, it's not a property of blockchains, but of the Bitcoin transaction sharing network that operates outside of the blockchain; 2) if you try to use that network for purposes that are irrelevant to the functioning of the Bitcoin protocol, there is no incentive for other nodes to cooperate and they may ignore you.
The above points may sound weird, and you may be prompted to answer: "but you can do all that today and there is no actual mechanism to stop anyone from broadcasting irrelevant crap!", and that is true. My point here is only that if you're thinking about blockchains as being this data-broadcast-storage mechanism you're thinking about them wrong; that is not an essential part of any blockchain. In other words: the incentives are not aligned for blockchains to be used like that (unless you come up with a scheme that makes data from everyone else relevant to everybody), so in the long term such things are not expected to work, and insisting on doing them will result either in the crash of your application or protocol that stores data on the blockchain, or in the death of the given blockchain (I hope Bitcoin haters don't read this).
(This is a counterpoint to myself on idea: Rumple, which was a protocol idea that relied on a blockchain storing irrelevant data.)
[^relevant]: For example, all Bitcoin transactions are relevant to all Bitcoin users, because as a user the total supply and the absence of double-spends are relevant, and also the fact that any of these transactions may end up being ancestors of transactions that you might receive in the future.
[^op-return-invalid-outputs]: Of course you can still backup your pictures as invalid `P2PKH` outputs or something like that; then it will be harder for people to spot your data as irrelevant, but this is not a feature, it's a bug of Bitcoin that enables someone to spam other nodes in a way they can't detect. If people started doing this a lot it would break Bitcoin.
-
@ fa0165a0:03397073
2023-10-06 19:25:08I just tested building a browser plugin, it was easier than I thought. Here I'll walk you through the steps of creating a minimal working example of a browser plugin, a.k.a. the "Hello World" of browser plugins.
First of all there are two main browser platforms out there, Chromium and Mozilla. They do some things a little differently, but similar enough that we can build a plugin that works on both. This plugin will work in both, I'll describe the firefox version, but the chromium version is very similar.
What is a browser plugin?
Simply put, a browser plugin is a program that runs in the browser. It can do things like modify the content of a webpage, or add new functionality to the browser. It's a way to extend the browser with custom functionality. Common examples are ad blockers, password managers, and video downloaders.
In technical terms, they are plugins that can insert html-css-js into your browser experience.
How to build a browser plugin
Step 0: Basics
You'll need a computer, a text editor and a browser. For testing and development I personally think that the firefox developer edition is the easiest to work with. But any Chrome based browser will also do.
Create a working directory on your computer, name it anything you like. I'll call mine `hello-world-browser-plugin`. Open the directory and create a file called `manifest.json`. This is the most important file of your plugin, and it must be named exactly right.
Step 1: manifest.json
After creation, open your file `manifest.json` in your text editor and paste the following code:

```json
{
  "manifest_version": 3,
  "name": "Hello World",
  "version": "1.0",
  "description": "A simple 'Hello World' browser extension",
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["hello.js"] //The name of your script file.
      // "css": ["hello.css"] //The name of your css file.
    }
  ]
}
```

If you wonder what the `json` file format is, it's a normal text file with a special syntax such that a computer can easily read it. It's the `json` syntax you see in the code above. Let's go through what's being said here. (If you are not interested, just skip to the next step; after pasting this we are done here.)
- `manifest_version`: This is the version of the manifest file format. It's currently at version 3, and it's the latest version. It's important that you set this to 3, otherwise your plugin won't work.
- `name`: This is the name of your plugin. It can be anything you like.
- `version`: This is the version of your plugin. It can be anything you like.
- `description`: This is the description of your plugin. It can be anything you like.
- `content_scripts`: This is where you define what your plugin does. It's a list of scripts that will be executed when the browser loads a webpage. In this case we have one script, called `hello.js`. It's the script that we'll create in the next step.
- `matches`: This is a list of urls that the script will be executed on. In this case we have `<all_urls>`, which means that the script will be executed on all urls. You can also specify a specific url, like `https://brave.com/*`, which means that the script will only be executed on urls that start with `https://brave.com/`.
- `js`: This is a list of javascript files that will be executed. In this case we have one file, called `hello.js`. It's the script that we'll create in the next step.
- `css`: This is where you can add a list of css files that will be executed. In this case we have none, but you can add css files here if you want to.
- `//`: Text following these two characters is a comment. Comments are ignored by the computer; you can add them anywhere you like, and they are a good way to document your code. (Strictly speaking, comments are not part of standard JSON, but browsers tolerate them in extension manifests; Firefox documents this explicitly.)
Step 2: hello.js
Now it's time to create another file in your project folder. This time we'll call it `hello.js`. When created, open it in your text editor and paste the following code:

```js
console.log("Hello World!");
```

That's javascript code, and it's what will be executed when you run your plugin. It's a simple `console.log` statement, which will print the text "Hello World!" to the console. The console is a place where the browser prints out messages, and it's a good place to start when debugging your plugin.
Step 3: Load and launch your plugin
Firefox
Now it's time to load your plugin into your browser. Open your browser and go to the url `about:debugging#/runtime/this-firefox`. Click the button that says "Load Temporary Add-on...". A file dialog will open; navigate to your project folder and select the file `manifest.json`. Your plugin should now be loaded and running.
Go to a website, any website, and open the inspector, then navigate to the console. You'll find the inspector by right-clicking anywhere within the webpage and clicking "Inspector" in the drop-down menu. When opening the console you might see some log messages from the site you visited and... you should see the text "Hello World!" printed there, from our little plugin! Congratulations!
Chrome
Open your browser and go to the url `chrome://extensions/`. Click the button that says "Load unpacked". A file dialog will open; navigate to your project folder and select the folder `hello-world-browser-plugin`. Your plugin should now be loaded and running.
Note the difference of selecting the file `manifest.json` in firefox, and selecting the folder `hello-world-browser-plugin` in chrome. Otherwise, the process is the same. So I'll repeat the same text as above (for those who skipped ahead..):
Go to a website, any website, and open the inspector, then navigate to the console. You'll find the inspector by right-clicking anywhere within the webpage and clicking "Inspector" in the drop-down menu. When opening the console you might see some log messages from the site you visited and... you should see the text "Hello World!" printed there, from our little plugin! Congratulations!
As you can see this isn't as complicated as one might think. Having performed a "Hello-World!"-project is a very useful and valuable first step. These setup steps are the basics for any browser plugin, and you can build on this to create more advanced plugins.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Timaeus
The four elements, the sphere as the most perfect shape, the five senses, pain as disturbance and pleasure as restoration, the demiurge who creates as best he can with the matter he has, the concept of hard and soft: all these things that they teach in schools and in cartoons, or that enter our consciousness who knows how, as if they were a truth, but always a provisional, childish truth -- like the childish names for the fingers (mata-piolho, fura-bolo, etc.) -- which even children know isn't really true.
It seems that all these things are in this book. Maybe even the classification of the five fingers as mata-piolho and so on, but maybe I slept through that part.
I wonder whether these things weren't traditionally taught in the Middle Ages as absolute truth (since, after all, there was Plato saying them, in his only work) and persisted until today in a tradition that limps along against everything and everyone, with no one knowing how: a body of knowledge that nobody believes in but still finds beautiful and harmonious anyway, and which, stripped of its origins, its primary sources and all its context, comes to disturb children's understanding of the world.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Rede Relâmpago
When referring to the Lightning Network of O que é Bitcoin?, we Brazilians and Portuguese should use the term "Relâmpago" or "Rede Relâmpago". "Relâmpago" is a beautiful and fitting word, and one all our compatriots can easily pronounce. Enough with unnecessary anglicisms.
An example of a hypothetical conversation in Brazil using this nomenclature:
– Posso pagar com Relâmpago? – Opa, claro! Vou gerar um boleto aqui pra você. ("Can I pay with Relâmpago?" "Sure! I'll generate a boleto for you.")
Notice how much more natural and easy this is than the alternative:
– Posso pagar com láitenim? – Leite ninho? (The Portuguese-accented "Lightning" gets misheard as "Leite Ninho", a brand of powdered milk.)
-
@ 8fb140b4:f948000c
2023-08-22 12:14:34As the title states, scratch behind my ear and you get it. 🐶🐾🫡
-
@ 8fb140b4:f948000c
2023-07-30 00:35:01Test Bounty Note
-
@ 8fb140b4:f948000c
2023-07-22 09:39:48Intro
This short tutorial will help you set up your own Nostr Wallet Connect (NWC) on your own LND Node that is not using Umbrel. If you are a user of Umbrel, you should use their version of NWC.
Requirements
You need to have a working installation of LND with established channels and connectivity to the internet. NWC in itself is fairly light and will not consume a lot of resources. You will also want to ensure that you have a working installation of Docker, since we will use a docker image to run NWC.
- Working installation of LND (and all of its required components)
- Docker (with Docker compose)
Installation
For the purpose of this tutorial, we will assume that you have your lnd/bitcoind running under user bitcoin with home directory /home/bitcoin. We will also assume that you already have a running installation of Docker (or docker.io).
Prepare and verify
- `git version` - we will need git to get the latest version of NWC.
- `docker version` - should execute successfully and show the currently installed version of Docker.
- `docker compose version` - same as before, but the version will be different.
- `ss -tupln | grep 10009` - should produce the following output:

```
tcp LISTEN 0 4096 0.0.0.0:10009 0.0.0.0:*
tcp LISTEN 0 4096 [::]:10009 [::]:*
```
For things to work correctly, your Docker should be version 20.10.0 or later. If you have an older version, consider installing a new one using instructions here: https://docs.docker.com/engine/install/
Create folders & download NWC
In the home directory of your LND/bitcoind user, create a new folder, e.g., "nwc": `mkdir /home/bitcoin/nwc`. Change to that directory: `cd /home/bitcoin/nwc` and clone the NWC repository: `git clone https://github.com/getAlby/nostr-wallet-connect.git`
Creating the Docker image
In this step, we will create a Docker image that you will use to run NWC.
- Change directory to `nostr-wallet-connect`: `cd nostr-wallet-connect`
- Run command to build Docker image: `docker build -t nwc:$(date +'%Y%m%d%H%M') -t nwc:latest .` (there is a dot at the end)
- The last line of the output (after a few minutes) should look like `=> => naming to docker.io/library/nwc:latest`
- `nwc:latest` is the name of the Docker image with a tag, which you should note for use later.
Creating docker-compose.yml and necessary data directories
- Let's create a directory that will hold your non-volatile data (DB): `mkdir data`
- In the `docker-compose.yml` file, there are fields that you want to replace (<> comments) and port "4321" that you want to make sure is open (check with `ss -tupln | grep 4321`, which should return nothing).
- Create a `docker-compose.yml` file with the following content, and make sure to update the fields that have a <> comment:

```yaml
version: "3.8"
services:
  nwc:
    image: nwc:latest
    volumes:
      - ./data:/data
      - ~/.lnd:/lnd:ro
    ports:
      - "4321:8080"
    extra_hosts:
      - "localhost:host-gateway"
    environment:
      NOSTR_PRIVKEY: <use "openssl rand -hex 32" to generate a fresh key and place it inside "">
      LN_BACKEND_TYPE: "LND"
      LND_ADDRESS: localhost:10009
      LND_CERT_FILE: "/lnd/tls.cert"
      LND_MACAROON_FILE: "/lnd/data/chain/bitcoin/mainnet/admin.macaroon"
      DATABASE_URI: "/data/nostr-wallet-connect.db"
      COOKIE_SECRET: <use "openssl rand -hex 32" to generate fresh secret and place it inside "">
      PORT: 8080
    restart: always
    stop_grace_period: 1m
```
Starting and testing
Now that you have everything ready, it is time to start the container and test.
- While you are in the `nwc` directory (important), execute the following command and check the log output: `docker compose up`
- You should see container logs while it is starting, and it should not exit if everything went well.
- At this point, you should be able to go to `http://<ip of the host where nwc is running>:4321` and get to the interface of NWC.
- To stop the test run of NWC, simply press `Ctrl-C`, and it will shut the container down.
- To start NWC permanently, you should execute `docker compose up -d`; “-d” tells Docker to detach from the session.
- To check currently running NWC logs, execute `docker compose logs`; to run it in tail mode, add `-f` to the end.
- To stop the container, execute `docker compose down`
That's all, just follow the instructions in the web interface to get started.
Updating
As with any software, you should expect fixes and updates that you would need to perform periodically. You could automate this, but it falls outside of the scope of this tutorial. Since we already have all of the necessary configuration in place, the update execution is fairly simple.
- Change directory to the clone of the git repository: `cd /home/bitcoin/nwc/nostr-wallet-connect`
- Run command to build Docker image: `docker build -t nwc:$(date +'%Y%m%d%H%M') -t nwc:latest .` (there is a dot at the end)
- Change directory back one level: `cd ..`
- Restart (stop and start) the docker compose config: `docker compose down && docker compose up -d`
- Done! Optionally you may want to check the logs: `docker compose logs`
-
@ 3bf0c63f:aefa459d
2024-01-15 11:15:06Stupid anglicisms in contemporary Portuguese
Words and expressions that nobody should use, because they don't have the meaning people think they have: they are mere Portuguese renderings of English words which, through quirks of history, have a slightly different meaning in English.
Each error is also accompanied by a suggestion of how to correct it.
Words that exist in Portuguese with a different meaning
- submissão (de trabalhos): envio, apresentação
- disrupção: perturbação
- assumir: considerar, pressupor, presumir
- realizar: perceber
- endereçar: tratar de
- suporte (ao cliente): atendimento
- suportar (uma idéia, um projeto): apoiar, financiar
- suportar (uma função, recurso, característica): oferecer, ser compatível com
- literacia: instrução, alfabetização
- convoluto: complicado.
- acurácia: precisão.
- resiliência: resistência.
Unnecessary Portuguese adaptations of English words
- estartar: iniciar, começar
- treidar: negociar, especular
Expressions
- "não é sobre...": "não se trata de..."
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Trello Attachment Editor
A static JS app that allowed you to authorize with your Trello account, fetch the board structure, find attachments, edit them in the browser then replace them in the cards.
Quite a nice thing. I believe it was done to help with Websites For Trello attached scripts and CSS files.
See also
-
@ d2e97f73:ea9a4d1b
2023-04-11 19:36:53There’s a lot of conversation around the #TwitterFiles. Here’s my take, and thoughts on how to fix the issues identified.
I’ll start with the principles I’ve come to believe…based on everything I’ve learned and experienced through my past actions as a Twitter co-founder and lead:
- Social media must be resilient to corporate and government control.
- Only the original author may remove content they produce.
- Moderation is best implemented by algorithmic choice.
The Twitter when I led it and the Twitter of today do not meet any of these principles. This is my fault alone, as I completely gave up pushing for them when an activist entered our stock in 2020. I no longer had hope of achieving any of it as a public company with no defense mechanisms (lack of dual-class shares being a key one). I planned my exit at that moment knowing I was no longer right for the company.
The biggest mistake I made was continuing to invest in building tools for us to manage the public conversation, versus building tools for the people using Twitter to easily manage it for themselves. This burdened the company with too much power, and opened us to significant outside pressure (such as advertising budgets). I generally think companies have become far too powerful, and that became completely clear to me with our suspension of Trump’s account. As I’ve said before, we did the right thing for the public company business at the time, but the wrong thing for the internet and society. Much more about this here: https://twitter.com/jack/status/1349510769268850690
I continue to believe there was no ill intent or hidden agendas, and everyone acted according to the best information we had at the time. Of course mistakes were made. But if we had focused more on tools for the people using the service rather than tools for us, and moved much faster towards absolute transparency, we probably wouldn’t be in this situation of needing a fresh reset (which I am supportive of). Again, I own all of this and our actions, and all I can do is work to make it right.
Back to the principles. Of course governments want to shape and control the public conversation, and will use every method at their disposal to do so, including the media. And the power a corporation wields to do the same is only growing. It’s critical that the people have tools to resist this, and that those tools are ultimately owned by the people. Allowing a government or a few corporations to own the public conversation is a path towards centralized control.
I’m a strong believer that any content produced by someone for the internet should be permanent until the original author chooses to delete it. It should be always available and addressable. Content takedowns and suspensions should not be possible. Doing so complicates important context, learning, and enforcement of illegal activity. There are significant issues with this stance of course, but starting with this principle will allow for far better solutions than we have today. The internet is trending towards a world where storage is “free” and infinite, which places all the actual value on how to discover and see content.
Which brings me to the last principle: moderation. I don’t believe a centralized system can do content moderation globally. It can only be done through ranking and relevance algorithms, the more localized the better. But instead of a company or government building and controlling these solely, people should be able to build and choose from algorithms that best match their criteria, or not have to use any at all. A “follow” action should always deliver every bit of content from the corresponding account, and the algorithms should be able to comb through everything else through a relevance lens that an individual determines. There’s a default “G-rated” algorithm, and then there’s everything else one can imagine.
The only way I know of to truly live up to these 3 principles is a free and open protocol for social media, that is not owned by a single company or group of companies, and is resilient to corporate and government influence. The problem today is that we have companies who own both the protocol and discovery of content. Which ultimately puts one person in charge of what’s available and seen, or not. This is by definition a single point of failure, no matter how great the person, and over time will fracture the public conversation, and may lead to more control by governments and corporations around the world.
I believe many companies can build a phenomenal business off an open protocol. For proof, look at both the web and email. The biggest problem with these models however is that the discovery mechanisms are far too proprietary and fixed instead of open or extendable. Companies can build many profitable services that complement rather than lock down how we access this massive collection of conversation. There is no need to own or host it themselves.
Many of you won’t trust this solution just because it’s me stating it. I get it, but that’s exactly the point. Trusting any one individual with this comes with compromises, not to mention being way too heavy a burden for the individual. It has to be something akin to what bitcoin has shown to be possible. If you want proof of this, get out of the US and European bubble of the bitcoin price fluctuations and learn how real people are using it for censorship resistance in Africa and Central/South America.
I do still wish for Twitter, and every company, to become uncomfortably transparent in all their actions, and I wish I forced more of that years ago. I do believe absolute transparency builds trust. As for the files, I wish they were released Wikileaks-style, with many more eyes and interpretations to consider. And along with that, commitments of transparency for present and future actions. I’m hopeful all of this will happen. There’s nothing to hide…only a lot to learn from. The current attacks on my former colleagues could be dangerous and doesn’t solve anything. If you want to blame, direct it at me and my actions, or lack thereof.
As far as the free and open social media protocol goes, there are many competing projects: @bluesky is one with the AT Protocol, nostr another, Mastodon yet another, Matrix yet another…and there will be many more. One will have a chance at becoming a standard like HTTP or SMTP. This isn’t about a “decentralized Twitter.” This is a focused and urgent push for a foundational core technology standard to make social media a native part of the internet. I believe this is critical both to Twitter’s future, and the public conversation’s ability to truly serve the people, which helps hold governments and corporations accountable. And hopefully makes it all a lot more fun and informative again.
💸🛠️🌐 To accelerate open internet and protocol work, I’m going to open a new category of #startsmall grants: “open internet development.” It will start with a focus of giving cash and equity grants to engineering teams working on social media and private communication protocols, bitcoin, and a web-only mobile OS. I’ll make some grants next week, starting with $1mm/yr to Signal. Please let me know other great candidates for this money.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28IPFS problems: Community
I was an avid IPFS user until yesterday. Many, many times I asked simple questions for which I couldn't find an answer on the internet in the #ipfs IRC channel on Freenode. Most of the time I didn't get an answer, and even when I did it was rarely from someone who knew IPFS deeply. I've had issues go unanswered on js-ipfs repositories for years – one of these was raising awareness of a problem that then got fixed some months later by a complete rewrite; I closed my own issue after realizing that by myself, some couple of months later. I don't think the people responsible for the rewrite were ever aware that they had fixed my issue.
Some days ago I asked some questions about how the IPFS protocol worked internally, sincerely trying to understand the inefficiencies in finding and fetching content over IPFS. I pointed out that it would be a good idea to have a drawing showing that, so people would understand the difficulties (which I didn't) and wouldn't be pissed off by the slowness. I was told to read the whitepaper. I had already read the whitepaper, but I read the relevant parts again. The whitepaper doesn't explain anything about the DHT and how IPFS finds content. I said that in the room, and was told to read it again.
Before anyone misreads this section, I want to say I understand it's a pain to keep answering people on IRC if you're busy developing stuff of interplanetary importance, and that I'm not paying anyone nor do I have the right to be answered. On the other hand, if you're developing a super-important protocol, financed by many millions of dollars, and a lot of people are hitting their heads against your software with no one to help them, while you're always busy but never deliver anything that brings joy to your users, something is very wrong. I sincerely don't know what IPFS developers are working on; I wouldn't doubt they're working on important things if they said so, but what I see – and what many other users see (take a look at the IPFS Discourse forum) – is bugs, bugs all over the place, confusing UX, and almost no help.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The flexibility of socialist doctrine
The facts of the Russian revolution show that Lenin and his Bolshevik friends were not just murderous psychopaths: they really believed they were doing the right thing.
Perhaps after a while their focus shifted toward caring less about the lives and well-being of others than about their own, but there was no fundamental change.
At the same time, the socialist doctrine they believed in was enormously flexible, just like that of today's leftists. It is the same doctrine: a collection of slogans that can be adapted to support or oppose any other thesis or action.
It seems to me that the justification they found for doing so many clearly bad things comes from this very flexibility. The cruel acts were all justified by the same old collection of socialist slogans, merely adapted to the circumstances.
Would a more solid doctrine lend itself to these atrocities? If we conclude that the flexibility comes from the mind and not from the doctrine itself, yes; but I don't think it comes from there, because it is always socialism that is flexible, never any other doctrine. Or rather, socialism is so flexible that it envelops and integrates any other doctrine that is minimally compatible with it.
Maybe the flexibility really is in the mind, but there is some relation between the mind that knows no coherence or logic and the mind that lets itself be drawn in by socialist slogans.
-
@ 3bf0c63f:aefa459d
2024-01-14 14:52:16Drivechain
Understanding Drivechain requires a shift from the paradigm most bitcoiners are used to. It is not about "trustlessness" or "mathematical certainty", but game theory and incentives. (Well, Bitcoin in general is also that, but people prefer to ignore it and focus on some illusion of trustlessness provided by mathematics.)
Here we will describe the basic mechanism (simple) and incentives (complex) of "hashrate escrow" and how it enables a 2-way peg between the mainchain (Bitcoin) and various sidechains.
The full concept of "Drivechain" also involves blind merged mining (i.e., the sidechains mine themselves by publishing their block hashes to the mainchain without the miners having to run the sidechain software), but this is much easier to understand and can be accomplished either by the BIP-301 mechanism or by the Spacechains mechanism.
How does hashrate escrow work from the point of view of Bitcoin?
A new address type is created. Anything that goes into it is locked and can only be spent if all miners agree on the Withdrawal Transaction (`WT^`) that will spend it, over a period of 6 months. There is one of these special addresses for each sidechain.
To gather miners' agreement, `bitcoind` keeps track of the "score" of all transactions that could possibly spend from that address. On every block mined, for each sidechain, the miner can use a portion of their coinbase to either increase the score of one `WT^` by 1 while decreasing the score of all others by 1; or they can decrease the score of all `WT^`s by 1; or they can do nothing. (A toy sketch of this scoring loop is included below, right after the question about the 6-month delay.)
Once a transaction has gotten a score high enough, it is published and funds are effectively transferred from the sidechain to the withdrawing users.
If a timeout of 6 months passes and the score doesn't meet the threshold, that `WT^` is discarded.
What does the above procedure mean?
It means that people can transfer coins from the mainchain to a sidechain by depositing to the special address. Then they can withdraw from the sidechain by making a special withdraw transaction in the sidechain.
The special transaction somehow freezes funds in the sidechain while a transaction that aggregates all withdrawals into a single mainchain `WT^` is created, which is then submitted to the mainchain miners so they can start voting on it, and finally, after some months, it is published.
Now the crucial part: the validity of the `WT^` is not verified by the Bitcoin mainchain rules. That is, if Bob has requested a withdraw from the sidechain to his mainchain address, but someone publishes a wrong `WT^` that instead takes Bob's funds and sends them to Alice's mainchain address, there is no way the mainchain will know that. What determines the "validity" of the `WT^` is the miner vote score and only that. It is the job of miners to vote correctly -- and for that they may want to run the sidechain node in SPV mode so they can attest to the existence of a reference to the `WT^` transaction in the sidechain blockchain (which then ensures it is ok), or do these checks by some other means.
What? 6 months to get my money back?
Yes. But no, in practice anyone who wants their money back will be able to use an atomic swap, submarine swap or other similar service to transfer funds from the sidechain to the mainchain and vice-versa. The long delayed withdraw costs would be incurred by few liquidity providers that would gain some small profit from it.
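To make the scoring rules above concrete, here is the toy sketch promised earlier. Everything in it is hypothetical: the threshold and the window are illustrative numbers, and the data structures are invented for the example rather than taken from any real `bitcoind` code.

```python
# Toy sketch of hashrate-escrow vote tracking; all numbers and structures
# here are illustrative assumptions, not real consensus parameters.

BLOCKS_6_MONTHS = 26_280   # roughly six months of 10-minute blocks
THRESHOLD = 13_150         # hypothetical approval threshold

class Escrow:
    def __init__(self):
        self.scores = {}   # candidate WT^ txid -> current score
        self.ages = {}     # candidate WT^ txid -> blocks since proposed

    def propose(self, txid):
        self.scores.setdefault(txid, 0)
        self.ages.setdefault(txid, 0)

    def on_block(self, vote):
        # Each block carries one miner vote per sidechain: ("upvote", txid)
        # bumps one WT^ and decreases all others, "downvote" decreases all,
        # and None abstains.
        for txid in self.ages:
            self.ages[txid] += 1
        if vote == "downvote":
            for txid in self.scores:
                self.scores[txid] -= 1
        elif vote is not None:
            _, chosen = vote
            for txid in self.scores:
                self.scores[txid] += 1 if txid == chosen else -1
        for txid in list(self.scores):
            if self.scores[txid] >= THRESHOLD:
                print(f"WT^ {txid} approved; it can now be included in a block")
            elif self.ages[txid] > BLOCKS_6_MONTHS:
                del self.scores[txid]  # timed out: discard this WT^
                del self.ages[txid]
```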
Why bother with this at all?
Drivechains solve many different problems:
It enables experimentation and new use cases for Bitcoin
Issued assets, fully private transactions, stateful blockchain contracts, turing-completeness, decentralized games, some "DeFi" aspects, prediction markets, futarchy, decentralized and yet meaningful human-readable names, big blocks with a ton of normal transactions on them, a chain optimized only for Lightning-style networks to be built on top of it.
These are some ideas that may have merit to them, but were never actually tried because they couldn't be tried with real Bitcoin or interfacing with real bitcoins. They were either relegated to the shitcoin territory or to custodial solutions like Liquid or RSK that may have failed to gain network effect because of that.
It solves conflicts and infighting
Some people want fully private transactions in a UTXO model, others want "accounts" they can tie to their name and build reputation on top; some people want simple multisig solutions, others want complex code that reads a ton of variables; some people want to put all the transactions on a global chain in batches every 10 minutes, others want off-chain instant transactions backed by funds previously locked in channels; some want to spend, others want to just hold; some want to use blockchain technology to solve all the problems in the world, others just want to solve money.
With Drivechain-based sidechains all these groups can be happy simultaneously and don't fight. Meanwhile they will all be using the same money and contributing to each other's ecosystem even unwillingly, it's also easy and free for them to change their group affiliation later, which reduces cognitive dissonance.
It solves "scaling"
Multiple chains like the ones described above would certainly do a lot to accommodate many more transactions than the current Bitcoin chain can. One could have special Lightning Network chains, but even just big block chains or big-block-mimblewimble chains or whatnot could probably do a good job. Or even something less cool like 200 independent chains just like Bitcoin is today, no extra features (and you can call it "sharding"), just that would already multiply the current total capacity by 200.
Use your imagination.
It solves the blockchain security budget issue
The calculation is simple: you imagine what security budget is reasonable for each block in a world without block subsidy and divide that by the amount of bytes you can fit in a single block: that is the price to be paid in satoshis per byte. In any reasonable estimate, the price necessary for every Bitcoin transaction reaches very large amounts, such that not only does any day-to-day transaction have insanely prohibitive costs, but Lightning channel opens and closes also become impracticable.
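As a worked example with made-up numbers (the real figures are anyone's guess, which is the point):

```python
# Hypothetical inputs: what a block "should" collect in fees once the
# subsidy is gone, divided by the space available in that block.
security_budget_btc = 5        # assumed fee revenue needed per block
block_vbytes = 1_000_000       # roughly one block's worth of space
fee_rate = security_budget_btc * 100_000_000 / block_vbytes
print(fee_rate)                # 500 sat/vB, so a ~140 vB payment costs ~70,000 sats
```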
So without a solution like Drivechain you'll be left with only one alternative: pushing Bitcoin usage to trusted services like Liquid and RSK or custodial Lightning wallets. With Drivechain, though, there could be thousands of transactions happening in sidechains and being all aggregated into a sidechain block that would then pay a very large fee to be published (via blind merged mining) to the mainchain. Bitcoin security guaranteed.
It keeps Bitcoin decentralized
Once we have sidechains to accommodate the normal transactions, the mainchain functionality can be reduced to being only a "hub" for the sidechains' comings and goings, and then the maximum block size for the mainchain can be reduced to, say, 100kb, which would make running a full node very very easy.
Can miners steal?
Yes. If a group of coordinated miners is able to secure the majority of the hashpower and keep their coordination for 6 months, they can publish a `WT^` that takes the money from the sidechains and pays it to themselves.
Will miners steal?
No, because the incentives are such that they won't.
Although it may look at first that stealing is an obvious strategy for miners as it is free money, there are many costs involved:
- The cost of ceasing blind-merged mining returns -- as stealing will kill a sidechain, all the fees from it that miners would be expected to earn for the next years are gone;
- The cost of Bitcoin price going down: If a steal is successful that will mean Drivechains are not safe, therefore Bitcoin is less useful, and miner credibility will also be hurt, which are likely to cause the Bitcoin price to go down, which in turn may kill the miners' businesses and savings;
- The cost of coordination -- assuming miners are just normal businesses, they just want to do their work and get paid, but stealing from a Drivechain will require coordination with other miners to conduct an immoral act in a way that has many pitfalls and is likely to be broken over the months;
- The cost of miners leaving your mining pool: when we talked about "miners" above we were actually talking about mining pools operators, so they must also consider the risk of miners migrating from their mining pool to others as they begin the process of stealing;
- The cost of community goodwill -- when participating in a steal operation, a miner will suffer a ton of backlash from the community. Even if the attempt fails in the end, the fact that it was attempted will contribute to growing concerns over exaggerated miner power over the Bitcoin ecosystem, which may end up causing the community to agree on a hard-fork to change the mining algorithm in the future, or to do something to increase the participation of more entities in the mining process (such as the development or cheapening of new ASICs), which has a chance of decreasing the profits of current miners.
Another point to take into consideration is that one may be inclined to think a newly-created sidechain or a sidechain with relatively low usage may be more easily stolen from, since the blind merged mining returns from it (point 1 above) are going to be small -- but the fact is also that a sidechain with small usage will also have less money to be stolen from, and since the other costs besides 1 are less elastic, in the end it will not be worth stealing from these either.
All of the above considerations are valid only if miners are stealing from good sidechains. If there is a sidechain that is doing things wrong, scamming people, not being used at all, or is full of bugs, for example, that will be perceived as a bad sidechain, and then miners can and will safely steal from it and kill it, which will be perceived as a good thing by everybody.
What do we do if miners steal?
Paul Sztorc has suggested in the past that a user-activated soft-fork could prevent miners from stealing, i.e., most Bitcoin users and nodes issue a rule similar to this one to invalidate the inclusion of a faulty `WT^` and thus cause any miner that includes it in a block to be relegated to their own Bitcoin fork that other nodes won't accept.
This suggestion has made people think Drivechain is a sidechain solution backed by user-activated soft-forks for safety, which is very far from the truth. Drivechains must not and will not rely on this kind of soft-fork, although they are possible, as the coordination costs are too high and no one should ever expect these things to happen.
If even with all the incentives against them (see above) miners do still steal from a good sidechain, that will mean the failure of the Drivechain experiment. It will very likely also mean the failure of the Bitcoin experiment, as it will be proven that miners can coordinate to act maliciously over a prolonged period of time regardless of economic and social incentives, meaning they are probably in it just for attacking Bitcoin, backed by nation-states or something else, and therefore no Bitcoin transaction in the mainchain is to be expected to be safe ever again.
Why use this and not a full-blown trustless and open sidechain technology?
Because it is impossible.
If you ever heard someone saying "just use a sidechain", "do this in a sidechain" or anything like that, be aware that these people are either talking about "federated" sidechains (i.e., funds are kept in custody by a group of entities) or they are talking about Drivechain, or they are deluded and think it is possible to do sidechains in some other manner.
No, I mean a trustless 2-way peg with correctness of the withdrawals verified by the Bitcoin protocol!
That is not possible unless Bitcoin verifies all transactions that happen in all the sidechains, which would be akin to drastically increasing the blocksize and expanding the Bitcoin rules in tons of ways, i.e., a terrible idea that no one wants.
What about the Blockstream sidechains whitepaper?
Yes, that was a way to do it. The Drivechain hashrate escrow is a conceptually simpler way to achieve the same thing with improved incentives, less junk in the chain, more safety.
Isn't the hashrate escrow a very complex soft-fork?
Yes, but it is much simpler than SegWit. And, unlike SegWit, it doesn't force anything on users, i.e., it isn't a mandatory blocksize increase.
Why should we expect miners to care enough to participate in the voting mechanism?
Because it's in their own self-interest to do it, and it costs very little. Today over half of the miners mine RSK. It's not blind merged mining, it's a very convoluted process that requires them to run a RSK full node. For the Drivechain sidechains, an SPV node would be enough, or maybe just getting data from a block explorer API, so much much simpler.
What if I still don't like Drivechain even after reading this?
That is the entire point! You don't have to like it or use it as long as you're fine with other people using it. The hashrate escrow special addresses will not impact you at all, validation cost is minimal, and you get the benefit of people who want to use Drivechain migrating to their own sidechains and freeing up space for you in the mainchain. See also the point above about infighting.
See also
-
@ 82341f88:fbfbe6a2
2023-04-11 19:36:53There’s a lot of conversation around the #TwitterFiles. Here’s my take, and thoughts on how to fix the issues identified.
I’ll start with the principles I’ve come to believe…based on everything I’ve learned and experienced through my past actions as a Twitter co-founder and lead:
- Social media must be resilient to corporate and government control.
- Only the original author may remove content they produce.
- Moderation is best implemented by algorithmic choice.
The Twitter when I led it and the Twitter of today do not meet any of these principles. This is my fault alone, as I completely gave up pushing for them when an activist entered our stock in 2020. I no longer had hope of achieving any of it as a public company with no defense mechanisms (lack of dual-class shares being a key one). I planned my exit at that moment knowing I was no longer right for the company.
The biggest mistake I made was continuing to invest in building tools for us to manage the public conversation, versus building tools for the people using Twitter to easily manage it for themselves. This burdened the company with too much power, and opened us to significant outside pressure (such as advertising budgets). I generally think companies have become far too powerful, and that became completely clear to me with our suspension of Trump’s account. As I’ve said before, we did the right thing for the public company business at the time, but the wrong thing for the internet and society. Much more about this here: https://twitter.com/jack/status/1349510769268850690
I continue to believe there was no ill intent or hidden agendas, and everyone acted according to the best information we had at the time. Of course mistakes were made. But if we had focused more on tools for the people using the service rather than tools for us, and moved much faster towards absolute transparency, we probably wouldn’t be in this situation of needing a fresh reset (which I am supportive of). Again, I own all of this and our actions, and all I can do is work to make it right.
Back to the principles. Of course governments want to shape and control the public conversation, and will use every method at their disposal to do so, including the media. And the power a corporation wields to do the same is only growing. It’s critical that the people have tools to resist this, and that those tools are ultimately owned by the people. Allowing a government or a few corporations to own the public conversation is a path towards centralized control.
I’m a strong believer that any content produced by someone for the internet should be permanent until the original author chooses to delete it. It should be always available and addressable. Content takedowns and suspensions should not be possible. Doing so complicates important context, learning, and enforcement of illegal activity. There are significant issues with this stance of course, but starting with this principle will allow for far better solutions than we have today. The internet is trending towards a world where storage is “free” and infinite, which places all the actual value on how to discover and see content.
Which brings me to the last principle: moderation. I don’t believe a centralized system can do content moderation globally. It can only be done through ranking and relevance algorithms, the more localized the better. But instead of a company or government building and controlling these solely, people should be able to build and choose from algorithms that best match their criteria, or not have to use any at all. A “follow” action should always deliver every bit of content from the corresponding account, and the algorithms should be able to comb through everything else through a relevance lens that an individual determines. There’s a default “G-rated” algorithm, and then there’s everything else one can imagine.
The only way I know of to truly live up to these 3 principles is a free and open protocol for social media, that is not owned by a single company or group of companies, and is resilient to corporate and government influence. The problem today is that we have companies who own both the protocol and discovery of content. Which ultimately puts one person in charge of what’s available and seen, or not. This is by definition a single point of failure, no matter how great the person, and over time will fracture the public conversation, and may lead to more control by governments and corporations around the world.
I believe many companies can build a phenomenal business off an open protocol. For proof, look at both the web and email. The biggest problem with these models however is that the discovery mechanisms are far too proprietary and fixed instead of open or extendable. Companies can build many profitable services that complement rather than lock down how we access this massive collection of conversation. There is no need to own or host it themselves.
Many of you won’t trust this solution just because it’s me stating it. I get it, but that’s exactly the point. Trusting any one individual with this comes with compromises, not to mention being way too heavy a burden for the individual. It has to be something akin to what bitcoin has shown to be possible. If you want proof of this, get out of the US and European bubble of the bitcoin price fluctuations and learn how real people are using it for censorship resistance in Africa and Central/South America.
I do still wish for Twitter, and every company, to become uncomfortably transparent in all their actions, and I wish I forced more of that years ago. I do believe absolute transparency builds trust. As for the files, I wish they were released Wikileaks-style, with many more eyes and interpretations to consider. And along with that, commitments of transparency for present and future actions. I’m hopeful all of this will happen. There’s nothing to hide…only a lot to learn from. The current attacks on my former colleagues could be dangerous and don’t solve anything. If you want to blame, direct it at me and my actions, or lack thereof.
As far as the free and open social media protocol goes, there are many competing projects: @bluesky is one with the AT Protocol, nostr another, Mastodon yet another, Matrix yet another…and there will be many more. One will have a chance at becoming a standard like HTTP or SMTP. This isn’t about a “decentralized Twitter.” This is a focused and urgent push for a foundational core technology standard to make social media a native part of the internet. I believe this is critical both to Twitter’s future, and the public conversation’s ability to truly serve the people, which helps hold governments and corporations accountable. And hopefully makes it all a lot more fun and informative again.
💸🛠️🌐 To accelerate open internet and protocol work, I’m going to open a new category of #startsmall grants: “open internet development.” It will start with a focus of giving cash and equity grants to engineering teams working on social media and private communication protocols, bitcoin, and a web-only mobile OS. I’ll make some grants next week, starting with $1mm/yr to Signal. Please let me know other great candidates for this money.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Programming quibbles
- About CouchDB
- my personal approach on using `let`, `const` and `var` in javascript
- A crappy course on torrents
- Multi-service Graph Reputation protocol
- The monolithic approach to CouchDB views
- My stupid introduction to Haskell
- The unit test bubble
- There's a problem with using Git concepts for everything
- On the state of programs and browsers
- Rust's `.into()` is a strictly bad thing
-
@ 3bf0c63f:aefa459d
2024-01-14 14:52:16bitcoind decentralization
It is better to have multiple curator teams, with different vetting processes and release schedules for `bitcoind`, than a single one.
"More eyes on code", "Contribute to Core", "Everybody should audit the code".
All these points repeated again and again fell to Earth on the day it was discovered that Bitcoin Core developers merged a variable name change from "blacklist" to "blocklist" without even discussing or acknowledging the fact that that innocent pull request opened by a sybil account was a social attack.
After a lot of people expressed their dissatisfaction with that event on Twitter and on GitHub, most Core developers simply ignored everybody's concerns or even personally attacked the people who were complaining.
The event has shown that:
1) Bitcoin Core ultimately rests in the hands of a couple of maintainers, and they decide what goes into the GitHub repository[^pr-merged-very-quickly] and the binary releases that will be downloaded by thousands; 2) Bitcoin Core is susceptible to social attacks; 3) "more eyes on code" doesn't matter, as these extra eyes can be ignored and dismissed.
Solution: `bitcoind` decentralization
If usage was spread across 10 different `bitcoind` flavors, the network would be much more resistant to social attacks against any single team.
This has nothing to do with the question of whether it is better to have multiple different Bitcoin node implementations or not, because here we're basically talking about the same software.
Multiple teams, each with their own release process, their own logo, some subtle changes, or perhaps no changes at all, just a different name for their `bitcoind` flavor, and that's it.
Every day or week or month or year, each flavor merges all changes from Bitcoin Core into their own fork. If there's anything suspicious or too leftist (or perhaps too rightist, in case there's a leftist `bitcoind` flavor), maybe they will spot it and not merge it.
This way we keep the best of both worlds: all software development, bugfixes and improvements happen on Bitcoin Core, and the other flavors just copy. If there's some non-consensus change whose efficacy is debatable, one of the flavors can merge it on their fork and test it, and later others -- including Core -- can copy that too. Plus, we become resistant to attacks: if there is an attack on Bitcoin Core, only 10% of the network would be compromised; the other flavors would be safe.
Run Bitcoin Knots
The first example of a `bitcoind` software that follows Bitcoin Core closely, adds some small changes, but has an independent vetting and release process is Bitcoin Knots, maintained by the incorruptible Luke DashJr.
Next time you decide to run `bitcoind`, run Bitcoin Knots instead and contribute to `bitcoind` decentralization!
See also:
[^pr-merged-very-quickly]: See PR 20624, for example, a very complicated change that could be introducing bugs or be a deliberate attack, merged in 3 days without time for discussion.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28microanalytics
A replacement for Google Analytics that ran inside CouchDB, from the days when CouchDB still looked like a potential platform for hosting simple apps and for easily distributing apps along with their data.
It also had a CLI app for browsing the data with nice CLI charts.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Hindu castes in a new key
Shudras seek the greatest good for their own bodies; vaishyas the greatest good for their own earthly life and that of their family; kshatriyas the greatest good for society and this earthly world; brahmins seek the greatest good.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28On "zk-rollups" applied to Bitcoin
ZK rollups make no sense in Bitcoin because there is no "cheap calldata": all data is already ~~cheap~~ expensive calldata.
There could be an onchain zk verification that allows succinct signatures maybe, but never a rollup.
What happens is: you can have one UTXO that contains multiple balances, and in each transaction you recreate that UTXO but alter its state, using a zk proof to compress all the internal transactions that took place.
The blockchain must be aware of all these new things, so it is in no way "L2".
And you must have an entity responsible for that UTXO and for conjuring the state changes and zk proofs.
But on Bitcoin you must also keep the data necessary to rebuild the proofs somewhere else, and I'm not sure how the third party responsible for that UTXO can ensure that happens.
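To make the construct above concrete, here is a minimal Python sketch of it, under loose assumptions: all names are hypothetical, and the zk proof itself is omitted, since the point is only that the chain sees nothing but a commitment while everyone depends on the operator for the underlying data.

```python
import hashlib
import json

def state_root(balances: dict) -> str:
    # Commit to the full set of internal balances with a single hash.
    # A real construct would use a merkle tree or a zk-friendly commitment.
    return hashlib.sha256(json.dumps(balances, sort_keys=True).encode()).hexdigest()

def apply_batch(balances: dict, internal_txs: list) -> tuple[dict, str]:
    # The operator applies many internal transactions off-chain...
    new = dict(balances)
    for sender, receiver, amount in internal_txs:
        assert new.get(sender, 0) >= amount, "insufficient internal balance"
        new[sender] -= amount
        new[receiver] = new.get(receiver, 0) + amount
    # ...then spends the old UTXO and recreates it carrying only the new
    # commitment, plus a zk proof (not modeled here) that the transition
    # was valid.
    return new, state_root(new)

balances = {"alice": 50, "bob": 30, "carol": 20}
balances, root = apply_batch(balances, [("alice", "bob", 10), ("bob", "carol", 5)])
print(root)  # the only thing the chain sees; the rest lives with the operator
```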
I think such a construct is similar to a credit card corporation: one central party upon which everybody depends, zero interoperability with external entities, every vendor must have an account with each credit card company to be able to charge customers. So it is not clear that such a thing is more desirable than solutions that are truly open and interoperable, like Lightning, which may have its defects but at least fosters a much better environment, bringing together different conflicting parties, custodians, anyone.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The problem with ION
ION is a DID method based on a thing called "Sidetree".
I can't say for sure what the problem with ION is, because I don't understand its design, even though I have read all I could and asked everybody I knew. All available information only touches on the high-level aspects of it (and of course its amazing wonders) and no one has ever bothered to explain the details. I've also asked the main designer of the protocol, Daniel Buchner, but he may have thought I was trolling him on Twitter and refused to answer, instead pointing me to an incomplete spec on the Decentralized Identity Foundation website that I had already read before. I even tried to join the DIF as a member so I could join their closed community calls and hear what they say, maybe eventually ask a question, so I could understand it, but my entrance was ignored; then, after many months and a nudge from another member, I was told I had to do a KYC process to be admitted, which I refused.
One thing I know is:
- ION is supposed to provide a way to rotate keys seamlessly and automatically without losing the main identity (and the ION proponents also claim there are no "master" keys because these can also be rotated).
- ION is also not a blockchain, i.e. it doesn't have a deterministic consensus mechanism, and it is decentralized, i.e. anyone can publish data to it, there doesn't have to be a single central server, there may be holes in the available data and the protocol doesn't treat that as a problem.
- From all we know about years of attempts to scale Bitcoin and develop offchain protocols it is clear that you can't solve the double-spend problem without a central authority or a kind of blockchain (i.e. a decentralized system with deterministic consensus).
- Key rotation also suffers from the double-spend problem: whenever you rotate a key it is as if it was "spent", and you aren't supposed to be able to use it again.
The logical conclusion from the four points above is that ION is flawed: it can't provide the key rotation it says it can if it is not a blockchain.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Thoughts on Nostr key management
In "Why I don't like NIP-26 as a solution for key management" I talked about multiple techniques that could be used to tackle the problem of key management on Nostr.
Here are some ideas that work in tandem:
- NIP-41 (stateless key invalidation)
- NIP-46 (Nostr Connect)
- NIP-07 (signer browser extension)
- Connected hardware signing devices
- other things like musig or frostr keys used in conjunction with a semi-trusted server; or other kinds of trusted software, like a dedicated signer on a mobile device that can sign on behalf of other apps; or even a separate protocol that some people decide to use as the source of truth for their keys, and some clients might decide to use that automatically
- there are probably many other ideas
Some premises I have in mind (that may be flawed) and that underlie my thoughts on these matters (and cause me not to worry too much) are that
- For the vast majority of people, Nostr keys aren't a target as valuable as Bitcoin keys, so they will probably be ok even without any solution;
- Even when you lose everything, identity can be recovered -- slowly and painfully, but still --, unlike money;
- Nostr is not trying to replace all other forms of online communication (even though when I think about this I can't imagine one thing that wouldn't be nice to replace with Nostr) or of offline communication, so there will always be ways.
- For the vast majority of people, losing keys and starting fresh isn't a big deal. It is a big deal when you have followers and an online persona and your life depends on that, but how many people are like that? In the real world I see people deleting social media accounts all the time and creating new ones, people losing their phone numbers or other accounts associated with their phone numbers, and not caring very much -- they just find a way to notify friends and family and move on.
We can probably come up with some specs to ease the "manual" recovery process, like social attestation and explicit signaling -- i.e., Alice, Bob and Carol are friends; Alice loses her key; Bob sends a new Nostr event kind to the network saying what Alice's new key is; depending on how much Carol trusts Bob, she can automatically start following that and remove the old key -- or something like that.
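For illustration only, here is what such a recovery-attestation event might look like, sketched in Python; the kind number and tag names are invented for this example and are not part of any existing NIP:

```python
import json
import time

# Hypothetical "key rotation attestation" published by Bob, vouching that
# Alice's old key was replaced by a new one. Kind 30100 and the tag names
# are made up for illustration.
attestation = {
    "kind": 30100,
    "pubkey": "<bob-pubkey-hex>",
    "created_at": int(time.time()),
    "tags": [
        ["p", "<alice-old-pubkey-hex>"],        # the key being replaced
        ["new-key", "<alice-new-pubkey-hex>"],  # the replacement
    ],
    "content": "I attest that Alice rotated her key.",
}
print(json.dumps(attestation, indent=2))

# Carol's client sees this, weighs how much it trusts Bob (perhaps waiting
# for similar attestations from other mutual follows), then swaps the old
# key for the new one in her follow list.
```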
One nice thing about some of these proposals, like NIP-41, or the social-recovery method, or the external-source-of-truth-method, is that they don't have to be implemented in any client, they can live in standalone single-purpose microapps that users open or visit only every now and then, and these can then automatically update their follow lists with the latest news from keys that have changed according to multiple methods.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28rosetta.alhur.es
A service that grabs code samples from two chosen languages on RosettaCode and displays them side-by-side.
The code-fetching is done in real time and snippet-by-snippet (there is also a prefetch of which snippets are available in each language, so we only compare apples to apples).
This was my first Golang web application if I remember correctly.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28IPFS problems: Shitcoinery
IPFS was advertised to the Ethereum community since the beginning as a way to "store" data for their "dApps". I don't think this is harmful in any way, but for some reason it may have led IPFS developers to focus too much on Ethereum stuff. Once I watched a talk showing libp2p developers – despite being ignored by the Ethereum team (which ended up creating their own agnostic p2p library) – dedicating an enormous amount of work to getting a libp2p app running in the browser and talking to a normal Ethereum node.
The always somewhat-abandoned "Awesome IPFS" site is a big repository of "dApps", some of which don't even have their landing pages up anymore: useless Ethereum smart contracts that for some reason use IPFS to store whatever useless data their users produce.
Again, per se it isn't a problem that Ethereum people are using IPFS, but it is at least confusing, maybe misleading, that when you search for IPFS most of the use-cases are actually Ethereum useless-cases.
See also
- Bitcoin, the only non-shitcoin
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Economics
Just a bunch of somewhat-related notes.
- notes on "Economic Action Beyond the Extent of the Market", Per Bylund
- Mises' interest rate theory
- Profits, not wages, as the originary factor
- Reisman on opportunity cost
- Money Supply Measurement
- Per Bylund's insight
- Maybe a new approach to the Austrian Business Cycle Theory, some disorganized thoughts
- An argument according to which fractional-reserve banking is merely theft and nothing else
- Conjecture and criticism
- Qual é o economista? (piadas)
- UBI calculations
- Donations on the internet
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28tempreites
My first library to get stars on GitHub was a very stupid templating library that used just HTML and HTML attributes ("DSL-free"). I was inspired by http://microjs.com/ at the time and ended up not using the library. Probably no one ever did.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28lnurl-auth explained
You may have seen the lnurl-auth spec or heard about it, but might not know how it works or what its relationship with the other lnurl protocols is. This document attempts to solve that.
Relationship between lnurl-auth and other lnurl protocols
First, what is the relationship of lnurl-auth with other lnurl protocols? The answer is none, except the fact that they all share the lnurl format for specifying `https` URLs.
In fact, lnurl-auth is very unique in the sense that it doesn't even need a Lightning wallet to work, it is a standalone authentication protocol that can work anywhere.
How does it work
Now, how does it work? The basic idea is that each wallet has a seed, which is a random value (you may think of the BIP39 seed words, for example). Usually from that seed different keys are derived, each of these yielding a Bitcoin address, and also from that same seed may come the keys used to generate and manage Lightning channels.
What lnurl-auth does is to generate a new key from that seed, and from that a new key for each service (identified by its domain) you try to authenticate with.
That way, you effectively have a new identity for each website. Two different services cannot associate your identities.
The flow goes like this: When you visit a website, the website presents you with a QR code containing a callback URL and a challenge. The challenge should be a random value.
When your wallet scans or opens that QR code it uses the domain in the callback URL plus the main lnurl-auth key to derive a key specific for that website, uses that key to sign the challenge and then sends both the public key specific for that website and the signed challenge to the specified URL.
When the service receives the public key it checks it against the challenge signature and starts a session for that user. The user is then identified only by their public key. If it wants, the service can of course request more details from the user, or associate them with an internal id or username; it is free to do anything. lnurl-auth's goals end here: no passwords, maximum possible privacy.
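Here is a minimal sketch of the wallet side in Python, assuming the third-party `ecdsa` package and a simplified per-domain derivation; the actual lnurl-auth spec prescribes a specific derivation path and signature encoding, which this sketch glosses over:

```python
import hashlib
import hmac
from urllib.parse import urlparse, parse_qs

from ecdsa import SigningKey, SECP256k1  # pip install ecdsa

MASTER_SEED = b"placeholder: the wallet's single precious seed"

def key_for_domain(domain: str) -> SigningKey:
    # Derive a domain-specific private key so each service sees a
    # different identity (simplified stand-in for the spec's derivation).
    secret = hmac.new(MASTER_SEED, domain.encode(), hashlib.sha256).digest()
    return SigningKey.from_string(secret, curve=SECP256k1)

def answer_challenge(callback_url: str) -> dict:
    parsed = urlparse(callback_url)
    k1 = parse_qs(parsed.query)["k1"][0]      # the service's random challenge
    sk = key_for_domain(parsed.hostname)      # keyed by domain: anti-phishing
    sig = sk.sign_digest(bytes.fromhex(k1))   # sign the challenge
    return {
        "k1": k1,
        "sig": sig.hex(),
        "key": sk.get_verifying_key().to_string("compressed").hex(),
    }

# The wallet then calls the callback URL with these as query parameters.
print(answer_challenge("https://example.com/lnurl-auth?tag=login&k1=" + "00" * 32))
```

Because the signing key is derived from the callback domain, a phishing site proxying another site's QR code ends up receiving a signature from a completely different identity, which is the property the FAQ below relies on.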
FAQ
- What is the advantage of tying this to Bitcoin and Lightning?
One big advantage is that your wallet is already keeping track of one seed, it is already a precious thing. If you had to keep track of a separate auth seed it would be arguably worse, more difficult to bootstrap the protocol, and arguably one of the reasons similar protocols, past and present, weren't successful.
- Just signing in to websites? What else is this good for?
No, it can be used for authenticating to installable apps and physical places, as long as there is a service running an HTTP server somewhere to read the signature sent from the wallet. But yes, signing in to websites is the main problem to solve here.
- Phishing attack! Can a malicious website proxy the QR from a third website and show it to the user so it can steal the signature and be able to login on the third website?
No, because the wallet will only talk to the callback URL, and it will either be controlled by the third website, so the malicious website won't see anything; or it will have a different domain, so the wallet will derive a different key and frustrate the malicious website's plan.
- I heard SQRL had that same idea and it went nowhere.
Indeed. SQRL in its first version was basically the same thing as lnurl-auth, with one big difference: it was vulnerable to phishing attacks (see above). That was basically the only criticism it got everywhere, so the protocol creators decided to solve that by introducing complexity to the protocol. While they were at it they decided to add more complexity for managing accounts and so much more crap that the spec, which initially was a single page, ended up becoming 136 pages of highly technical gibberish. Then all the initial network effect it had, the libraries and apps, was trashed, and nowadays no one can do anything with it (but, see, there are still people who love the protocol, writing in a 90's forum with no clue of anything besides their own Java).
- We don't need this, we need WebAuthn!
WebAuthn is essentially the same thing as lnurl-auth, but instead of being simple it is complex, instead of being open and decentralized it is centralized in big corporations, and instead of relying on a key generated by your own device it requires an expensive hardware HSM that you must buy while trusting the manufacturer. If you like WebAuthn and you like Bitcoin you should like lnurl-auth much more.
- What about BitID?
This is another one that is very similar to lnurl-auth, but without the phishing protection and the extra privacy that come from deriving a different key for each service.
- What about LSAT?
It doesn't compete with lnurl-auth. LSAT, as far as I understand it, is for when you're buying individual resources from a server, not authenticating as a user. Of course, LSAT can be repurposed as a general authentication tool, but then it will lack features that lnurl-auth has, like the property of having keys generated independently by the user from a common seed and a standard way of passing authentication info from one medium to another (like signing in to a website at the desktop from the mobile phone, for example).
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28nix
To install neuron I was forced to download and install nix. I couldn't remember why I wasn't already using that marvelous package-installation system, ever since the first time I tried it, years ago.
What a pain to get it working with `fish`, but still much less painful than the previous time. I had to install something called `fish-foreign-environment` (using nix itself!, since the other option was `oh-my-fish` or some other garbage like that) and then use it to apply the bash shell definitions directly in `fish`.
And then I also remembered that `/nix/store` gets way too full, the thing installs everything that exists in this world from scratch. It's only for very rich computers, but let's see how it goes. I'm enjoying neuron (see, I'm using it as a diary), so I'll have to keep nix around.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Bitcoin as a human social system
After all, what is Bitcoin? I won't answer that question by explaining what a "blockchain" is or anything of the sort, as everybody does so terribly. The best explanation in Portuguese I've ever seen is here, but even so no explanation will ever be definitive.
The explanation of the protocol alone -- of what a `bitcoind` program running on a computer does, how it communicates with others on other computers, and the incentives at play to guarantee with reasonable probability that a consensus will be reached about who owns which part of which transaction -- although not overly complicated, will require the beginner to go over it many times before they can feel comfortable saying they understand a little.
And this technical part, despite having been the fundamental insight that generated the miraculous event called Bitcoin, is not the most important part today. If it were, several of those other coins would be competitors of Bitcoin, but they are not, and never can be, because they are nowhere near having the other elements that make up Bitcoin. These are:
- The structure
Bitcoin is a system composed of independent parts.
There are programmers who work on the protocol and on applications, and day after day new programmers arrive and others leave, and they work sometimes together, sometimes without one noticing the other, sometimes on their own, sometimes paid by interested companies.
There are the users who perform full validation, that is, they are running some Bitcoin program and contributing to the diffusion of blocks and transactions, rejecting malicious users and preventing attacks by ill-intentioned miners.
There are the savers, the accumulators or the owners of bitcoins, who know the possibilities the world holds for Bitcoin, await the day when the Bitcoin standard will be a worldwide reality, and for that very reason attribute to their bitcoins values much higher than the current market prices, holding on to them.
"Cryptocurrency" speculators are not part of this system, nor are companies that accept payment in bitcoins only to immediately sell everything for state money, and even less so people who use bitcoins and the Bitcoin brand itself to run their scams and the like.
- The culture
I mentioned that there are companies that pay programmers to work on the open-source code of BitcoinCore or of other programs related to the Bitcoin network -- or even on applications not necessarily tied to the fundamental layer of the protocol. None of these interested companies, however, controls Bitcoin, and that is the main element of Bitcoin's culture.
Bitcoin's purpose has always been to be an open network, without bosses, without politics involved, without the need to ask permission in order to participate. The fact that Satoshi Nakamoto himself voluntarily disappeared from the discussions was fundamental for Bitcoin not to be seen as a system dependent on him, or for him not to be understood as its boss. In other "cryptocurrencies" none of this happened. The supreme boss of Ethereum is still out there giving orders and inventing new elements for the protocol which are automatically accepted by the entire community; the same goes for Zcash, EOS, Ripple, Litecoin and even Bitcoin Cash. Worse still: Satoshi Nakamoto left without any money, he never touched the thousands of bitcoins he generated in the first blocks -- while the leaders of those aforementioned pieces of junk charged their first users a fortune for the right to use them, or are still out there collecting dividends to this day.
All of this and other things besides -- the anti-state, open-p2p-systems-enthusiast mentality of the community's most prominent members, for example -- make an air of freedom be felt, and attempts at centralizing the coin be noticed and execrated.
- The history
The notion that Bitcoin cannot be controlled by anyone went through two tests in 2017 and came out of them greatly strengthened: the first was the split between Bitcoin (BTC) and Bitcoin Cash (BCH), a work of social engineering that had middling success in stealing part of the brand and the users of the true Bitcoin; the second was the attempted complete takeover of Bitcoin promoted by more or less the same interested parties, called SegWit2x, which failed completely, though not before causing trouble and spreading lies in every direction. These two failures proved that Bitcoin, even as a disorganized community, without clear leaders, is immune to capture by interested groups, which is one more miracle -- or, as they say, a Schelling point.
That crucial period in Bitcoin's history made it clear that hard-forks are essentially incompatible with the nature of the protocol, so that in the future there will be no possibility of a suggestion such as printing more bitcoins than were programmed being taken seriously (but, of course, there is always the possibility of the whole culture being lost, of people forgetting the history, and of Bitcoin being co-opted -- hence the importance of self-education and of spreading these principles).
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The Cause
Menger's Principles of Political Economy is the only book that emphasizes the CAUSE all the time. scientists all seem not to know, or to constantly forget, that things have a cause, and that true knowledge is knowledge of the cause of things.
the cause is a metaphysical category far superior to any correlation or hypothesis-test result; it cannot be discovered by any econometric artifice nor reduced to mere statistical temporal antecedence. the cause of phenomena cannot be proven scientifically, but it can be known.
Menger's book tells the reader the causes of various economic phenomena and interlinks them in such a way that the chaotic world of economics seems to acquire an order the moment you read it. it is a magical, indescribable feeling.
when I recommended it to you, what I wanted was to imbue you with the spirit of the search for the cause of things. after reading it, you are able to perceive causal continuity in the most complex phenomena of today's economy, to see the causes linking all government action to its various consequences in human life. I do this every day, and it is the best feeling in the world when the chaos of the news in the newspaper's Economics section -- which to the very journalist who wrote it makes no sense (so much so that he gets everything wrong) -- falls into place within an ordered system of causes and consequences.
I probably always err on some or several points, but even so it is marvelous. or rather, it is even more marvelous when I discover the error and reinsert the correction into that beautiful rationalization of the order of the economic world, which is the order of God.
in a scrap for T.P.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The infinite library
I've now forgotten the name of the Jorge Luis Borges short story in which that library is described, as well as its specific details. I had read the story and never realized that it settles the question of whether randomness is capable of producing valuable things. I actually needed Wikipedia to tell me that.
Some years ago I raised this question with a group of friends without knowing it was such a worn-out, lowly question. In my example it was a dog walking over drawn letters rather than a monkey at a typewriter. My conclusion from the discussion was that no matter what the dog wrote, without an intelligence capable of understanding it, it would all be nothing but random letters.
Borges resolves everything by imagining a library that contains everything the dog wrote during the entire infinity in which it carried out the experiment, and which therefore contains all knowledge about everything and all possible literary works -- but between every very good, or at least readable, page or sentence there are tons of completely random books, and a person can spend their whole life inside this library, which contains so much important knowledge, and still learn nothing, because they will never find the right books.
Everything would be in its blind volumes. Everything: the detailed history of the future, Aeschylus' The Egyptians, the exact number of times that the waters of the Ganges have reflected the flight of a falcon, the secret and true nature of Rome, the encyclopedia Novalis would have constructed, my dreams and half-dreams at dawn on August 14, 1934, the proof of Pierre Fermat's theorem, the unwritten chapters of Edwin Drood, those same chapters translated into the language spoken by the Garamantes, the paradoxes Berkeley invented concerning Time but didn't publish, Urizen's books of iron, the premature epiphanies of Stephen Dedalus, which would be meaningless before a cycle of a thousand years, the Gnostic Gospel of Basilides, the song the sirens sang, the complete catalog of the Library, the proof of the inaccuracy of that catalog. Everything: but for every sensible line or accurate fact there would be millions of meaningless cacophonies, verbal farragoes, and babblings. Everything: but all the generations of mankind could pass before the dizzying shelves – shelves that obliterate the day and on which chaos lies – ever reward them with a tolerable page.
I have the impression that the gigantic outpouring of articles, posts, books and everything else is turning the world into this library. There is so much to read that it is hard to find what is worthwhile. People need to stop writing.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28My stupid introduction to Haskell
While I was writing my first small program in Haskell (really simple, but functional webapp) in December 2017 I only knew vaguely what the style of things was, plus some basic notions about functions, pure functions and so on (I've read about a third of LYAH).
An enormous number of questions began to appear in my head while I read tutorials and documentation. Here I present some of the questions and the insights I got that solved them. Technically, they may be wrong, but they helped me advance in the matter, so I'm writing them down while I still can -- if I keep working with Haskell I'll probably get to know more, so my new insights will replace the previous ones, and the new ones won't be useful for total beginners anymore.
Here we go:
- Why do modules have odd names?
  - modules are the things you import, like `Data.Time.Clock` or `Web.Scotty`.
  - packages are the things you install, like 'time' or 'scotty'
  - packages can contain any number of modules they like
  - a module is just a collection of functions
  - a package is just a collection of modules
  - a package is just a name you choose to associate your collection of modules with when you're publishing it to Hackage or whatever
  - the module names you choose when you're writing a package can be anything, and these are the names people will have to `import` when they want to use your functions
  - if you're from Javascript, Python or anything similar, you'll expect to be importing/writing the name of the package directly in your code, but in Haskell you'll actually be writing the name of the module, which may have nothing to do with the name of the package
  - people choose things that make sense, like for `aeson` instead of `import Aeson` you'll be doing `import Data.Aeson`, `import Data.Aeson.Types` etc. why the `Data`? because they thought it would be nice. dealing with JSON is a form of dealing with data, so be it.
  - you just have to check the package documentation to see which modules it exposes.
- What is `data User = User { name :: Text }`?
  - a data type definition. it means you'll have a function `User` that will take a Text parameter and output a `User` record or something like that.
  - you can also have `data Animal = Giraffe { color :: Text } | Human { name :: Text }`, so you'll have two functions, Giraffe and Human, each can take a different set of parameters, but they will both yield an Animal.
  - then, in the functions that take an Animal parameter you must typematch to see if the animal is a giraffe or a human.
- What is a monad?
  - a monad is a context, an environment.
  - when you're in the context of a monad you can write imperative code.
  - you do that when you use the keyword `do`.
  - in the context of a monad, all values are prefixed by the monad type,
  - thus, in the `IO` monad all `Text` is `IO Text` and so on.
  - some monads have a relationship with others, so values from that monad can be turned into values from another monad and passed between contexts easily.
  - for example, scotty's `ActionM` and `IO`. `ActionM` is just a subtype of `IO` or something like that.
  - when you write imperative code inside a monad you can do assignments like `varname <- func x y`
  - in these situations some transformation is done by the `<-`, I believe it is that the pure value returned by `func` is being transformed into a monad value. so if `func` returns `Text`, now varname is of type `IO Text` (if we're in the IO monad).
    - so it will not work (and it can be confusing) if you try to concatenate functions like `varname <- transform $ func x y`, but you can somehow do `varname <- func x y` and then `othervarname <- transform varname`
    - or you can do other fancy things you'll get familiar with later, like `varname <- fmap transform $ func x y`
    - why? I don't know.
- How do I deal with Maybe, Either or other crazy stuff? "ok, I understand what is a Maybe: it is a value that could be something or nothing. but how do I use that in my program?"
  - you don't! you turn it into other thing. for example, you use `fromMaybe`, a function that takes a default value and that's it. if your `Maybe` is `Just x` you get `x`, if it is `Nothing` you get the default value.
    - using only that function you can already do whatever there is to be done with Maybes.
    - you can also manipulate the values inside the `Maybe`, for example:
    - if you have a `Maybe Person` and `Person` has a `name` which is `Text`, you can apply a function that turns `Maybe Person` into `Maybe Text` AND ONLY THEN you apply the default value (which would be something like `"unnamed"`) and take the name from inside the `Maybe`.
  - basically these things (`Maybe`, `Either`, `IO` also!) are just tags. they tag the value, and you can do things with the values inside them, or you can remove the values.
    - besides the example above with Maybes and the `fromMaybe` function, you can also remove the values by using `case` -- for example:
      `case x of`
      `  Left error -> error`
      `  Right success -> success`
      `case y of`
      `  Nothing -> "nothing!"`
      `  Just value -> value`
    - (in some cases I believe you can't remove the values, but in those cases you also won't need to)
    - for example, for values tagged with IO, you can't remove the IO and turn these values into pure values, but you don't need that: you can just take the value from the outside world, so it's an `IO Text`, apply functions that modify that value inside IO, then output the result to the user -- this is enough to make a complete program, any complete program.
- JSON and interfaces (or instances?)
  - using Aeson is easy, you just have to implement the `ToJSON` and `FromJSON` interfaces.
  - "interface" is not the correct name, but I don't care.
  - `ToJSON`, for example, requires a function named `toJSON`, so you do
    `instance ToJSON YourType where`
    `  toJSON (YourType your type values) = object [] ...` etc.
  - I believe lots of things require interface implementation like this and it can be confusing, but once you know the mystery of implementing functions for interfaces everything is solved.
  - `FromJSON` is a little less intuitive at the beginning, and I don't know if I did it correctly, but it is working here. Anyway, if you're trying to do that, I can only tell you to follow the types, copy examples from other places on the internet and not care about the meaning of symbols.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The case of Grêmio TV
while making his way along the upper deck of that arena believed to be entirely filled with supporters of the famous Grêmio team of Porto Alegre, the most discreet journalist found himself -- as if by the work of some necromancer, of the many that exist and are at every moment committing wicked deeds and placing themselves in the path of those who seek, if not to do good above all things, at least to do no evil in the course of pursuing their own interests -- cursed and ground up in words by a horde of little scoundrels that had appeared five or six steps away from him, singing and moving their arms in movements that cannot be classified as anything but barbaric, and thus they sang:
Grêmio TV, worse than SBT!
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Why IPFS cannot work, again
Imagine someone comes up with a solution for P2P content-addressed data-sharing that involves storing all the files' contents in all computers of the network. That wouldn't work, right? Too much data, if you think this can work then you're a BSV enthusiast.
Then someone comes up with the idea of not storing everything in all computers, but only some things on some computers, based on some algorithm to determine what data a node would store given its pubkey or something like that. Still wouldn't work, right? Still too much data no matter how much you spread it, but mostly the incentives are not aligned, so it would implode on the first day.
Now imagine someone says they will do the same thing, but instead of storing the full contents each node would only store a pointer to where each data is actually available. Does that make it better? Hardly so. Still, you're just moving the problem.
This is IPFS.
Now you have less data on each computer, but on a global scale that is still a lot of data.
No incentives.
And now you have the problem of finding the data. First if you have some data you want the world to access you have to broadcast information about that, flooding the network -- and everybody has to keep doing this continuously for every single file (or shard of file) that is available.
And then whenever someone wants some data they must find the people who know about that, which means they will flood the network with requests that get passed from peer to peer until they get to the correct peer.
The more you force each peer to store, the worse it becomes to run a node and to store data on behalf of others -- but the less you force each peer to store, the more flooding you'll have on the global network, and the slower it will be for anyone to actually get any file.
But if everybody just saves everything to Infura or Cloudflare then it works, magic decentralized technology.
Related
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28A list of things artificial intelligence is not doing
If AI is so good why can't it:
- write good glue code that wraps a documented HTTP API?
- make good translations using available books and respective published translations?
- extract meaningful and relevant numbers from news articles?
- write mathematical models that fit perfectly to available data better than any human?
- play videogames without cheating (i.e. simulating human vision, attention and click speed)?
- turn pure HTML pages into pretty designs by generating CSS
- predict the weather
- calculate building foundations
- determine stock values of companies from publicly available numbers
- smartly and automatically test software to uncover bugs before releases
- predict sports matches from the ball and the players' movement on the screen
- continuously improve niche/local search indexes based on user input and reaction to results
- control traffic lights
- predict sports matches from news articles, and teams and players' history
This was posted first on Twitter.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Who will build the roads?
Who will build the roads? In Lagoa Santa, the newest and best streets -- which in fact end up forming enormous webs of interconnecting neighborhoods -- are built by the land developers who want the streets so that their lots will be worth more -- and who want other people to use the streets too. It is also these same developers who install the light poles and the water pipes, though not without first having to submit to the customary extortions practiced by COPASA and CEMIG.
If, when opening a subdivision, a condominium or a building, an individual or a company can run streets, electricity, water and sewage without much trouble, why couldn't free competition exist in these markets? Even that old story that it is inefficient to lay duplicate power cables so that electric companies can compete now strikes me as nonsense.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The "Drivechain will replace altcoins" argument
The argument that Drivechain will replace shitcoins is not that people will sell their shitcoins or that the existing shitcoins will instantly vanish. The argument is about a change at the margin that eventually ends up killing the shitcoins or reducing them to their original insignificance.
What does "at the margin" mean? For example, when the price of the coconut drops a little in relation to bananas, does that mean that everybody will stop buying bananas and will buy only coconuts now? No. Does it mean there will be zero increase in the amount of coconuts sold? Also no. What happens is that there is a small number of people who would have preferred to buy coconuts if only they were a little less expensive but end up buying bananas instead. When the price of coconut drops these people buy coconuts and don't buy bananas.
The argument is that the same thing will happen when Drivechain is activated: there are some people today (yes, believe me) that would have preferred to work within the Bitcoin ecosystem but end up working on shitcoins. In a world with Drivechain these people would be working on the Bitcoin ecosystem, for the benefit of Bitcoin and the Bitcoiners.
Why would they prefer Bitcoin? Because Bitcoin has a bigger network-effect. When these people come, they increase Bitocin's network-effect even more, and if they don't go to the shitcoins they reduce the shitcoins' network-effect. Those changes in network-effect contribute to bringing others who were a little further from the margin and the thing compounds until the shitcoins are worthless.
Who are these people at the margin? I don't know, but they certainly exist. I would guess the Stark people are one famous example, but there are many others. In the past, examples included Roger Ver, Zooko Wilcox, Riccardo Spagni and Vitalik Buterin. And before you start screaming that these people are shitcoiners (which they are) imagine how much bigger Bitcoin could have been today if they and their entire communities (yes, I know, of awful people) were using and working for Bitcoin today. Remember that phrase about Bitcoin being for enemies?
But everything that has been invented in the altcoin world is awful, we don't need any of that!
You and I should not be the ones judging what is good and what is not for others, but you, I, and everyone else will benefit if these things can be done in a way that increases Bitcoin's network-effect and pays fees to Bitcoin miners.
Also, there is a much stronger point you may not have considered: if you believe all altcoiners are scammers, that means we have only seen the things that were invented by scammers, since all honest people who had good ideas decided not to implement them, as the only way to do so would have been to create a scammy shitcoin. One example is Bitcoin Hivemind.
If it is possible to do these ideas without creating shitcoins we may start to see new things that are actually good.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28How IPFS is broken
I once fell for this talk about "content-addressing". It sounds very nice. You know a certain file exists, you know there are probably people who have it, but you don't know where or if it is hosted on a domain somewhere. With content-addressing you can just say "start" and the download will start. You don't have to care.
Other magic properties that address common frustrations: webpages don't go offline, links don't break, valuable content always finds its way, other people will distribute your website for you, any content can be transmitted easily to people near you without anyone having to rely on third-party centralized servers.
But you know what? Saying a thing is good doesn't automatically make it possible and working. For example: saying stuff is addressed by its content doesn't change the fact that the internet is "location-addressed", and you still have to know where the peers that have the data you want are, and connect to them.
And what is the solution for that? A DHT!
DHT?
Turns out DHTs have a terrible incentive structure (as you would expect: no one wants to hold and serve data they don't care about to others for free) and the IPFS experience proves it doesn't work even in a network as small as today's IPFS.
If you have run an IPFS client you'll notice how much it clogs your computer. Or maybe you won't, if you are very rich and have a really powerful computer, but still, it's not something suitable to be run on the entire world, and on web pages, and servers, and mobile devices. I imagine there may be a lot of unoptimized code and technical debt responsible for these and other problems, but the DHT is certainly the biggest part of it. IPFS can open up to 1000 connections by default and suck up all your bandwidth -- and that's just for exchanging keys with other DHT peers.
Even if you're in the "client" mode and limit your connections you'll still get overwhelmed by connections that do stuff I don't understand -- and it makes no sense to run an IPFS node as a client, that defeats the entire purpose of making every person host files they have and content-addressability in general, centralizes the network and brings back the dichotomy client/server that IPFS was created to replace.
Connections?
So, DHTs are a fatal flaw for a network that plans to be big and interplanetary. But that's not the only problem.
Finding content on IPFS is the slowest experience ever, and for some reason I don't understand, downloading is even slower. Even if you are on the same LAN as another machine that has the content you need, it will still take hours to download some small file you could get in seconds with `scp` -- and that's assuming IPFS managed to find the other machine at all, otherwise your command will just be stuck for days.
Now even if you ignore that IPFS objects should be content-addressable and not location-addressable and, knowing which peer has the content you want, you go there and explicitly tell IPFS to connect to the peer directly, maybe you can get some seconds of (slow) download, but then IPFS will drop the connection and the download will stop. Sometimes -- but not always -- it helps to add the peer address to your bootstrap nodes list (but notice this isn't something you should be doing at all).
IPFS Apps?
Now consider the kind of marketing IPFS does: it tells people to build "apps" on IPFS. It sponsors "databases" on top of IPFS. It basically advertises itself as a place where developers can just connect their apps to and all users will automatically be connected to each other, data will be saved somewhere between them all and immediately available, everything will work in a peer-to-peer manner.
Except it doesn't work that way at all. "libp2p", the IPFS library for connecting people, is broken and is rewritten every 6 months, but they keep their beautiful landing pages that say everything works magically and you can just plug it in. I'm not saying they should have everything perfect, but at least they should be honest about what they truly have in place.
It's impossible to connect to other people, after years there's no js-ipfs and go-ipfs interoperability (and yet they advertise there will be python-ipfs, haskell-ipfs, whoknowswhat-ipfs), connections get dropped and many other problems.
So basically all IPFS "apps" out there are just apps that want to connect two peers but can't do it manually because browsers and the IPv4/NAT network don't provide easy ways to do it and WebRTC is hard and requires servers. They have nothing to do with "content-addressing" anything, they are not trying to build "a forest of merkle trees" nor to distribute or archive content so it can be accessed by all. I don't understand why IPFS has changed its core message to this "full-stack p2p network" thing instead of the basic content-addressable idea.
IPNS?
And what about the database stuff? How can you "content-address" a database with values that are supposed to change? Their approach is to just save all values, past and present, and then use new DHT entries to communicate what the newest values are. This is the IPNS thing.
Apparently just after coming up with the idea of content-addressability IPFS folks realized this would never be able to replace the normal internet, as no one would even know what kinds of content existed or when some content was updated -- and they didn't want to coexist with the normal internet, they wanted to replace it all, because that message is bolder and gets more funding, maybe?
So they invented IPNS, the name system that introduces location-addressability back into the system that was supposed to be only content-addressable.
And how do they manage to do it? Again, DHTs. And does it work? Not really. It's limited, slow, much slower than normal content-addressing fetches, and most of the time it doesn't even work after hours. But still, although developers will tell you it is not working yet, the IPFS marketing talks about it as if it were a thing.
Archiving content?
The main use case I had for IPFS was to store content that I personally cared about and that other people might care too, like old articles from dead websites, and videos, sometimes entire websites before they're taken down.
So I did that. Over many months I've archived stuff on IPFS. The IPFS API and CLI don't make it easy to track where stuff is. The `pin` command doesn't help, as it just throws your pinned hash into a sea of hashes and subhashes, and you're never able to find again what you have pinned.
The IPFS daemon has a fake filesystem that is half-baked in functionality but allows you to locally address things by names in a tree structure. Very hard to update or add new things to it, but still doable. It allows you to give names to hashes, basically. I even began to write a wrapper for it, but suddenly, after many weeks of careful content curation and distribution, all my entries in the fake filesystem were gone.
Despite not having lost any of the files, I did lose everything, as I couldn't find them in the sea of hashes I had on my own computer. After some digging and help from IPFS developers I managed to recover a part of it, but it involved hacks. My things vanished because of a bug in the fake filesystem. The bug was fixed, but soon after I experienced a similar (new) bug. After that I even tried to build a service for hash archival and discovery, but as all the problems listed above began to pile up I eventually gave up. There were also problems of content canonicalization, of the code the IPFS daemon uses to serve default HTML content over HTTP, problems with the IPFS browser extension and others.
Future-proof?
One of the core advertised features of IPFS was that it made content future-proof. I'm not sure they used this expression, but basically you have content, you hash that, you get an address that never expires for that content, now everybody can refer to the same thing by the same name. Actually, it's better: content is split and hashed in a merkle-tree, so there's fine-grained deduplication, people can store only chunks of files and when a file is to be downloaded lots of people can serve it at the same time, like torrents.
But then come the protocol upgrades. IPFS has used different kinds of hashing algorithms, different ways to format the hashes, and will change the default algorithm for building the merkle-trees, so basically the same content now has a gigantic number of possible names/addresses, which defeats the entire purpose, and yes, files hashed using different strategies aren't automagically compatible.
Actually, the merkle algorithm could have been changed by each person on a file-by-file basis since the beginning (you could for example split a book file by chapter or page instead of by chunks of bytes) -- although probably no one ever did that. I know it's not easy to come up with the perfect hashing strategy on the first go, but the way these matters are being approached makes me wonder whether IPFS promoters really care about future-proofing, or maybe we're just in Beta phase forever.
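A toy illustration of the incompatibility: the same bytes, hashed under two different chunking strategies, produce entirely different addresses (plain SHA-256 here, standing in for IPFS's real CID machinery):

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

content = b"the exact same file contents" * 1000

def merkle_root(data: bytes, chunk_size: int) -> str:
    # Split into fixed-size chunks, hash each chunk, then hash the
    # concatenation of chunk hashes. (Real IPFS builds a DAG, but the
    # point stands: the chunking strategy changes the final hash.)
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return h(b"".join(h(c) for c in chunks)).hex()

# Same content, different chunking, totally different "name":
print(merkle_root(content, 1024))
print(merkle_root(content, 4096))
```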
Ethereum?
This is also a big problem. IPFS is built by Ethereum enthusiasts. I can't read the minds of the people behind IPFS, but I would imagine they have a poor understanding of incentives, like the Ethereum people, and they tend towards scammer-like behavior: getting a ton of funds from investors in exchange for promises they don't know they can fulfill (like Filecoin and IPFS itself) based on half-truths, changing stuff in the middle of the road because some top managers decided they wanted to change it (move fast and break things), and squatting fancy names like "distributed web".
The way they market IPFS (which is not the main thing IPFS was initially designed to do) as a "peer-to-peer cloud" is very seductive to Ethereum developers, just like Ethereum itself is: a place somewhere that will run your code for you, so you don't have to host a server or have any responsibility, and then Infura will serve the content to everybody. In the same vein, Infura is also hosting and serving IPFS content for Ethereum developers these days, for free. Ironically, just like Ethereum's hoax of peer-to-peer money, the IPFS peer-to-peer network may begin to work better for end users as things get more and more centralized.
More about IPFS problems:
- IPFS problems: Too much immutability
- IPFS problems: General confusion
- IPFS problems: Shitcoinery
- IPFS problems: Community
- IPFS problems: Pinning
- IPFS problems: Conceit
- IPFS problems: Inefficiency
- IPFS problems: Dynamic links
See also
- A crappy course on torrents, on the protocol that has done most things right
- The Tragedy of IPFS in a series of links, an ongoing Twitter thread.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Classless Templates
There are way too many hours being wasted on making themes for blogs. And then a new blog framework comes along and requires new themes. Old themes can't be used because they relied on different ways of rendering the website. Everything is a mess.
Classless was an attempt at solving it. It probably didn't work because I wasn't the best person to make themes and showcase the thing.
Basically everybody would agree on a simple HTML template that could fit blogs and simple websites very easily. Then other people would make pure-CSS themes expecting that template to be in place.
No classes were needed, only a fixed structure of `header`, `.main`, `article` etc.

With flexbox and grid, CSS was enough to make this happen.
The templates that were available were all ported by me from other templates I saw on the web, and there was a simple one I created for my old website.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28An imbecile algorithm of evolution
Suppose you want to write the word BANANA starting from OOOOOO and using only random changes to the letters. The changes happen by the original word multiplying into several others, each with a different mutation.
In the first period, BOOOOO and OOOOZO appear. And then the environment decides that all words that don't start with a B are eliminated. Only BOOOOO remains and the algorithm continues.
It is easy to conceive of the evolution of species happening this way, as long as you always control the part where the environment decides who survives.
However, there are only two options:
- If the environment decides things randomly, the chance of arriving at the correct word using this method is so small that it can be considered null.
- If the environment decides things deliberately, we fall into //intelligent design//.
I believe this is a decent statement of the "no free lunch" argument applied to the critique of Darwinism by William Dembski.
The Darwinist answer consists in saying that there is no such BANANA as a final goal. Words can keep changing randomly, and whatever remains, remains; we cannot say that a goal was reached or failed to be reached. And then the advocates of intelligent design will say that the result we arrived at cannot have been the fruit of a random process. BANANA is qualitatively different from AYZOSO, and there are several ways of "proving" that it is, using mathematical models and such.
I am left with the impression, though, that this thing can only be resolved as a yes or a no through a discussion of premises, and a point is reached where no mathematical proofs are possible anymore, only subjectivity.
Then I remember my humble solution to the problem of the dog that randomly presses the keys of a keyboard and writes the complete works of Shakespeare: even if it does so, none of it will have any meaning without a human-like intelligence there to read it and realize that it is not a mess, but a text that has meaning to him. The miracle happens not at the moment the dog stumbles over the keyboard, but at the moment the man looks at the screen.
Whether the algorithm of evolution arrived at the word BANANA or at UXJHTR makes no difference to it, but it makes a difference to us, who have a human intelligence and are watching it. The man would also think there is //something// behind that event of the dog typing the works of Shakespeare -- and how could anyone in their right mind think otherwise?
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Lagoa Santa: how to get there -- starting from the Belo Horizonte bus station
Stepping off your bus at the Belo Horizonte bus station at a little past 4 in the morning, you will come face to face with a cowboy drinking beer in his full regalia at a bar right in the arrivals area. Go up the staircase on the right, which leads to the station's parking lot. Turn left and walk roughly 400 meters, crossing an area where suspicious people -- though probably asleep standing up -- watch you, and then a little square occupied by a clan of beggars. When you spot an enormous obelisk in the middle of an intersection of two avenues, turn left and walk another 400 meters. You will see an enormous, old and beautiful station with a square in front of it, with lovely water fountains. Run from there and head to a stretch of street to the right of that square. An old stage from carnivals past will be set up more or less in the middle of the charming little cobblestone street: that is where you will catch your next bus.
To enter the station you need a card with rechargeable credits. A prudent traveler always leaves some credits on his card in order to avoid lines and other availability problems when arriving tired from a trip, in a hurry or at unusual hours. This type of person will realize he has been completely swindled upon noticing that the credits on his card, topped up on his last visit to Belo Horizonte three months ago, have expired and been absorbed by the public coffers. He will therefore have to buy more credits. The booth where the cards are topped up opens at 5 a.m., but don't be surprised if it still hasn't opened when the first bus arrives, at 5:10.
With some luck, a young man in a hoodie, authorized by two or three bus-system inspectors who chat cheerfully among themselves, will be operating the turnstile. He lets the drunks, the hustlers and the street kids in without paying. Quite empathetic and perceptive of other people's despair, this good fellow will probably let you in without paying too.
Once inside the bus, don't be intimidated by the loudmouths and bullies who, deeply offended at the driver for stopping at the stations after the previous buses ignored these most excellent passengers waiting there, come screaming to demand satisfaction.
The bus's final stop, 40 minutes later, is the Morro Alto terminal. There you will see, if you look carefully among various buses and people who arouse your most honest suspicion, a dark, unlit vehicle, numbered 5882, sheltering inside it a driver and a fare collector sleeping the sleep of the just.
Wait by the door for another twenty minutes or so until, suddenly awake, the driver starts the bus, opens the doors and already begins, gently, to pull away. Run in, but then wait a bit longer, while the people with loaded cards go through and take the best seats, until the fare collector wakes up and decides to charge you the fare in that old means of payment, once the most liquid of them all: cash.
This last bus should finally take you to Lagoa Santa.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28OP_CHECKTEMPLATEVERIFY and the "covenants" drama
There are many ideas for "covenants" (I don't think this concept helps in the specific case of examining proposals, but fine). Some people think "we" (it's not obvious who is included in this group) should somehow examine them and come up with the perfect synthesis.
It is not clear what form this magic gathering of ideas will take and who (or which ideas) will be allowed to speak, but suppose it happens and there is intense research and conversations and people (ideas) really enjoy themselves in the process.
What are we left with at the end? Someone has to actually commit the time, put in the effort and come up with a concrete proposal to be implemented on Bitcoin, and whatever the result is, it will have trade-offs. Some great features will not make it into this proposal, others will make it in a worsened form, and some will be contemplated very nicely; there will be some extra costs related to maintenance or code complexity that will have to be taken. Someone, a concrete person, will decide upon these things using their own personal preferences and biases, and many people will not be pleased with their choices.
That has already happened. Jeremy Rubin has already conjured all the covenant ideas in a magic gathering that lasted more than 3 years and came up with a synthesis that has the best trade-offs he could find. CTV is the result of that operation.
The fate of CTV in popular opinion, illustrated by the thoughtless responses it has evoked, such as "can we do better?" and "we need more review and research and more consideration of other ideas for covenants", is a preview of what would probably happen if these suggestions were followed again and someone spent the next 3 years considering ideas, talking to other researchers and coming up with a new synthesis. Again, that person would be faced with "can we do better?" responses from people who were not happy enough with the choices.
And unless some famous Bitcoin Core developers (current or retired) were personally attracted by this new synthesis, it would take a long time for them to review it and give their blessing.
To summarize the argument of this article: the actual issue in the current CTV drama is that there exist hidden criteria for proposals to be accepted by the general community into Bitcoin, and no one has these criteria clear in their minds. It is not as simple or as straightforward as "do research", nor is it as humanly impossible as "get consensus"; it has a much bigger social element to it, but I also do not know the exact form of these hidden criteria.
This is said not to blame anyone -- except the ignorant people who are not aware of the existence of these things and just keep repeating completely false and unhelpful advice for Jeremy Rubin and are not self-conscious enough to ever realize what they're doing.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Jofer
Jofer was a different kind of player. Not at first sight: he looked the same as any other, a combative defensive midfielder who chased the opposing attackers relentlessly, a good player. But that was not the trait that set Jofer apart. Jofer was, let's say, a kicker.
It began in a semifinal of a junior tournament. Jofer's team needed a draw and was under heavy pressure from the opponent, but the game was 1-1 and looked like it would stay that way, in that footballing way in which things look like it, really look like it. Except that at the 46th minute of the second half they conceded a ghostly goal: Ruizinho from the other team sprinted down the left and, despite being left-footed, kept cutting toward the middle, the defenders sort of assuming it was already over, there should be just that one last play, the referee had given two minutes; Ruizinho shot, scored, and the goalkeeper, who only jumped after he had already seen there was no way, stood there cursing.
The ball went back to the center spot and was passed to Jofer; nobody even came to mark him, the other team was already celebrating, and rightly so, the referee was messing with them by making the game continue, it was all over anyway. But no, he was right: one more minute of added time, fair enough. In one minute you can score a goal. But how? Jofer thought of those NBA games in which, with a few hundredths of a second left, the point guard hurls the ball at the basket any which way and sometimes scores. From behind the halfway line, maybe? I won't even have the strength to reach the goal. I'll become a joke; better to pass it to Fumaça here beside me and we lose without this humiliation at the end. But, heck, so what? I'll try anyway; worst case I'll say it was a long pass and in a few days everybody forgets. He looked at his own foot, turned it sideways, outward and then inward (well, if I strike it from here, just right, who knows?), tossed the ball to the side and struck. The ball rose scandalously, really high indeed, it must have climbed some 200 meters. Jofer had no way of having the slightest idea. Then it began to descend, the big goalkeeper running back under the crossbar and watching the ball, arriving and jumping just to follow it, to watch, hanging from the crossbar, the ball come down still quite high, hit the inside of the side netting before hitting the ground, bounce violently and bulge the net high up on the right side, from the point of view of whoever was watching.
But all of that was Jofer's dream. He dreamed it awake, on a night when it took him long to fall asleep, lying in his bed. He kept wondering whether it wouldn't be easy, if he trained a lot, to hit the goal from very far away, like in the dream, and whether goals couldn't be scored that way. The next day he asked Brunildinho, the goalkeepers' coach. Those balls were hard to defend, all the more if they came down from very high; the goalkeeper lost all perspective, the wind altered the trajectory at every instant, there was spin, the ball would drop fast -- but of course it wasn't worth training for that, the chance of hitting the goal was minuscule. Still, Jofer would only try it after training a lot and confirming what in his imagination seemed like an excellent idea.
He started training every day. At first in secret, out of embarrassment before his teammates: he would arrive a little early and stay there, shooting from the center circle. At the slightest sign of anyone approaching, he would stop and go gather the balls. Later, when he began to hit the target, he lost the embarrassment. Everyone at the club found it funny when they saw Jofer training and then heard the explanation from someone's mouth; nobody took it very seriously, but nobody found it entirely ridiculous either. People laughed, but deep down they were rooting for it to work, really.
It so happened that in a game that wasn't worth much, an ugly little draw, at the 40th minute of the second half, with the opponents' marking no longer pressing and everyone content with the draw and wanting to stop playing already, Henrique, the left midfielder -- humble, but still somewhat intimidating to Jofer (he played too well) -- passed him the ball. Go on, try that madness of yours. He took the responsibility upon himself for our introspective midfielder. It would be more believable if Jofer had missed, the first time he tried; there was still plenty of time for him to get his chance to be a hero, nobody gets it right on the first try. But he got it right. Almost like in the dream, Lucas, the goalkeeper, wasn't expecting it; when he saw the shot he laughed, stepped forward to catch the ball he judged would bounce inside the box, but it kept going, further and further, and then Lucas was already running, except he began to think it was going out, and he would just hang from the crossbar and play his part of being near the ball. It ended up that thanks to that goal they finished second in the group of that little tournament, instead of third, and it made no difference at all.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Per Bylund's insight
The firm doesn't exist for the reason Coase gave -- namely, that it is inefficient to operate in a fully open market and that production processes need some bubbles of central planning.
Instead, what happens is that a firm is created because an entrepreneur is doing a new thing (and here I imagine that doing an old thing in a new context also counts as doing a new thing, but I didn't read his book), and for that new thing there is no market, there are no specialized workers offering the services needed, nor other businesses offering the higher-order goods that the entrepreneur wants, so he must do it all by himself.
So the entrepreneur goes and hires workers and buys materials more generic than he wanted, and commands these to build exactly what he wants. It is less efficient than if he could buy the precise services and goods he wanted and combine those to yield the product he envisaged, but it accomplishes the goal.
Later, when that specific market evolves, it's natural that specialized workers and producers of the specific factors begin to appear, and the market gets decentralized.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28My personal experience (as a complete ignorant) of the blocksize debate in 2017
In the beginning of 2017 I didn't know Bitcoin was having a "blocksize debate". I had stopped paying attention to Bitcoin in 2014 after reading Tim Swanson's book on shitcoinery, and was surprised people still even cared about Bitcoin while Ethereum and other fancy things were around.
My introduction to the subject was this interview with Andrew Stone and Andrew Clifford from Bitcoin Unlimited (still don't know who these guys are). I've listened to it and kinda liked the conspiracy theory about "a group of developers trying, against miners and users, to control the whole ecosystem by not allowing blocks to grow" (actually, if you listen to this interview that announced the creation of Blockstream and the sidechains whitepaper it does sound like a government agent bribing all the Core developers into forming a consortium that will turn Bitcoin into an Ethereum-like shitcoin under their control -- but this is just a useless digression).
Some time later I listened to this interview with Jimmy Song and was introduced to two hard forks and conspiracies and New York Agreement and got excited because I didn't care about Bitcoin (I'm ashamed to remember this feeling) and wanted to see things changing, people fighting, Bitcoin burning, for no reason. Oddly, what I grasped from the interview was that Jimmy Song was defending the agreement and expecting everybody to fulfill it.
When the day actually came and "Bitcoin Cash" forked, I looked at it with pity, because it clearly looked like a failure from the beginning, but I still cheered for it a bit, still not knowing anything about the debate besides the fact that blocks were bigger on BCH, which looked like a very reductionist explanation to me.
"Of course it's not just making blocks bigger, that would be too simple, they probably have a very complex plan I'm not apt to understand", I thought.
To my surprise the entire argument was actually just that: bigger blocks bigger blocks. I came to that conclusion by listening to tomwoods.com/1064, a debate in which reasonable arguments faced childish claims. That debate gave me perspective and was a clear, undisputed win from Jameson Lopp against Roger Ver.
Actually, some time before that I had listened to another Tom Woods Show episode thinking it was going to be an episode about Bitcoin, but in fact it was just propaganda about a debate I had almost forgotten. And nothing about Bitcoin, everything about "Bitcoin Cash" and how there were two Bitcoins, one legitimate and the other illegitimate.
So, from the perspective of someone who came to the debate totally fresh and only listened to the big-blocker arguments for a long time, they still don't convince anyone with some common sense (as I would like to think of myself): they just sound like mad dogs, and everything they say goes against themselves.
Fast forward to the present and with much more understanding of the issues in place I started digging some material from 2016-2017 about the debate to try to get more context, and found this ridiculous interview with Mike Hearn. It isn't a waste of time to listen to it if you're not familiar with the debate from that time.
As I should have probably expected from my experience with Epicenter.tv, both interviewers agree with Mike Hearn about his ridiculous claims about how (not his words) we have to subsidize the few thousand current Bitcoin users by preventing fees from increasing, and that there are no trade-offs to doing that -- and even with everybody agreeing they all manage to sound stupid. There's not a single phrase that is defensible in the entire interview, no criticism makes any sense, and it makes me feel bad for the guy, as he sounds so self-assured and obviously right.
After learning about these and other adventures of stupid people with high influence in the Bitcoin world trying to impose their idiocy on others, it feels even more odd and unexpected to find Bitcoin on the right track. Generally in politics the dumbest wins, but apparently not in Bitcoin.
Bitcoin is a miracle.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28IPFS problems: Conceit
IPFS is trying to do many things. The IPFS leaders are revolutionaries who think they're smarter than the rest of the entire industry.
The fact that they first proposed a protocol for peer-to-peer distribution of immutable, content-addressed objects, and then later tried to fix the limitations of that same immutability with their own half-baked solution (IPNS), is one example.
Other examples are their odd appeal to decentralization in a very non-specific way, their excessive flirtation with Ethereum and their never-to-be-finished can-never-work-as-advertised Filecoin project.
They could have focused on just making the infrastructure for distribution of objects through hashes (not saying this would actually be a good idea, but it had some potential) over a peer-to-peer network, but in trying to reinvent the entire internet they screwed everything up.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28nostr - Notes and Other Stuff Transmitted by Relays
The simplest open protocol that is able to create a censorship-resistant global "social" network once and for all.
It doesn't rely on any trusted central server, hence it is resilient; it is based on cryptographic keys and signatures, so it is tamperproof; it does not rely on P2P techniques, therefore it works.
Very short summary of how it works, if you don't plan to read anything else:
Everybody runs a client. It can be a native client, a web client, etc. To publish something, you write a post, sign it with your key and send it to multiple relays (servers hosted by someone else, or yourself). To get updates from other people, you ask multiple relays if they know anything about these other people. Anyone can run a relay. A relay is very simple and dumb. It does nothing besides accepting posts from some people and forwarding to others. Relays don't have to be trusted. Signatures are verified on the client side.
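For illustration only, here is a toy sketch of that flow in Go. This is not the real Nostr event format (NIP-01 specifies a particular serialization and secp256k1 Schnorr signatures); it uses ed25519 from the standard library just to show the shape of sign-locally, publish-anywhere, verify-on-the-client:

```go
package main

import (
	"crypto/ed25519"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Event is a simplified stand-in for a Nostr event.
type Event struct {
	PubKey    string
	CreatedAt int64
	Content   string
	Sig       []byte
}

// serialize produces the bytes that get signed and verified.
func serialize(e Event) []byte {
	h := sha256.Sum256([]byte(fmt.Sprintf("%s:%d:%s", e.PubKey, e.CreatedAt, e.Content)))
	return h[:]
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(nil)

	// the client signs the event locally...
	ev := Event{PubKey: hex.EncodeToString(pub), CreatedAt: 1700000000, Content: "hello nostr"}
	ev.Sig = ed25519.Sign(priv, serialize(ev))

	// ...and sends it to any number of relays, which just store and
	// forward it; any other client can verify the signature on its own,
	// which is why relays never have to be trusted.
	fmt.Println("valid:", ed25519.Verify(pub, serialize(ev), ev.Sig))
}
```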
This is needed because other solutions are broken:
The problem with Twitter
- Twitter has ads;
- Twitter uses bizarre techniques to keep you addicted;
- Twitter doesn't show an actual historical feed from people you follow;
- Twitter bans people;
- Twitter shadowbans people.
- Twitter has a lot of spam.
The problem with Mastodon and similar programs
- User identities are attached to domain names controlled by third-parties;
- Server owners can ban you, just like Twitter; Server owners can also block other servers;
- Migration between servers is an afterthought and can only be accomplished if servers cooperate. It doesn't work in an adversarial environment (all followers are lost);
- There are no clear incentives to run servers, therefore they tend to be run by enthusiasts and people who want to have their name attached to a cool domain. Then, users are subject to the despotism of a single person, which is often worse than that of a big company like Twitter, and they can't migrate out;
- Since servers tend to be run amateurishly, they are often abandoned after a while — which is effectively the same as banning everybody;
- It doesn't make sense to have a ton of servers if updates from every server will have to be painfully pushed (and saved!) to a ton of other servers. This point is exacerbated by the fact that servers tend to exist in huge numbers, therefore more data has to be passed to more places more often;
- For the specific example of video sharing, ActivityPub enthusiasts realized it would be completely impossible to transmit video from server to server the way text notes are, so they decided to keep the video hosted only from the single instance where it was posted to, which is similar to the Nostr approach.
The problem with SSB (Secure Scuttlebutt)
- It doesn't have many problems. I think it's great. In fact, I was going to use it as a basis for this, but
- its protocol is too complicated because it wasn't conceived as an open protocol at all. It was just written in JavaScript, probably in a quick way to solve a specific problem, and grew from that, therefore it has weird and unnecessary quirks like signing a JSON string which must strictly follow the rules of ECMA-262 6th Edition;
- It insists on having a chain of updates from a single user, which feels unnecessary to me and something that adds bloat and rigidity to the thing — each server/user needs to store all the chain of posts to be sure the new one is valid. Why? (Maybe they have a good reason);
- It is not as simple as Nostr, as it was primarily made for P2P syncing, with "pubs" being an afterthought;
- Still, it may be worth considering using SSB instead of this custom protocol, just adapting it to the client-relay server model, because reusing a standard is always better than trying to get people into a new one.
The problem with other solutions that require everybody to run their own server
- They require everybody to run their own server;
- Sometimes people can still be censored in these because domain names can be censored.
How does Nostr work?
- There are two components: clients and relays. Each user runs a client. Anyone can run a relay.
- Every user is identified by a public key. Every post is signed. Every client validates these signatures.
- Clients fetch data from relays of their choice and publish data to other relays of their choice. A relay doesn't talk to another relay, only directly to users.
- For example, to "follow" someone a user just instructs their client to query the relays it knows for posts from that public key.
- On startup, a client queries data from all relays it knows for all users it follows (for example, all updates from the last day), then displays that data to the user chronologically.
- A "post" can contain any kind of structured data, but the most used ones are going to find their way into the standard so all clients and relays can handle them seamlessly.
How does it solve the problems the networks above can't?
- Users getting banned and servers being closed
  - A relay can block a user from publishing anything there, but that has no effect on them as they can still publish to other relays. Since users are identified by a public key, they don't lose their identities and their follower base when they get banned.
  - Instead of requiring users to manually type new relay addresses (although this should also be supported), whenever someone you're following posts a server recommendation, the client should automatically add that to the list of relays it will query.
  - If someone is using a relay to publish their data but wants to migrate to another one, they can publish a server recommendation to that previous relay and go;
  - If someone gets banned from many relays such that they can't get their server recommendations broadcasted, they may still let some close friends know through other means with which relay they are publishing now. Then, these close friends can publish server recommendations to that new server, and slowly, the old follower base of the banned user will begin finding their posts again from the new relay.
  - All of the above is valid too for when a relay ceases its operations.
- Censorship-resistance
  - Each user can publish their updates to any number of relays.
  - A relay can charge a fee (the negotiation of that fee is outside of the protocol for now) from users to publish there, which ensures censorship-resistance (there will always be some Russian server willing to take your money in exchange for serving your posts).
- Spam
  - If spam is a concern for a relay, it can require payment for publication or some other form of authentication, such as an email address or phone, and associate these internally with a pubkey that then gets to publish to that relay — or other anti-spam techniques, like hashcash or captchas. If a relay is being used as a spam vector, it can easily be unlisted by clients, which can continue to fetch updates from other relays.
- Data storage
  - For the network to stay healthy, there is no need for hundreds of active relays. In fact, it can work just fine with just a handful, given the fact that new relays can be created and spread through the network easily in case the existing relays start misbehaving. Therefore, the amount of data storage required, in general, is relatively less than Mastodon or similar software.
  - Or considering a different outcome: one in which there exist hundreds of niche relays run by amateurs, each relaying updates from a small group of users. The architecture scales just as well: data is sent from users to a single server, and from that server directly to the users who will consume that. It doesn't have to be stored by anyone else. In this situation, it is not a big burden for any single server to process updates from others, and having amateur servers is not a problem.
- Video and other heavy content
  - It's easy for a relay to reject large content, or to charge for accepting and hosting large content. When information and incentives are clear, it's easy for the market forces to solve the problem.
- Techniques to trick the user
  - Each client can decide how to best show posts to users, so there is always the option of just consuming what you want in the manner you want — from using an AI to decide the order of the updates you'll see to just reading them in chronological order.
FAQ
- This is very simple. Why hasn't anyone done it before?
I don't know, but I imagine it has to do with the fact that people making social networks are either companies wanting to make money or P2P activists who want to make a thing completely without servers. They both fail to see the specific mix of both worlds that Nostr uses.
- How do I find people to follow?
First, you must know them and get their public key somehow, either by asking or by seeing it referenced somewhere. Once you're inside a Nostr social network you'll be able to see them interacting with other people and then you can also start following and interacting with these others.
- How do I find relays? What happens if I'm not connected to the same relays someone else is?
You won't be able to communicate with that person. But there are hints on events that can be used so that your client software (or you, manually) knows how to connect to the other person's relay and interact with them. There are other ideas on how to solve this too in the future but we can't ever promise perfect reachability, no protocol can.
- Can I know how many people are following me?
No, but you can get some estimates if relays cooperate in an extra-protocol way.
- What incentive is there for people to run relays?
The question is misleading. It assumes that relays are free dumb pipes that exist so that people can move data around through them. If that were the case, then yes, the incentives would not exist. The same could in fact be said of DHT nodes in all other p2p network stacks: what incentive is there for people to run DHT nodes?
- Nostr enables you to move between server relays or use multiple relays but if these relays are just on AWS or Azure what’s the difference?
There are literally thousands of VPS providers scattered all around the globe today, there is not only AWS or Azure. AWS or Azure are exactly the providers used by single centralized service providers that need a lot of scale, and even then not just these two. For smaller relay servers any VPS will do the job very well.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The flaw of "just use paypal/coinbase" arguments
For the millionth time I read somewhere that "custodial bitcoin is not bitcoin" and that "if you're going to use custodial, better use Paypal". No, actually it was "better use Coinbase", but I had heard the "PayPal" version in the past.
There are many reasons why using PayPal is not the same as using a custodial Bitcoin service or wallet that are obvious and not relevant here, such as the fact that you can't have Bitcoin balances on PayPal (or maybe now you can? but you can't send them around); plus all the reasons that are also valid for Coinbase, such as you having to give them all your data, selfies of yourself, your government documents and so on -- but let's ignore these reasons for now.
The most important reason why it isn't the same thing is that when you're using Coinbase you are stuck in Coinbase. Your Coinbase coins cannot be used to pay anyone that isn't in Coinbase. So Coinbase-style custodianship doesn't help Bitcoin. If you want to move out of Coinbase you have to withdraw from Coinbase.
Custodianship on Lightning is of a very different nature. You can pay people from other custodial platforms and people that are hosting their own Lightning nodes and so on.
That kind of custodianship doesn't do any harm to anyone, doesn't fracture the network, doesn't reduce the network effect of Lightning, in fact it increases it.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28idea: Per-paragraph paywalls
Using the lnurl-allowance protocol, a website could, instead of putting a paywall over the entire site, charge a reader for only the paragraphs they read. Of course this requires trust from the reader in the website, but this is normal. The website could just hide the rest of the article until an invoice for the paragraph just read was paid.
This idea came from Colin from the Unhashed Podcast.
Could also work with podcasts and videos.
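A minimal sketch of what the server side could look like, in Go, with the payment check stubbed out (all names here are hypothetical, and the lnurl-allowance negotiation itself is not shown):

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
)

var paragraphs = []string{
	"First paragraph, given away for free.",
	"Second paragraph, unlocked once the first is paid for.",
	"Third paragraph, and so on.",
}

// isPaid is a stub: a real implementation would check that an invoice
// (negotiated, e.g., via lnurl-allowance) was settled for all
// paragraphs before this one.
func isPaid(n int) bool {
	return n == 0 // only the first paragraph is free in this toy
}

func handler(w http.ResponseWriter, r *http.Request) {
	n, _ := strconv.Atoi(r.URL.Query().Get("p"))
	if n < 0 || n >= len(paragraphs) {
		http.NotFound(w, r)
		return
	}
	if !isPaid(n) {
		http.Error(w, "payment required for paragraph "+strconv.Itoa(n), http.StatusPaymentRequired)
		return
	}
	fmt.Fprintln(w, paragraphs[n])
}

func main() {
	http.HandleFunc("/paragraph", handler)
	http.ListenAndServe(":8080", nil)
}
```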
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Replacing the web with something saner
This is a simplification, but let's say that basically there are just 3 kinds of websites:
- Websites with content: text, images, videos;
- Websites that run full apps that do a ton of interactive stuff;
- Websites with some interactive content that uses JavaScript, or "mini-apps";
In a saner world we would have 3 different ways of serving and using these. 1 would be "the web" (and it was for a while, although I'm not claiming here that the past is always better and wanting to get back to the glorious old days).
1 would stay as "the web", just static sites, styled with CSS, no JavaScript whatsoever, but designers can still thrive and make they look pretty. Or it could also be something like Gemini. Maybe the two protocols could coexist.
2 would be downloadable native apps, much easier to write and maintain for developers (considering that multi-platform and cross-compilation is easy today and getting easier), faster, more polished experience for users, more powerful, integrates better with the computer.
(Remember that since no one would be striving to make the same app run both on browsers and natively no one would have any need for Electron or other inefficient bloated solutions, just pure native UI, like the Telegram app, have you seen that? It's fast.)
But 2 is mostly for apps that people use every day, something like Google Docs, email (although email is also broken technology), Netflix, Twitter, Trello and so on, and all those hundreds of niche SaaS that people pay monthly fees to use, each tailored to a different industry (although most of the functions they all implement are the same everywhere). What do we do with dynamic open websites like StackOverflow, for example, where one needs to not only read, but also search and interact in multiple ways? What about that website that asks you a bunch of questions and then discovers the name of the person you're thinking about? What about that mini-app that calculates the hash of your provided content or shrinks your video, or that one that hosts your image without asking any questions?
All these and tons of others would fall into category 3, that of instantly loaded apps that you don't have to install, and yet they run in a sandbox.
The key for making category 3 worth investing time into is coming up with some solid grounds, simple enough that anyone can implement in multiple different ways, but not giving the app too much choices.
Telegram or Discord bots are super powerful platforms that can accommodate most kinds of apps. They can't beat a native app specifically made with one purpose, but they allow anyone to provide instantly usable apps with very low overhead, and since the experience is so simple, intuitive and fast, users tend to like it and sometimes even pay for their services. There could exist a protocol that brings apps like that to the open world of (I won't say "web") domains and the websockets protocol -- with multiple different clients, each making their own decisions on how to display the content sent by the servers that are powering these apps.
Another idea is that of Alan Kay: to design a nice little OS/virtual machine that can load these apps and run them. Kinda like browsers are today, but providing a more well-thought, native-like experience and framework, but still sandboxed. And I add: abstracting away details about design, content disposition and so on.
These 3 kinds of programs could coexist peacefully. 2 are just standalone programs, they can do anything and each will be its own thing. 1 and 3, however, are still similar to browsers of today in the sense that you need clients to interact with servers and show to the user what they are asking. But by simplifying everything and separating the scopes properly these clients would be easy to write, efficient, small, the environment would be open and the internet would be saved.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28SummaDB
This was a hierarchical database server similar to the original Firebase. Records were stored on a LevelDB under different paths, like:

```
/fruits/banana/color: yellow
/fruits/banana/flavor: sweet
```

And they could be queried by path too, using HTTP. A call to `http://hostname:port/fruits/banana`, for example, would return a JSON document like

```json
{ "color": "yellow", "flavor": "sweet" }
```

While a call to `/fruits` would return

```json
{ "banana": { "color": "yellow", "flavor": "sweet" } }
```

`POST`, `PUT` and `PATCH` requests also worked.

In some cases the values would be under a special `"_val"` property to disambiguate them from paths. (I may be missing some other details that I forgot.)

GraphQL was also supported as a query language, so a query like

```graphql
query { fruits { banana { color } } }
```

would return `{"fruits": {"banana": {"color": "yellow"}}}`.
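Here is a minimal sketch of that data model in Go, with an in-memory map standing in for LevelDB (an illustration, not the actual SummaDB code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// store maps full paths to leaf values, mimicking how keys could be
// laid out in LevelDB.
var store = map[string]string{
	"/fruits/banana/color":  "yellow",
	"/fruits/banana/flavor": "sweet",
}

// query gathers every leaf under the given path prefix and nests the
// remaining path segments into a JSON-like tree.
func query(prefix string) map[string]interface{} {
	tree := make(map[string]interface{})
	for path, value := range store {
		if !strings.HasPrefix(path, prefix+"/") {
			continue
		}
		segments := strings.Split(strings.TrimPrefix(path, prefix+"/"), "/")
		node := tree
		for _, seg := range segments[:len(segments)-1] {
			child, ok := node[seg].(map[string]interface{})
			if !ok {
				child = make(map[string]interface{})
				node[seg] = child
			}
			node = child
		}
		node[segments[len(segments)-1]] = value
	}
	return tree
}

func main() {
	out, _ := json.Marshal(query("/fruits/banana"))
	fmt.Println(string(out)) // {"color":"yellow","flavor":"sweet"}

	out, _ = json.Marshal(query("/fruits"))
	fmt.Println(string(out)) // {"banana":{"color":"yellow","flavor":"sweet"}}
}
```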
SummulaDB

SummulaDB was a browser/JavaScript build of SummaDB. It ran on the same Go code compiled with GopherJS, using PouchDB as the storage backend, if I remember correctly.
It had replication between browser and server built-in, and one could replicate just subtrees of the main tree, so you could have stuff like this in the server:
json { "users": { "bob": {}, "alice": {} } }
And then only allow Bob to replicate `/users/bob` and Alice to replicate `/users/alice`. I am sure the required auth stuff was also built in.

There was also a PouchDB plugin to make this process smoother and data access more intuitive (it would hide the `_val` stuff and allow properties to be accessed directly; today I wouldn't waste time working on these hidden magic things).

The computed properties complexity
The next step, which I never managed to get fully working and caused me to give it up because of the complexity, was the ability to automatically and dynamically compute materialized properties based on data in the tree.
The idea was partly inspired by CouchDB computed views and how limited they were. I wanted a thing that would be super powerful, like, given

```json
{
  "matches": {
    "1": { "team1": "A", "team2": "B", "score": "2x1", "date": "2020-01-02" },
    "2": { "team1": "D", "team2": "C", "score": "3x2", "date": "2020-01-07" }
  }
}
```

one should be able to add a computed property at `/matches/standings` that computed the scores of all teams after all matches, for example.

I tried to complete this in multiple ways, but they were all adding much more complexity than I could handle. Maybe it would have worked better in a more flexible, powerful and functional language, or if I had more time and patience, or more people.
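For illustration, this is the kind of thing that computed property would have had to produce, written as a plain function over the matches above (hypothetical, not SummaDB code):

```go
package main

import "fmt"

type Match struct {
	Team1, Team2   string
	Score1, Score2 int
}

// standings is what a computed property at /matches/standings could
// have materialized: 3 points for a win, 1 for a draw.
func standings(matches []Match) map[string]int {
	points := make(map[string]int)
	for _, m := range matches {
		switch {
		case m.Score1 > m.Score2:
			points[m.Team1] += 3
		case m.Score2 > m.Score1:
			points[m.Team2] += 3
		default:
			points[m.Team1]++
			points[m.Team2]++
		}
	}
	return points
}

func main() {
	matches := []Match{
		{"A", "B", 2, 1}, // 2x1
		{"D", "C", 3, 2}, // 3x2
	}
	fmt.Println(standings(matches)) // map[A:3 D:3]
}
```

The hard part, of course, was not computing this once, but keeping it updated automatically and efficiently as new matches were written to the tree.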
Screenshots
This is just one very simple unfinished admin frontend client view of the hierarchical dataset.
- https://github.com/fiatjaf/summadb
- https://github.com/fiatjaf/summuladb
- https://github.com/fiatjaf/pouch-summa
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28"House" dos economistas e o Estado
Falta um gênio pra produzir um seriado tipo House só que com economistas. O House do seriado seria um austríaco é o "everybody lies" seria uma premissa segundo a qual o Estado é sempre a causa de todos os problemas.
Situações bem cabeludas poderiam ser apresentadas de maneira que parecesse muito que a causa era ganância ou o mau-caratismo dos agentes, mas na investigação quase sempre se descobriria que a causa era o Estado.
Parece ridículo, mas se eu descrevesse House assim aqui também pareceria. A execução é que importa.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28notes on "Economic Action Beyond the Extent of the Market", Per Bylund
Source: https://www.youtube.com/watch?v=7St6pCipCB0
Markets work by dividing labour, but that's not as easy as it seems in Adam Smith's example of a pin factory, because
- a pin factory is not a market, so there is some guidance and orientation, some sort of central planning, inside there that a market doesn't have;
- it is not clear how exactly the production process will be divided, it is not obvious as in "you cut the thread, I plug the head".
Dividing the labour may produce efficiency, but it also makes each independent worker in the process more fragile, as they become dependent on the others.
This is partially solved by having a lot of different workers, so you do not depend on only one.
If you have many, however, they must agree on where one part of the production process starts and where it ends, otherwise one's outputs will not necessarily coincide with other's inputs, and everything is more-or-less broken.
That means some level of standardization is needed. And indeed the market has constant incentives to standardization.
The statist economist discourse about standardization is that only when the government comes with a law that creates some sort of standardization then economic development can flourish, but in fact the market creates standardization all the time. Some examples of standardization include:
- programming languages, operating systems, internet protocols, CPU architectures;
- plates, forks, knifes, glasses, tables, chairs, beds, mattresses, bathrooms;
- building with concrete, brick and mortar;
- money;
- musical instruments;
- light bulbs;
- CD, DVD, VHS formats and others alike;
- services that go into every production process, like lunch services, restaurants, bakeries, cleaning services, security services, secretaries, attendants, porters;
- multipurpose steel bars;
- practically any tool that normal people use and require a little experience to get going, like a drilling machine or a sanding machine; etc.
Of course it is not that you find standardization in all places. Specially when the market is smaller or new, standardization may have not arrived.
There remains the truth, however, that division of labour has the potential of doing good.
More than that: every time there are more than one worker doing the same job in the same place of a division of labour chain, there's incentive to create a new subdivision of labour.
From the fact that there is more than one person doing the same job as someone else in our society, we must conclude that someone still has to come up with an insight about an efficient way to divide the labour between these workers (and probably actually implement it) -- something that hasn't happened yet for all kinds of jobs.
But to come up with division of labour outside of a factory, some market actors must come up with a way of dividing the labour, actually, determining where will one labour stop and other start (and that almost always needs some adjustments and in fact extra labour to hit the tips), and also these actors must bear the uncertainty and fragility that division of labour brings when there are not a lot of different workers and standardization and all that.
In fact, when an entrepreneur comes to the market with a radical new service, a service that does not fit in the current standard of division of labour, he must explain to his potential buyers what the service is, how the buyer can benefit from it, and what he will have to do to adapt his current production process to bear with that new service. That has happened not long ago with
- services that take food orders from the internet and relay these to the restaurants;
- hostels for cheap accommodation for young travellers;
- Uber, Airbnb, services that take orders and bring homemade food from homes to consumers and similars;
- all kinds of software-as-a-service;
- electronic monitoring service for power generators;
- mining planning and mining planning software; and many other industry-specific services.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Flowi.es
At the time I thought Workflowy had the ideal UI for everything. I wanted to implement my custom app maker on it, but ended up doing this: a platform for enhancing Workflowy with extra features:
- An email reminder based on dates input in items
- A website generator, similar to Websites For Trello, also based on Classless Templates
Also, I didn't remember this was also based on CouchDB and had some couchapp functionalities.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28litepub
A Go library that abstracts all the burdensome ActivityPub things and provides just the right amount of helpers necessary to integrate an existing website into the "fediverse" (what an odious name). Made for the gravity integration.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Ripple and the problem of the decentralized commit
This is about Ryan Fugger's Ripple.
The summary is: unless everybody is good and well-connected at all times a transaction can always be left in a half-committed state, which creates confusion, erodes trust and benefits no one.
If you're unconvinced consider the following protocol flow:
- A finds a route (A--B--C--D) between her and D somehow;
- A "prepares" a payment to B, tells B to do the same with C and so on (to prepare means to give B a conditional IOU that will be valid as long as the full payment completes);
- When the chain of prepared messages reaches D, D somehow "commits" the payment.
- After the commit, A now really does owe B and so on, and D really knows it has been effectively paid by A (in the form of debt from C) so it can ship goods to A.
The most obvious (but wrong) way of structuring this would be for the entire payment chain to be dependent on the reveal of some secret. For example, the "prepare" messages could contain something like "I will pay you as long as you know `p` such that `sha256(p) == h`".

The payment flow then starts with D presenting A with an invoice that contains `h`, so D knows `p`, but no one else does. A can then send the "prepare" message to B, and B does the same until it reaches D.

When it reaches D, D can be sure that C will pay him, because he knows `p` such that `sha256(p) == h`. He then reveals `p` to C, C now reveals it to B and B to A. When A gets it, it has proof that D has received his payment, therefore it is happy to settle with B later and can prove to an external arbitrator that it has indeed paid D in case D doesn't deliver his products.
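To make the naïve flow concrete before picking it apart, here is a toy sketch of the hash-locked condition carried by the "prepare" messages (names and amounts are illustrative, and none of the IOU bookkeeping is shown):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// PreparedPayment encodes the conditional IOU: "I will pay you as
// long as you know p such that sha256(p) == h".
type PreparedPayment struct {
	From, To string
	Amount   int
	H        string // hex-encoded sha256 of the secret p
}

// commit succeeds only if the revealed preimage matches the hash the
// whole chain of "prepare" messages was locked to.
func (pp PreparedPayment) commit(p string) bool {
	sum := sha256.Sum256([]byte(p))
	return hex.EncodeToString(sum[:]) == pp.H
}

func main() {
	p := "some-secret-chosen-by-D" // D picks p and puts h on the invoice
	h := sha256.Sum256([]byte(p))

	// the same condition propagates A->B, B->C, C->D
	prep := PreparedPayment{From: "C", To: "D", Amount: 100, H: hex.EncodeToString(h[:])}

	fmt.Println(prep.commit("wrong-guess"))             // false
	fmt.Println(prep.commit("some-secret-chosen-by-D")) // true
}
```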
Issues with the naïve flow above

What if D never reveals `p` to C?

Then no one knows what happened. And then 10 years later he arrives at C's house (remember they are friends or have a trust relationship somehow), demands his payment, and shows `p` to her on a piece of paper. Or worse: he goes directly to court and shows C's message that says "I will pay you as long as you know `p` such that `sha256(p) == h`" (but with an actual number instead of "h"), along with the corresponding `p`. Now the judge has to decide in favor of D.

Now C was supposed to do the same with B, but C is not playing this game anymore, has lost all contact with B after they did their final settlement many years ago; no one was expecting this.
This clearly can't work. There must be a timeout for these payments.
What if we have a timeout?
Now what if we say the payment expires in one hour? Then D cannot hold the payment hostage and reveal `p` after 10 years. It must either reveal it before the timeout or the conditional IOU will be void. Solves everything!

Except no, now we reach the darkest void of the protocol, the flaw that sucks its life into the abyss: subjectivity and ambiguity.
The big issue is that we don't have an independent judge to assert, for example, that D has indeed "revealed" `p` to C in time. C must acknowledge that voluntarily. C could do it using messages over the internet, but these messages are not reliable. C is not reliable. Clocks are not synchronized. Also, if we now require C to confirm it has received `p` from D, then the "prepare" message means nothing, as for D now just knowing `p` is not enough to claim before an arbitrator that C owes her -- because, again, D also must prove it has shown `p` to C before the timeout, therefore it needs a new signed acknowledgement from C, or from some other party.

Let's see a few examples.
Subjectivity and perverse incentives
D could send `p` to C, and C acknowledges it, but then when C goes to B and sends it, B will not acknowledge it, claiming it's past the time. Now C loses money.

Maybe C should refuse to acknowledge it received anything from D before first checking with B? But B will have to check with A too! And that subverts the entire flow of the thing. And now A has a "proof of payment" (knowledge of `p`) without even having to acknowledge anything! In this case knowing `p` or not becomes meaningless, as everybody knows `p` without acknowledging it to anyone else.

But even if A is honest and sends an "acknowledge" message to B, now B can just sit quiet and enjoy the credit it has just earned from A without ever acknowledging anything to C. There are perverse incentives at every step.
Ambiguity
But isn't this a protocol based on trust?, you ask, isn't C trusting that B will behave honestly already? Therefore if B is dishonest C just has to acknowledge his loss and break his chain of trust with B.
No, because C will not know what happened. B can say "I could have sent you an acknowledgement, but I was waiting for A, and A didn't send anything", and C won't ever know if that was true. Or B could say "what? You didn't send me `p` at all", and that could be true. B could have been offline when A sent it, there could have been a broken connection or many other things, and B continues: "I was waiting for you to present me with `p`, but you didn't, therefore the payment timed out, you can't come here with `p` now, because A won't accept it from me anymore". That could be true or could be false, who knows?

Therefore it is impossible for trust relationships and reputations to be maintained in such a system without "good fences".[^ln-solution][^ln-issue]
[^ln-solution]: The Lightning Network has a solution for the problem of the decentralized commit.
[^ln-issue]: Ironically this same ambiguity problem is being faced by the Lightning Network community when trying to create a reputation/payment system to prevent routing abuses. It seems simple when you first think about it: "let each node manage its own trust", but in fact it is somewhat impossible.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Deodorant pill
In some episode or other of Aleixo FM, Bruno Aleixo says that drunks always have the best ideas, and then tells an idea he had while drunk: a pill that works as a deodorant. Instead of applying spray or roll-on deodorant, a person can just take the pill and be done with it; it's much more practical, and in cold weather one can get dressed faster, without having to stand there applying things with one's torso completely bare. When Busto asks him about the possibility of something like that being manufactured, he says he doesn't know, he's not a scientist, he only has the ideas.
This silly passage from a comedy show hides a truth about the scientistic doctrine that permeates society. The doctrine according to which technological innovations, and innovations of every kind, come from science, and therefore the State must take money from working people and give it to scientists. At this point nobody knows what a scientist is anymore; all the concreteness is gone, only the name remains: "scientist". So they go looking for this scientist: he's a guy who graduated from a university and is doing a master's degree. There you go, just give this guy money and everything will be fine.
Setting aside the problem of the disconnect between reality and the thesis, there is also, of course, the problem of the thesis itself: it makes no sense for a scientist to go around looking for ways to realize an idea -- one that nobody knows is even possible or desirable -- that he or someone else had; quite the contrary (but I won't say here what the scientist was supposed to do instead, because that would be contradictory, and I don't think scientists should even exist).
What I really wanted to say was this: the whole scientific apparatus of our society, all the departments, universities, budgets, grants and journals, boils down to a bunch of people trying to figure out how to make a deodorant pill.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Mises' interest rate theory
Inspired by Bob Murphy's thesis against the "pure time preference theory" (see also this series of podcasts) -- or blatantly copying it -- here are some thoughts on Mises' most wrong take:
- Mises asserts that the market rate of interest is not the originary rate of interest, because the market rate involves entrepreneurial decisions, risk, uncertainty etc. No one in the market lends money with a 100% guarantee that it will be paid back, and so on. But if that is true, where can we see that originary interest? We're supposed to account for its existence and be sure that it is logically there in every trade between present and future, because it's a category of action. But then it seems odd to me that it has anything to do with the actual interest.
- Mises criticizes the notion of "profit" from classical economists because it mashed together gains deriving from speculation, risk, other stuff and originary interest -- but that's only because he assumes originary interest as a given (because it's a category of action and so on). If he didn't he could have just not cited originary interest in the list of things that give rise to "profit" and all would be fine.
- Mixing the two points above, it seems very odd to think that we should look for interest as a component of profit. It seems indeed to be a very classical-economist take. It would be still compatible with Mises's worldview -- indeed more compatible -- if we looked for profit as a component of interest: when someone lends 100 and is paid 110, that is profit. Plain and simple. Why he did that and why the other person paid isn't for the economist to analyse, nor to dissect the extra 10 into 9 interest, 1 risk remuneration or anything like that. If the borrower hadn't paid, would it be a 100 loss or a 109 loss?
- In other moments, Mises talks about the originary rate of interest being the same for all things: apples and bicycles and anything else. But wasn't each person supposed to have their own valuation of each good -- including goods in the present and in the future? Is Mises going to say that it's impossible for someone to value an orange in the future more than a bicycle in the future, in comparison with these same goods in the present? (The very "more" in the previous sentence shows us that Mises was incurring in cardinal value calculations when coming up with this theory -- and I hadn't noticed it until after I finished typing the phrase.) In other words: what if someone prefers orange, bicycle, bicycle in the future, orange in the future? That doesn't seem to fit. What is the rate of interest?
- Also, on the point above, what if someone has different rates of interest for goods in different timeframes? For example, someone may prefer a bicycle now a little more than a bicycle tomorrow, but very, very much more than a bicycle in two days. That also breaks the notion of "originary interest" as a universal rate.
- Now maybe I misunderstood everything; maybe Mises was talking about originary interest as a rate defined by the market. And he clearly says that: if the rate of interest is bigger in some market, entrepreneurs will invest capital in that one until it equalizes with the rates in other markets. But all that fits better with the plain notion of profit than with this poorly-crafted notion of originary interest. If you're up to defining and (Mises forbid?) measuring the neutral rate of interest, you'll have to arbitrarily choose some businesses to be part of the "market" while excluding others.
- By the way, wasn't originary interest a category of action? How can a category of action be defined and ultimately fixed by entrepreneurial action in a market?
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Zettelkasten
https://writingcooperative.com/zettelkasten-how-one-german-scholar-was-so-freakishly-productive-997e4e0ca125 (a somewhat stupid article, but useful).
This incredible technique of saving notes without categories, without folders, without a predefined hierarchy, just making references from one note to another and supposedly letting an order (or a heterarchy, as they said) emerge from the chaos, seems to be what was missing for me to manage to write down my thoughts and ideas decently. We'll see.
Oh, and I'm going to use this `neuron` thing, which also generates websites from the notes? I think it's going to be good.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Thafne won the 2008 Soletrando.
The words Thafne had to spell: "ocioso", "hermético", "glossário", "argênteo", "morfossintaxe", "infra-hepático", "hagiológio". Meanwhile, Eder got: "intramuscular", "destilação", "inabitável", "subcutâneo", "homogeneidade", "predecessor", "displicência", "subconsciência", "psicroestesia" (this according to Folha's website, which is certainly missing some of Thafne's words). Seriously, "argênteo"? It's not wrong to say that Globo tried to promote the poor boy from a public school in the sertão against the little rich girl from Curitiba.
The most spectacular thing is that it backfired and the whole of Brazil rooted for Thafne, which can be verified with a simple Google search. Here are some examples:
- "O problema de Thafne" ("Thafne's problem") has comments trying to blame the government of the State of Minas Gerais for Eder's forced victory.
- this video, showing the program's mistakes and Thafne's triumphant, though partial, victory, carries the brilliant description "globo de puleira quis complicar a vida da menina!!!!!!!!!!!!!!!!!!!!!!" (roughly: "two-bit Globo tried to make the girl's life difficult!!!").
- this video, with the same content, but titled "Thafne versus Luciano Huck, o confronto do século" ("Thafne versus Luciano Huck, the showdown of the century"), also has several comments from outspoken Thafne supporters:
- "Wow, this is just dumb, because the doctor said it twice and Luciano didn't pay attention, so Thafne snapped at Luciano twice... Next time pay attention to the pronunciation, Luciano"
- "he didn't mispronounce it because he's dumb, it was done to manipulate the result"
- "Gabriel o Bostador went all quiet. What an asshole"
- "Too bad she lost :("
- "true... she's the one who won, the other guy just got the title :S"
- "The girl wiped the floor with that guy, who besides being an idiot is DUMB."
- and many, many others.
- Globo Erra e Luciano Huck dá Vexame ("Globo gets it wrong and Luciano Huck embarrasses himself"), a brief article describing some of the ways in which Eder was favored.
- this Orkut community, just the largest among several that were created.
The movement in support of Thafne is one of the few examples of the whole nation uniting behind a cause.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28On the state of programs and browsers
There are basically (not exhaustively) two kinds of programs one can run on a computer nowadays:
1.1. A program that is installed, permanent, has direct access to the Operating System, can draw whatever it wants, modify files, interact with other programs and so on;
1.2. A program that is transient, fetched from someone else's server at run time, interpreted, rendered and executed by another program that bridges the access of that transient program to the OS and other things.
Meanwhile, web browsers have basically (not exhaustively) two use cases:
2.1. Display text, pictures, videos hosted on someone else's computer;
2.2. Execute incredibly complex programs that are fetched at run time, executed and so on -- you get it, it's the same as 1.2.
These two use cases for browsers are at big odds with one another. While stretching itself to become more and more a platform for programs that can do basically anything (in the 1.1 sense), the browser is still restricted to being a 1.2 platform. At the same time, websites that were supposed to be 2.1 sometimes get confused and start acting as if they were 2.2 -- and other confusing, mixed-up stuff.
I could go on for hours with philosophical inquiries into the nature of browsers, how rewriting everything in JavaScript is not healthy, or where everything went wrong, but I think other people have done this already.
One thing that bothers me a lot, though, is that computers can do a lot of things, and with the internet and the current state of technology it's fairly easy to implement tools that would help in many aspects of human existence and provide high-quality, useful programs: with the help of a server to coordinate access, store data, authenticate users and so on, many things are possible. However, due to the nature of UI in the browser, it's very hard to get any useful tool to users.
Writing a UI, even the most basic UI imaginable (some text input boxes and some buttons, or a table), can take a long time, always more than the time necessary to code the actual core features of whatever program is being developed -- and that is assuming that the person capable of writing interesting programs that do the functionality in the backend is also capable of dealing with JavaScript and the giant amount of frameworks, transpilers, styling stuff, CSS, the fact that all this is built on top of HTML, and so on.
This is not good.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28A big Ethereum problem that is fixed by Drivechain
While reading the following paragraphs, assume Drivechain itself will be a "smart contract platform", like Ethereum. And that it won't be used to launch an Ethereum blockchain copy, but instead each different Ethereum contract could be turned into a different sidechain under BIP300 rules.
A big Ethereum problem
Anyone can publish any "contract" to Ethereum. Often people will come up with somewhat interesting ideas and publish them. Since they want money they will add an unnecessary token and use that to bring revenue to themselves, gamify the usage of their contract somehow, and keep some control over the supposedly open protocol they've created by keeping a majority of the tokens. They will use the profits on marketing and branding, have a visual identity, a central website and a forum with support personnel and so on: their somewhat interesting idea has become a full-fledged company.
If they have success then another company will appear in the space and copy the idea, launch it using exactly the same strategy with a tweak, then try to capture the customers of the first company and new people. And then another, and another, and another. Very often these contracts require some network effect to work, i.e., they require people to be using them so that others will use them. The fact that the market is now split into multiple companies offering roughly the same product hurts that, such that none of these protocols ever get enough usage to become really useful in the way they were first conceived. At this point it doesn't matter though, they get some usage, and they use that in their marketing material. It becomes a race to pump the value of the tokens and the current usage is just another point used for that purpose. The company will even start giving out money to attract new users and make other weird moves that have no relationship with the initial somewhat interesting idea.
Once in a while it happens that the first implementer of these things is not a company seeking profits, but some altruistic developer or company that believes in Ethereum and wants to see it grow -- or, more likely, someone financed by the Ethereum Foundation, which allegedly doesn't like these token schemes and would prefer everybody to use the token they issued first, the ETH --, but that's a fruitless enterprise because someone else will copy that idea anyway and turn it into a company as described above.
How Drivechain fixes it
In the Drivechain world, if someone had an idea, they would -- as it happens all the time with Bitcoin things -- publish it in a public forum. Other members of the community would evaluate that idea, add or remove things, all interested parties would contribute to make it the best possible incarnation of that idea. Once the design was settled, someone would volunteer to start writing the code to turn that idea into a sidechain. Maybe some company would fund those efforts and then more people would join. It's not a perfect process and one that often involves altruism, but Bitcoin inspires people to do these things.
Slowly, the thing would get built, tested, activated as a sidechain on testnet, tested more, and at this point luckily the entire community of interested Bitcoin users and miners would have grown to like that idea and see its benefits. It could then be proposed to be activated according to BIP300 rules.
Once it was activated, the entire pool of interested users would join it. And it would be impossible for someone else to create a copy of it, because everybody would instantly notice it was a copy. There would be no token, no one profiting directly from the operations of that "smart contract". And everybody would be incentivized to join and tell others to join that same sidechain, since the network effect was already the biggest there, knowing that more network effect would only be good for everybody involved, and there would be no competing marketing and free token giveaways from competing entities.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Democracy as a failed open-network protocol
In the context of protocols for peer-to-peer open computer networks -- those in which new actors can freely enter and immediately start participating in the protocol -- without any central entity, especially without any central human mind judging things from the top, it's common for decisions about the protocol to be made taking into consideration all the possible ways a rogue peer can disrupt the entire network, abuse it, or make the experience terrible for others. The protocol design must account for all incentives in play and how they will affect each participant, always keeping in mind that each participant may be acting in a purely egoistical, self-interested manner, not caring at all about the health of the network (even though most participants won't be like that). So a protocol, to be successful, must have incentives aligned such that self-interested actors cannot profit by hurting others and will gain most by cooperating (whatever that means in the envisaged context), or there must be a way for other peers to detect attacks and other kinds of harm or attempted harm and neutralize them.
Since computers are very fast, protocols can be designed to be executed many times per day by peers involved, and since the internet is a very open place to which people of various natures are connected, many open-network protocols with varied goals have been tried in large scale and most of them failed and were shut down (or kept existing, but offering a bad experience and in a much more limited scope than they were expected to be). Often the failure of a protocol leads to knowledge about its shortcomings being more-or-less widespread and agreed upon, and these lead to the development of a better protocol the next time something with similar goals is tried.
Ideally democracies are supposed to be an open-entry network in the same sense as these computer networks, and although that is a noble goal, it's one full of shortcomings. Democracies are supposed to be the governing protocol of States that have the power to do basically anything with the lives of millions of citizens.
One simple inference we may take from the history of computer peer-to-peer protocols is that the ones that work better are those that are simple and small in scope (Bitcoin, for example, is very simple; BitTorrent is also very simple and very limited in what it tries to do and the number of participants that get involved in each run of the protocol).
Democracies, as we said above, are the opposite of that. Besides being in a very hard position to achieve success as an open protocol, democracies also suffer from the fact that they take a long time to run, so it's hard to see where it is failing every time.
The fundamental incentives of democracy, i.e. the rules of the protocol, posed by the separation of powers and checks-and-balances, are basically the same in every place and in every epoch since the 13th century, and even today most people who dedicate their lives to the subject still don't see how they're completely flawed.
The system of checks and balances was conceived from the armchairs of a couple of political theorists who had never done anything like that in their lives, who had no experience dealing with very adversarial environments like the internet -- and who probably couldn't even imagine that the future users of their network would be creatures completely different from themselves and their fellow philosophers and aristocrats, who all shared the same worldview (and how fast that future would come!).
Also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28A response to Achim Warner's "Drivechain brings politics to miners" article
I mean this article: https://achimwarner.medium.com/thoughts-on-drivechain-i-miners-can-do-things-about-which-we-will-argue-whether-it-is-actually-a5c3c022dbd2
There are basically two claims here:
1. Some corporate interests might want to secure sidechains for themselves and thus they will bribe miners to have these activated
First, it's hard to imagine why they would want such a thing. Are they going to make a proprietary KYC chain only for their users? They could do that in a corporate way, or with a federation, like Facebook tried to do, and that would provide more value to their users than a cumbersome pseudo-decentralized system in which they don't even have the power to issue currency. Also, if Facebook couldn't get away with their federated shitcoin because the government was mad, who says the government won't be mad at a sidechain? And finally, why would Facebook want to give custody of their proprietary closed-garden Bitcoin-backed ecosystem coins to a random, open and always-changing set of miners?
But suppose they do succeed in making their sidechain, and it is so popular that it pays miner fees and people love it. Well, then why not? Let them have it. It's not going to hurt anyone more than a proprietary shitcoin would anyway. If Facebook really wants a closed ecosystem backed by Bitcoin, that probably means we are winning big.
2. Miners will be required to vote on the validity of debatable things
He cites the examples of a PoS sidechain, an assassination market, a sidechain full of Nazis, a sidechain deemed illegal by the US government and so on.
There is a simple solution to all of this: just kill these sidechains. Either miners can take the money from these to themselves, or they can just refuse to engage and freeze the coins there forever, or they can even give the coins to governments, if they want. It is an entirely good thing that evil sidechains or sidechains that use horrible technology that doesn't even let us know who owns each coin get annihilated. And it was the responsibility of people who put money in there to evaluate beforehand and know that PoS is not deterministic, for example.
About government censoring and wanting to steal money, or criminals using sidechains, I think the argument is very weak because these same things can happen today and may even be happening already: i.e., governments ordering mining pools to not mine such and such transactions from such and such people, or forcing them to reorg to steal money from criminals and whatnot. All this is expected to happen in normal Bitcoin. But both in normal Bitcoin and in Drivechain decentralization fixes that problem by making it so governments cannot catch all miners required to control the chain like that -- and in fact fixing that problem is the only reason we need decentralization.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28On HTLCs and arbiters
This is another attempt at conveying the same information that should be in Lightning and its fake HTLCs. It assumes you know everything about Lightning and will just highlight one point. This is also valid for PTLCs.
The protocol says HTLCs are trimmed (i.e., not actually added to the commitment transaction) when the cost of redeeming them in fees would be greater than their actual value.
This is often dismissed as an unimportant fact (people will say "it's trusted for small payments, no big deal"), but I think it is indeed very important, for 3 reasons:
- Lightning absolutely relies on HTLCs actually existing because the payment proof requires them. The entire security of each payment comes from the fact that the payer has a preimage that comes from the payee. Without that, the state of the payment becomes an unsolvable mystery. The inexistence of an HTLC breaks the atomicity between the payment going through and the payer receiving a proof.
- Bitcoin fees are expected to grow with time (arguably the reason Lightning exists in the first place).
- MPP makes payment sizes shrink, therefore more and more Lightning payments are to be trimmed. As I write this, the mempool is clear and still payments smaller than about 5000sat are being trimmed. Two weeks ago the limit was at 18000sat, which is already below the minimum most MPP splitting algorithms will allow.
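To make the trimming rule concrete, here is a minimal sketch, assuming a flat hypothetical redemption fee instead of the real feerate-times-weight calculation (all the numbers are made up):

```python
# Sketch of the trimming decision, assuming a flat hypothetical
# redemption fee instead of the real feerate-based computation.

def is_trimmed(htlc_amount_sat: int, redemption_fee_sat: int,
               dust_limit_sat: int = 546) -> bool:
    # An HTLC only gets its own output in the commitment transaction
    # if, after paying the fee to redeem it onchain, more than the
    # dust limit remains; otherwise its amount is just added to the
    # commitment transaction fees and the HTLC exists only on paper.
    return htlc_amount_sat - redemption_fee_sat < dust_limit_sat

print(is_trimmed(4000, redemption_fee_sat=4500))    # True: fake HTLC
print(is_trimmed(20000, redemption_fee_sat=4500))   # False: real HTLC
```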
Therefore I think it is important that we come up with a different way of ensuring payment proofs are being passed around in the case HTLCs are trimmed.
Channel closures
Worse than not having HTLCs that can be redeemed is the fact that in the current Lightning implementations channels will be closed by the peer once an HTLC timeout is reached, either to fulfill an HTLC for which that peer has a preimage or to redeem back the expired HTLCs the other party hasn't fulfilled.
To everybody's surprise, nodes will do this even when the HTLCs in question were trimmed and therefore cannot be redeemed at all. It's very important that nodes stop doing that, because it makes no economic sense at all.
However, that is not so simple, because once you decide you're not going to close the channel, what is the next step? Do you wait until the other peer tries to fulfill an expired HTLC and tell them you won't agree and that you must cancel that instead? That could work sometimes if they're honest (and they have no incentive to not be, in this case). What if they say they tried to fulfill it before but you were offline? Now you're confused, you don't know if you were offline or they were offline, or if they are trying to trick you. Then unsolvable issues start to emerge.
Arbiters
One simple idea is to use trusted arbiters for all trimmed HTLC issues.
This idea solves both the protocol issue of getting the preimage to the payer once it is released by the payee -- and what to do with the channels once a trimmed HTLC expires.
A simple design would be to have each node hardcode a set of trusted other nodes that can serve as arbiters. Once a channel is opened between two nodes they choose one node from both lists to serve as their mutual arbiter for that channel.
Then whenever one node tries to fulfill an HTLC but the other peer is unresponsive, they can send the preimage to the arbiter instead. The arbiter will then try to contact the unresponsive peer. If it succeeds, then done, the HTLC was fulfilled offchain. If it fails then it can keep trying until the HTLC timeout. And then if the other node comes back later they can eat the loss. The arbiter will ensure they know they are the ones who must eat the loss in this case. If they don't agree to eat the loss, the first peer may then close the channel and blacklist the other peer. If the other peer believes that both the first peer and the arbiter are dishonest they can remove that arbiter from their list of trusted arbiters.
The same happens in the opposite case: if a peer doesn't get a preimage they can notify the arbiter that they haven't received anything. The arbiter may try to ask the other peer for the preimage and, if that fails, settle the dispute in favor of the first peer, which can then proceed to fail the HTLC it has with someone else on that route.
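A minimal sketch of what this arbiter flow could look like, in Python, with all names, methods and timings being hypothetical (this is a thought experiment, not a spec):

```python
# Hypothetical arbiter for trimmed-HTLC disputes: it relays preimages
# to an unresponsive peer and records who is to blame if that fails.
import time

class Peer:
    def __init__(self, online=True):
        self.online = online
        self.inbox = []

    def deliver(self, payment_hash, preimage):
        # returns True only if the peer was reachable
        if self.online:
            self.inbox.append((payment_hash, preimage))
        return self.online

class Arbiter:
    def __init__(self):
        self.blame = {}  # payment_hash -> peer that must eat the loss

    def relay_preimage(self, payment_hash, preimage, recipient,
                       htlc_expiry, retry_seconds=60):
        while time.time() < htlc_expiry:
            if recipient.deliver(payment_hash, preimage):
                return "fulfilled offchain"
            time.sleep(retry_seconds)  # keep trying until the timeout
        # the recipient never showed up before the timeout: when it
        # comes back, the arbiter tells it that it must eat the loss
        self.blame[payment_hash] = recipient
        return "expired; recipient is to blame"
```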
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28neuron.vim
I started using this neuron thing to create and update this same zettelkasten, but the existing vim plugin had too many problems, so I forked it and ended up changing almost everything.
Since the upstream repository was somewhat abandoned, most users and people who were trying to contribute upstream migrated to my fork too.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Bolo
It seems that from 1987 to around 2000 there was a big community of people who played this game called "Bolo". It was a game in which people controlled a tank and killed others while trying to capture bases in team matches. Always 2 teams, from 2 to 16 total players, games could last from 10 minutes to 12 hours. I'm still trying to understand all this.
The game looks silly from some videos you can find today, but apparently it was very deep in strategy, because people developed strategy guides and wrote extensively about it, and Netscape even supported `bolo:` URLs out of the box.

The two most important elements on the map are pillboxes and bases. Pillboxes are originally neutral, meaning that they shoot at every tank that happens to get in its range. They shoot fast and with deadly accuracy. You can shoot the pillbox with your tank, and you can see how damaged it is by looking at it. Once the pillbox is subdued, you may run over it, which will pick it up. You may place the pillbox where you want to put it (where it is clear), if you've enough trees to build it back up. Trees are harvested by sending your man outside your tank to forest the trees. Your man (also called a builder) can also lay mines, build roads, and build walls. Once you have placed a pillbox, it will not shoot at you, but only your enemies. Therefore, pillboxes are often used to protect your bases.
That quote was taken from this "augmented FAQ" written by some user. Apparently there were many FAQs for this game. A FAQ is, after all, just a simple, clear and direct-to-the-point way of writing about anything, previously known as a summa[^summa-k]; it doesn't have to be related to any actually frequently asked question.
More unexpected Bolo writings include an etiquette guide, an anthropology study and some wonderings on the reverse pill war tactic.
[^summa-k]: It's not the same thing, but I couldn't help but notice the similarity.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Module Linker
A browser extension that reads source code on GitHub and tries to find links to imported dependencies so you can click on them and navigate through either GitHub or package repositories or base language documentation. Works for many languages at different levels of completeness.
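As a hypothetical illustration of the kind of mapping it performs (the real extension handles many languages and link targets; this Python-import example is invented):

```python
# Invented example: turn a Python import seen in a source file into a
# link target, similar in spirit to what Module Linker does on GitHub.
import re

def link_for_python_import(line: str):
    m = re.match(r"\s*import\s+([A-Za-z_][\w.]*)", line)
    if not m:
        return None
    package = m.group(1).split(".")[0]  # top-level package name
    return f"https://pypi.org/project/{package}/"

print(link_for_python_import("import requests"))
# -> https://pypi.org/project/requests/
```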
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Blender
The fragility of human communication becomes clear when someone turns on the blender.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28An argument according to which fractional-reserve banking is merely theft and nothing else
Fractional-reserve banking isn't anything but a transfer of money from the people at large to bankers.
It has been argued that fractional-reserve banking serves a purpose in making new funds available, out of no one's pocket, for lending, thus directing resources to productive borrowers. This financing method is preferable to the more conservative way of borrowing funds directly from a saver and then using that to lend to others, because it uses new money, money not tied to anyone else before, and thus it's cheaper and involves less friction.
Instead, what happens is that someone must at all times be the owner of each unit of money. So when banks use their power of generating fractional-reserve funds, they are creating new money and they are its owners -- at this point, a theft occurs from the public at large to them -- and then they proceed to lend their own money. From this description it is clear that the fact that bank customers have previously deposited their own funds in the banks' vaults has no direct relation to the fact that banks created money afterwards; there's only a legal relation, and the fact that banks may need the cash deposited by their customers to redeem borrowers' claims -- but even that wouldn't be necessary if banks were allowed to print their own cash.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Using Spacechains and Fedimint to solve scaling
What if instead of trying to create complicated "layer 2" setups involving nouveau cryptographic techniques we just did the following:
- we take that Fedimint source code, remove the "mint" stuff, and just use their federation stuff to secure coins with multisig;
- then we make a spacechain;
- and we make the federations issue multisig-btc tokens on it;
- and then we put some uniswap-like thing in there to allow these tokens to be exchanged freely.
Why?
The recent spike in fees caused by Ordinals and BRC-20 shitcoinery has shown that Lightning isn't a silver bullet. Channels are too fragile; in a high-fee environment it costs a lot to open a channel, to run a routing node and so on.
People who want to keep using Lightning are instead flocking to the big Lightning custodial providers: WalletofSatoshi, ZEBEDEE, OpenNode and so on. We could leverage the trust people have in these companies (and in individuals operating shadow Lightning providers) and turn each of them into a btc-token issuer. Each issues their own token, transactions flow freely. Each person can hold only assets from the issuers they trust more.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Thomas Kuhn never even mentions the "scientific method"
What defines a science is the delimitation of a slice of reality based on a paradigm that everyone who enters the field of that science accepts as true. That's it.
The "scientific method" is neither necessary nor sufficient (even though it itself has to presuppose a series of axioms in order to work; otherwise one would end up having to test every syllable of one's experiments, and each of those tests would be impossible to perform, since they would necessarily require other prior knowledge, etc.).
That is why:
- theoretical physics can be a science even though it doesn't perform experiments;
- economics can be a science even though it doesn't perform experiments (fine, there are people who insist on performing "experiments" in economics, but those are really just an enormous waste of time based on statistics; that doesn't matter, even those things can have some value as long as they aren't understood as part of a scientific method);
- history can be a science, strange as that may seem, as long as the historian gathers facts from the past under the presupposition of some paradigm, for example, that there is a direction to history, or whatever (I don't think this science of history exists nowadays; it seems to me that every historian is doing something different, without many paradigms beyond common sense);
- biology can be a science even though it consists solely of a long effort of classification, and indeed it is one today, since it sees each species and its parts as the fruit of an evolutionary process whose foundations constitute a paradigm, and it denies other views, such as teleology.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Drivechain comparison with Ethereum
Ethereum and other "smart contract platforms" capable of running Turing-complete code, with their "developer-friendly" mindset and community, have been running for years, and they were able to produce a very low number of potentially useful "contracts".
What are these contracts, actually? (Considering Ethereum, but others are similar:) they are sidechains that run inside the Ethereum blockchain (and thus their verification and data storage are forced upon all Ethereum nodes). Users can peg-in to a contract by depositing money on it and peg-out by making a contract operation that sends money to a normal Ethereum address.
Now be generous and imagine these platforms are able to produce 3 really cool, useful ideas (out of many thousands of attempts): Bitcoin can copy these, turn them into 3 different sidechains, each running fixed, specific, optimized code. Bitcoin users can now opt to use these platforms by transferring coins to it – all that without damaging the nodes or the consensus protocol that has been running for years, and without forcing anyone to be aware of these chains.
The process of turning a useful idea into a sidechain doesn't come spontaneously, and can't be done by a single company (like often happens in Ethereum-land): it must be acknowledged by a rough consensus in the Bitcoin community that that specific sidechain with that specific design is a desirable thing, and ultimately approved by miners, as they're the ones that are going to be in charge of it.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Just malinvestment
Traditionally the Austrian Theory of Business Cycles has been explained and reworked in many ways, but the most widely accepted version (or the one closest to the views of Mises or Hayek) is that banks (or the central bank) cause the general interest rate to decline by creating new money, and that prompts entrepreneurs to invest in projects of longer duration. This can be confusing, because sometimes entrepreneurs embark on very short-term projects during one of these bubbles and still contribute to the overall cycle.
The solution to the "longer term" confusion is to think of the entire economy going long-term, not individual entrepreneurs. So if one entrepreneur makes an investment in a thing that looks simple, he may actually, knowingly or not, be inserting himself into a bigger machine that is involved in producing longer-term things. Incidentally, this thinking also answers the biggest criticism of the Austrian Business Cycle Theory, that of the rational-expectations people who say: "oh, but can't the entrepreneurs know that the interest rate is artificially low and decide not to make long-term investments?" ("and if they don't know, shouldn't they lose money and be replaced, as in a normal economy?"). Well, the answer is that they are not really relying on the interest rate, they are only looking for profit opportunities, and this is the key to another confusion that has always followed my thinking about this topic.
If a guy opens a bar in an area of a town where many new buildings are being built during a "housing bubble" he may not know, but he is inserting himself right into the eye of that business cycle. He expects all these building projects to continue, and all the people involved in that to be getting paid more and be able to spend more at his bar and so on. That is a bet that may or may not end up paying.
Now what does that bar investment have to do with the interest rate? Nothing. It is just a guy who saw a business opportunity in a place where hungry people with money had no bar to buy things in, so he opened a bar. Additionally, the guy has made some calculations about all the ending, ongoing and future building projects in the area, and about the people that would live or work in that area afterwards (after all, the buildings were being built with the expectation of being used) and so on; there are no interest rate calculations involved. And yet that may be a malinvestment, because some building projects will end up being canceled and the expected usage of the finished ones will turn out to be smaller than predicted.
This bubble may have been caused by a decline in interest rates that prompted some people to start buying houses they wouldn't have bought otherwise, but that is just a small detail. The bubble can only be kept going by a constant influx of new money into the economy, and the focus on the interest rate is wrong. If new money is printed and used by the government to buy ships, then there will be a boom and a bubble in the ship market, and that involves all parts of the production process of ships, and also the bars that will be opened near areas of the town where ships are built and where new people are being hired with higher salaries to do things that will eventually contribute to the production of ships that will then be sold to the government.
It's not interest rates or the length of the production process that matters, it's just printed money and malinvestment.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Lightning and its fake HTLCs
Lightning is terrible but can be very good with two tweaks.
How Lightning would work without HTLCs
In a world in which HTLCs didn't exist, Lightning channels would consist only of balances. Each commitment transaction would have two outputs: one for peer `A`, the other for peer `B`, according to the current state of the channel.

When a payment was being attempted to go through the channel, peers would just trust each other to update the state when necessary. For example:
- Channel `AB`'s balances are `A[10:10]B` (in sats);
- `A` sends a 3sat payment through `B` to `C`;
- `A` asks `B` to route the payment. Channel `AB` doesn't change at all;
- `B` sends the payment to `C`, `C` accepts it;
- Channel `BC` changes from `B[20:5]C` to `B[17:8]C`;
- `B` notifies `A` the payment was successful, `A` acknowledges that;
- Channel `AB` changes from `A[10:10]B` to `A[7:13]B`.
This is the case of success: everything is fine, no glitches, no dishonesty.
But notice that `A` could have refused to acknowledge that the payment went through, either because of a bug, or because it went offline forever, or because it is malicious. Then the channel `AB` would stay as `A[10:10]B` and `B` would have lost 3 satoshis.

How Lightning would work with HTLCs
HTLCs are introduced to remedy that situation. Now, instead of commitment transactions always having only two outputs, one for each peer, they can have HTLC outputs too. These HTLC outputs can go to either side depending on the circumstance.
Specifically, the peer that is sending the payment can redeem the HTLC after a number of blocks have passed. The peer that is receiving the payment can redeem the HTLC if they are able to provide the preimage to the hash specified in the HTLC.
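As a toy illustration of these two conditions (a Python stand-in, not actual Bitcoin script; all names here are made up):

```python
# Toy illustration of the two spending conditions of an HTLC output,
# as described above: receiver spends with the preimage, sender spends
# back after the timeout.
import hashlib

def can_spend_htlc(spender, payment_hash, preimage=None,
                   current_block=0, timeout_block=0,
                   sender="A", receiver="B"):
    # condition 1: the receiver spends by revealing the preimage
    if spender == receiver and preimage is not None:
        return hashlib.sha256(preimage).digest() == payment_hash
    # condition 2: the sender takes the money back after the timeout
    if spender == sender and current_block >= timeout_block:
        return True
    return False

h = hashlib.sha256(b"secret").digest()
print(can_spend_htlc("B", h, preimage=b"secret"))                    # True
print(can_spend_htlc("A", h, current_block=110, timeout_block=100))  # True
print(can_spend_htlc("A", h, current_block=90, timeout_block=100))   # False
```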
Now the flow is something like this:
- Channel `AB`'s balances are `A[10:10]B`;
- `A` sends a 3sat payment through `B` to `C`:
- `A` asks `B` to route the payment. Their channel changes to `A[7:3:10]B` (the middle number is the HTLC).
- `B` offers a payment to `C`. Their channel changes from `B[20:5]C` to `B[17:3:5]C`.
- `C` tells `B` the preimage for that HTLC. Their channel changes from `B[17:3:5]C` to `B[17:8]C`.
- `B` tells `A` the preimage for that HTLC. Their channel changes from `A[7:3:10]B` to `A[7:13]B`.
Now if `A` wants to trick `B` and stops responding, `B` doesn't lose money, because `B` knows the preimage: `B` just needs to publish the commitment transaction `A[7:3:10]B`, which gives him 10sat, and then redeem the HTLC using the preimage he got from `C`, which gives him 3 sats more. `B` is fine now.

In the same way, if `B` stops responding for any reason, `A` won't lose the money it put in that HTLC: it can publish the commitment transaction, get 7 back, then redeem the HTLC after the certain number of blocks has passed and get the other 3 sats back.

How Lightning doesn't really work
The example above of how HTLCs work is very elegant, but it has a fatal flaw: transaction fees. Each new HTLC added increases the size of the commitment transaction, and it requires yet another transaction to be redeemed. If we consider fees of 10000 satoshis, that means any HTLC below that amount is as if it didn't exist, because we can't ever redeem it anyway. In fact the Lightning protocol explicitly dictates that if HTLC output amounts are below the fee necessary to redeem them they shouldn't be created at all.
What happens in these cases then? Nothing, the amounts that should be in HTLCs are moved to the commitment transaction miner fee instead.
So, considering a transaction fee of 10000sat for these HTLCs, anyone sending Lightning payments below 10000sat is operating according to the unsafe protocol described in the first section above.
It is actually worse, because consider what happens when a channel in the middle of a route has a glitch or one of the peers is unresponsive. The other node, thinking they are operating under the trustless protocol, will proceed to publish the commitment transaction, i.e. close the channel, so they can redeem the HTLC -- only then do they find out they are actually in the unsafe protocol realm, that there is no HTLC to be redeemed at all, and that they have lost not only the money, but also the channel (which cost a lot of money to open and to close, in overall transaction fees).
One of the biggest features of the trustless protocol is the payment proofs. Every payment is identified by a hash, and whenever the payee releases the preimage relative to that hash that means the payment was completed. The incentives are in place so all nodes in the path pass the preimage back until it reaches the payer, who can then use it as proof that he has sent the payment and the payee has received it. This feature is also lost in the unsafe protocol: if a glitch happens or someone goes offline on the preimage's way back, then there is no way the preimage will reach the payer, because no HTLCs are published and redeemed on the chain. The payee may have received the money, but the payer will not know -- and the payer will have lost the money sent anyway.
The end of HTLCs
So considering the points above you may be sad because in some cases Lightning doesn't use these magic HTLCs that give meaning to it all. But the fact is that no matter what anyone thinks, HTLCs are destined to be used less and less as time passes.
Given that over time Bitcoin transaction fees tend to rise, and that multipart payments (MPP) are increasingly being used on Lightning for good, we can expect that soon no HTLC will ever be big enough to be actually worth redeeming, and we will be at a point in which not a single HTLC is real and they're all fake.
Another thing to note is that the current unsafe protocol kicks in whenever the HTLC amount is below what the Bitcoin transaction fee to redeem it would be, but this is not a reasonable threshold. It is not reasonable to lose a channel and then pay 10000sat in fees to redeem a 10001sat HTLC. At which point does it become reasonable to do it? Probably at an amount many times above that, so it would be reasonable to raise even further the threshold above which real HTLCs are made -- thus making their existence rarer and rarer.
These are good things, because we don't actually need HTLCs to make a functional Lightning Network.
We must embrace the unsafe protocol and make it better
So the unsafe protocol is not necessarily very bad, but the way it is being done now is, because it suffers from two big problems:
- Channels are lost all the time for no reason;
- No guarantees of the proof-of-payment ever reaching the payer exist.
The first problem we fix by just stopping the current practice of closing channels when there are no real HTLCs in them.
That, however, creates a new problem -- or actually it exacerbates the second one: now that we're not closing channels, what do we do with the expired payments in them? These payments should have either been canceled or fulfilled before some block x; now we're at block x+1, our peer has returned from its offline period, and one of us will have to lose the money from that payment.
That's fine because it's only 3sat and it's better to just lose 3sat than to lose both the 3sat and the channel anyway, so either one would be happy to eat the loss. Maybe we'll even split it 50/50! No, that doesn't work, because it creates an attack vector with peers becoming unresponsive on purpose on one side of the route and actually failing/fulfilling the payment on the other side and making a profit with that.
So we actually need to know who is to blame for these payments, even if we are not going to act on that immediately: we need some kind of arbiter that both peers can trust, such that if one peer is trying to send the preimage or the cancellation to the other and the other is unresponsive, when the unresponsive peer comes back the arbiter can tell them they are to blame, so they can willfully eat the loss and the channel can continue. Both peers are happy this way.
If the unresponsive peer doesn't accept what the arbiter says then the peer that was operating correctly can assume the unresponsive peer is malicious and close the channel, and then blacklist it and never again open a channel with a peer they know is malicious.
Again, the differences between this scheme and the current Lightning Network are that:
a. In the current Lightning we always close channels; in this scheme we only close channels in case someone is malicious or in other worst-case scenarios (the arbiter is unresponsive, for example).
b. In the current Lightning we close the channels without having any clue about who is to blame for that, and then we just proceed to reopen a channel with that same peer, even when they were actively trying to harm us before.
What is missing? An arbiter.
The Bitcoin blockchain is the ideal arbiter, it works in the best possible way if we follow the trustless protocol, but as we've seen we can't use the Bitcoin blockchain because it is expensive.
Therefore we need a new arbiter. That is the hard part, but not unsolvable. Notice that we don't need an absolutely perfect arbiter, anything is better than nothing, really, even an unreliable arbiter that is offline half of the day is better than what we have today, or an arbiter that lies, an arbiter that charges some satoshis for each resolution, anything.
Here are some suggestions:
- random nodes from the network, selected by an algorithm that both peers agree to, so they can't cheat by selecting themselves (a sketch of this follows after this list). The only thing these nodes have to do is store data from one peer, try to retransmit it to the other peer and record the results for some time.
- a set of nodes preselected by the two peers when the channel is being opened -- same as above, but with more handpicked-trust involved.
- some third-party cloud storage or notification provider with guarantees of having open data in it and some public log-keeping, like Twitter, GitHub or a Nostr relay;
- peers that get paid to do the job, selected by the fact that they own some token (I know this is stepping too close to the shitcoin territory, but could be an idea) issued in a Spacechain;
- a Spacechain itself, serving only as the storage for a bunch of `OP_RETURN`s that are published and tracked by these Lightning peers whenever there is an issue (this looks wrong, but could work).
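Here is the sketch promised in the first suggestion above: both peers deterministically derive the same arbiters from data they already share, so neither can bias the selection (all details hypothetical):

```python
# Both peers hash shared channel data together with each candidate's id;
# since they compute the same ranking, they agree on the same arbiters
# and neither side can steer the selection toward itself.
import hashlib

def select_arbiters(node_id_a, node_id_b, funding_outpoint,
                    known_nodes, n=3):
    # the two peers themselves are never eligible
    candidates = sorted(set(known_nodes) - {node_id_a, node_id_b})
    seed = ":".join(sorted([node_id_a, node_id_b]) + [funding_outpoint])
    ranked = sorted(candidates, key=lambda c: hashlib.sha256(
        (seed + ":" + c).encode()).hexdigest())
    return ranked[:n]
```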
Key points
- Lightning with HTLC-based routing was a cool idea, but it wasn't ever really feasible.
- HTLCs are going to be abandoned and that's the natural course of things.
- It is actually good that HTLCs are being abandoned, but
- We must change the protocol to account for the existence of fake HTLCs and thus make the bulk of the Lightning Network usage viable again.
See also
- Channel
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28hledger-web
A Haskell app that uses Miso and hledger's Haskell libraries, compiled with ghcjs to a web page, with optional remoteStorage added so you can store your ledger data somewhere else.
This was my introduction to Haskell and also built at a time I thought remoteStorage was a good idea that solved many problems, and that it could use some help in the form of just yet another somewhat-useless-but-cool project using it that could be added to their wiki.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Reasons why Lightning is not that great
Some Bitcoiners, me included, were fooled by hyperbolic discourse that presented Lightning as some magical scaling solution with no flaws. This is an attempt to list some of the actual flaws uncovered after 5 years of experience. The point of this article is not to say Lightning is a completely worthless piece of crap, but only to highlight the fact that Bitcoin needs to put more focus on developing and thinking about other scaling solutions (such as Drivechain, less crappy and more decentralized trusted-channel networks, and statechains).
Unbearable experience
Maintaining a node is cumbersome: you have to deal with closed channels, allocate funds, pay fees unpredictably, choose new channels to open, store channel state backups -- or you'll have to delegate all these decisions to some weird AI or to third-party services; it's not feasible for normal people.
Channels fail for no good reason all the time
Every time nodes disagree on anything they close channels. There have been dozens, maybe hundreds, of bugs that led to channels being closed in the past, and implementors have been fixing these bugs, but since these node implementations continue to be worked on and new features continue to be added, we can be quite sure that new bugs continue to be introduced.
Trimmed (fake) HTLCs are not sound protocol design
What would you tell me if I presented a protocol that allowed for transfers of users' funds across a network of channels, in which the channels would pledge to send the money to miners while the payment was in flight, and these payments could never be recovered if a node in the middle of the route had a bug or decided to stop responding? Or in which the receiver could receive your payment but still claim he didn't, and you couldn't prove that at all?
These are the properties of "trimmed HTLCs", HTLCs that are uneconomical to have their own UTXO in the channel presigned transaction bundles, therefore are just assumed to be there while they are not (and their amounts are instead added to the fees of the presigned transaction).
Trimmed HTLCs, like any other HTLC, have timelocks, preimages and hashes associated with them -- which are properties relevant to the redemption of actual HTLCs onchain --, but unlike actual HTLCs these things have no actual onchain meaning since there is no onchain UTXO associated with them. This is a game of make-believe that only "works" because (1) payment proofs aren't worth anything anyway, so it makes no sense to steal these; (2) channels are too expensive to setup; (3) all Lightning Network users are honest; (4) there are so many bugs and confusion in a Lightning Network node's life that events related to trimmed HTLCs do not get noticed by users.
Also, so far these trimmed HTLCs have only been used for very small payments (although very small payments probably account for 99% of the total payments), so it is supposedly "fine" to have them. But, as fees rise, more and more HTLCs tend to become fake, which may make people question the sanity of the design.
Tadge Dryja, one of the creators of the Lightning Network proposal, has been critical of the fact that these things were allowed to creep into the BOLT protocol.
Routing
Routing is already very bad today, even though most nodes have a basically 100% view of the public network. The reasons: some nodes are offline, others are on Tor and unreachable or too slow, channels have the balance shifted in the wrong direction, so payments fail a lot -- which leads to the (bad) solution invented by professional node runners and large businesses of probing the network constantly in order to discard bad paths, which creates unnecessary load and increases the risk of channels being dropped for no good reason.
As the network grows -- if it indeed grows and doesn't centralize into a few hubs -- routing tends to become harder and harder.
While each implementation team makes its own decisions with regard to the best way to route payments, and these decisions may change at any time, it's worth noting, for example, that CLN will use MPP to split up any payment into any number of chunks of 10k satoshis, supposedly to improve routing success rates. While this often backfires and causes payments to fail when they should have succeeded, it also contributes to making it so there are proportionally more fake HTLCs than there should be, as long as the threshold for fake HTLCs is above 10k.
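To see how the splitting interacts with trimming, here is a small sketch with assumed numbers (the 10k chunk size is the CLN behavior mentioned above; the trim threshold is the high-fee figure cited in another post here and varies with fees):

```python
# With 10k-sat MPP chunks and a trim threshold above 10k, every part of
# every split payment becomes a fake (trimmed) HTLC.
def split_mpp(amount_sat, chunk_sat=10_000):
    parts = [chunk_sat] * (amount_sat // chunk_sat)
    if amount_sat % chunk_sat:
        parts.append(amount_sat % chunk_sat)
    return parts

parts = split_mpp(35_000)                    # [10000, 10000, 10000, 5000]
trim_threshold_sat = 18_000                  # assumed high-fee threshold
fake = [p for p in parts if p < trim_threshold_sat]
print(f"{len(fake)} of {len(parts)} parts are fake HTLCs")  # 4 of 4
```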
Payment proofs are somewhat useless
Even though payment proofs were seen by many (including me) as one of the great things about Lightning, the sad fact is that they do not work as proofs if people are not aware of the fact that they are proofs. Wallets do all they can to hide these details from users, because it is considered "bad UX", and low-level implementors do not care very much to talk about them at all. There have been attempts from Lightning Labs to get rid of the payment proofs entirely (which at the time sounded to me like a terrible idea, but now I realize they were not wrong).
Here's a piece of anecdotal evidence: I've personally witnessed multiple episodes in which Phoenix wallet released the preimage without having actually received the payment (it did receive a minor part of the payment, but the payment was split into many parts). That caused my service, @lntxbot, to mark the outgoing payment as complete, only to have to endure complaints from the users, because the receiving side, Phoenix, had not received the full amount. In these cases, if the protocol and the idea of preimages as payment proofs were respected, should I have been the one in charge of manually fixing user balances?
Another important detail: when an HTLC is sent and then something goes wrong with the payment, the channel has to be closed in order to redeem that payment. When the redeemer is on the receiver side, the very act of redeeming should cause the preimage to be revealed and a proof of payment to be made available to the sender, who can then send it back to the previous hop, and the payment is proven without any doubt. But when this happens with fake HTLCs (which is the vast majority of payments, as noted above) there is no place in the world for a preimage and therefore there are no proofs available. A channel is just closed, the payer loses money but can't prove a payment. It also can't send that proof back to the previous hop, so it is forced to say the payment failed -- even if it wasn't the one who declared that hop a failure and closed the channel, which should be a prerequisite. I wonder if this isn't the source of multiple bugs in implementations that cause channels to be closed unnecessarily. The point is: preimages and payment proofs are mostly a fiction.
Another important fact is that the proofs do not really prove anything if the keypair that signs the invoice can't be provably attached to a real world entity.
LSP-centric design
The first Lightning wallets to show up in the market, LND as a desktop daemon (later with some GUIs on top of it like Zap and Joule) and Anton's BLW and Eclair wallets for mobile devices, then later LND-based mobile wallets like Blixt and RawTX, were all standalone wallets that were self-sufficient and meant to be run directly by consumers. Eventually, though, came Breez and Phoenix, which introduced the "LSP" model, in which a server would be trusted in various forms -- not directly with users' funds, but with their privacy, fees and other details -- but most importantly that LSP would be the primary source of channels for all users of that given wallet software. This was all fine, but as time passed new features were designed and implemented that assumed users would be running software connected to LSPs. The very idea of a user having a standalone mobile wallet was put out of the question. The entire argument for the implementation of the bolt12 standard, for example, hinged on the assumption that mobile wallets would have LSPs capable of connecting to Google messaging services and of "waking up" mobile wallets in order for them to receive payments. Other ideas, like a complicated standard for allowing mobile wallets to receive payments without having to be online all the time, just assume LSPs always exist; likewise the changes to the expected BOLT spec behavior with regard to, for example, the probing of mobile wallets.
Ark is another example of a kind of LSP that got so enshrined that it became a new protocol that depends on it entirely.
Protocol complexity
Even though the general idea of how Lightning is supposed to work can be understood by many people (as long as these people know how Bitcoin works), the Lightning protocol is not really easy: it will take a long time and a lot of dedication for anyone to understand the details of the BOLTs -- and this is a bad thing if we want a world of users who have at least an idea of what they are doing. Moreover, with each new cool idea someone has that gets adopted by the protocol leaders, the protocol increases in complexity and some of the implementors are kicked out of the circle, making it easier for the remaining ones to proceed with more and more complexity. It's the same process by which Chrome won the browser wars, kicked out all competitors and proceeded to make a supposedly open protocol, but one that no one can implement, as it gets new and more complex features every day, all envisioned by the Chrome team.
Liquidity issues?
I don't believe these are a real problem if all the other things worked, but still, the old criticism that Lightning requires parking liquidity, and that this has a cost, is not a complete non-issue, especially given the LSP-centric model.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Carl R. Rogers on science
I believe the primary aim of science is to furnish a more secure hypothesis, conviction and faith, one that better satisfies the investigator himself. To the extent that the scientist seeks to prove something to someone else -- a mistake I have made more than once -- I believe he is using science to remedy a personal insecurity, diverting it from its true creative role in the service of the individual.
Tornar-se Pessoa (On Becoming a Person), random page
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28contratos.alhur.es
A website that allowed people to fill in a form and get a standard Contrato de Locação (rental agreement).
Better than all the other "templates" that float around the internet, which are badly formatted `.doc` files.

It was fully programmable, so other templates could be added later, but I never did. This website made maybe one dollar in Google Ads (and Google has probably stolen these, like so many other dollars they stole with their bizarre requirements).
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28a pertinent comment by Olavo de Carvalho on the undue attribution of events to "spontaneous order"
Here is one example among a thousand others, taken from my class handouts, of how one analyzes the relations between deliberate and accidental factors in historical action. Mr. Beltrão is INFINITELY BENEATH the possibility of discussing these things, and for that very reason he attributes to me a simple-mindedness that is his own, not mine:
I have quoted this paragraph from Georg Jellinek a thousand times and I will quote it again: "The phenomena of social life are divided into two classes: those that are essentially determined by a directing will and those that exist or can exist without an organization owed to acts of will. The former are necessarily subject to a plan, to an order emanating from a conscious will, in opposition to the latter, whose ordering rests on very different forces."
This distinction is crucial for historians and strategic analysts, not because it is clear in every case, but precisely because it is not. The most common error in this order of studies lies in attributing to a conscious intention that which results from an uncontrolled and sometimes uncontrollable combination of forces, or, inversely, in failing to see, behind an apparently fortuitous constellation of circumstances, the intelligence that planned and subtly directed the course of events.
An example of the first error is The Protocols of the Elders of Zion, which sees behind practically everything bad that happens in the world the malign premeditation of a small number of people, a Jewish elite secretly gathered in some uncertain and unknown place.
What makes this fantasy especially convincing, some time after its publication, is that some of the events predicted there take place right before our eyes. The hasty reader sees in this a confirmation, jumping imprudently from the observation of the fact to the imputation of authorship. Yes, some of the ideas announced in the Protocols were carried out, but not by a distinctly Jewish elite, much less for the benefit of the Jews, whose role in most cases consisted mainly in taking the blame. Many rich and powerful groups have ambitions of global domination, and once the book was published -- a book that in certain passages shows strokes of genuine strategic genius of the Machiavellian kind -- it was practically impossible that they would learn nothing from it and not try to put some of its schemes into practice, with the additional advantage that these came with a prefabricated scapegoat. It is also impossible that there would be no one of Jewish origin in the middle or at the top of these groups. A little bit of deforming selectivity is therefore enough to swap the cause for the effect and the innocent for the guilty.
But the most common error nowadays is not that one. It is the opposite: the obstinate refusal to see any premeditation, any authorship, even behind remarkably convergent events which, without it, would have to be explained by the magical force of coincidences, by the action of angels and demons, by the "invisible hand" of market forces, or by hypothetical "laws of History" or "sociological constants" never proven, which in the observer's imagination direct everything anonymously and without human intervention.
The causes that generate this error are, roughly:
First: reducing human actions to the effects of impersonal and anonymous forces requires the use of abstract generic concepts that automatically give this kind of approach the appearance of something very scientific. Much more scientific, to the lay observer, than the patient and meticulous historical reconstitution of the chains of facts which, under a veil of confusion, sometimes trace back to a discreet and almost imperceptible initial authorship. Since the study of historico-political phenomena is increasingly an academic occupation whose success depends on funding, sponsorships, backing in the popular media and good relations with the establishment, it is almost inevitable that, faced with a question of this order, few will resist the temptation to kill the problem right away with two or three elegant generalizations and shine as sages of the occasion, instead of taking on the work of historical tracing that can demand decades of research.
Second: any group or entity that ventures into long-term historico-political action must possess not only the means of undertaking it, but also, necessarily, the means of controlling its public repercussion, accentuating what suits it and covering up whatever could abort the intended results. This implies vast, deep and lasting interventions in the mental environment. [Etc. etc. etc.]
(on Facebook, July 17, 2013)
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Gold is not useless
If there's one thing all common people believe about gold, it is that it is useless[^1]. Austrian economists and libertarians in general who argue against central banks or defend a primitive gold standard are often charged with that accusation: that gold is useless, that it has no use in industry, that it serves no purpose besides ornament; so it is a silly commodity, a luxurious one, and it would be almost immoral to have such a thing in so central a position in an economy as the position of money.
I've seen libertarians in general argue things such as "it is used in some dental operations", which means people make dental prostheses out of gold -- something that fits in the same category as jewelry, I would say.
There's also the argument about electronic connectors. That appears to be true, but it wouldn't suffice to counter the anti-gold arguments. The fact remains that, besides its use as money -- because gold is still considered a form of money even now that it doesn't hold that position formally in any country (otherwise it wouldn't be considered an "investment" or a "store of value" everywhere) -- gold is used mainly for ornamental purposes[^2].
All that is a hassle for libertarians in general. Even the Mises Regression Theorem wouldn't solve the problem of people skeptical of gold due to its supposedly immoral nature. That problem is solved once you read what is written in chapter 17 of Richard Cantillon's Essay on Economic Theory^3 (page 103):
Gold and silver are capable of serving not only the same purpose as tin and copper, but also most of the purposes of lead and iron. They have this further advantage over other metals in that they are not consumed by fire and are so durable that they may be considered permanent. It is not surprising, therefore, that the men who found the other metals useful, valued gold and silver even before they were used in exchange.
So gold is indeed useful. Everybody should already know that. You can even make forks and spoons out of gold. You can make furniture out of gold, and many other useful things. As soon as you grasp this, gold is useful again. It is a useful commodity.
Answering the next question becomes easy: why isn't anyone making gold forks anywhere? The questioner already knows the answer: because it is too expensive for that.
And now the Regression Theorem comes in with its full force: why is gold so expensive? Because it has gained a lot of value in the process of becoming money. The value of gold as money is much greater than its value as a metal used to produce forks.
[^1]: see http://www.salon.com/2014/02/02/ignore_sean_hannity_gold_is_useless_partner/ or all the answers on https://www.quora.com/Why-is-gold-considered-so-precious-and-why-does-it-have-such-high-prices.
[^2]: this https://en.wikipedia.org/wiki/Gold#Modern_applications section of the Wikipedia page for gold is revealing.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28idea: Custom multi-use database app
Since 2015 I have had this idea of making one app that could be repurposed into a full-fledged app for all kinds of uses, like powering small business accounting and so on. Hackable and open like an Excel file, but more efficient, without the hassle of making tables, and also using ids and indexes under the hood so different kinds of things can be related together in various ways.
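A toy sketch of what the underlying data model could look like, in Python; every name here is invented for illustration, it is just one possible shape for the "ids and indexes under the hood" part:

```python
from dataclasses import dataclass
from itertools import count

# Sketch only: free-form records like spreadsheet rows, but each record
# gets a stable id, and a reverse index lets heterogeneous things relate.

@dataclass
class Link:
    target: int  # the id of another record

_ids = count(1)
records: dict[int, dict] = {}
backrefs: dict[int, list[int]] = {}  # id -> ids of records linking to it

def put(**fields) -> int:
    rid = next(_ids)
    records[rid] = fields
    for value in fields.values():
        if isinstance(value, Link):
            backrefs.setdefault(value.target, []).append(rid)
    return rid

customer = put(name="Acme Corp", kind="customer")
invoice = put(kind="invoice", total=150.0, customer=Link(customer))
print(backrefs[customer])  # [2] -- the invoice relates back to the customer
```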
It is not a concrete thing, just a generic idea that has taken multiple forms over the years and may take others in the future. I've made quite a few attempts at implementing it, but never finished any.
I used to refer to it as a "multidimensional spreadsheet".
Can also be related to DabbleDB.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Multi-service Graph Reputation protocol
The problem
- Users inside centralized services need to know reputations of other users they're interacting with;
- Building reputation with ratings imposes a big burden on the user and still accomplishes nothing: ratings can be faked, no one cares about them, and so on.
The ideal solution
Subjective reputation: reputation based on how you rated that person previously, how other people you trust rated that person, how people trusted by people you trust rated that person, and so on -- a web of trust that can actually give you some insight into the trustworthiness of someone you have never met or interacted with.
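A toy illustration of the idea (not the actual protocol): ratings propagate through people you trust, attenuated at each hop, so you get a score for a stranger without ever having rated them yourself. All names and weights below are made up.

```python
# rater -> {rated: score in [-1, 1]}
ratings = {
    "me":    {"alice": 1.0, "bob": 0.5},
    "alice": {"carol": 0.8},
    "bob":   {"carol": -0.5, "dave": 1.0},
}

def subjective_reputation(source: str, target: str, depth: int = 3) -> float:
    """Score of `target` as seen from `source`, weighted by trust per hop."""
    if depth == 0:
        return 0.0
    direct = ratings.get(source, {}).get(target)
    if direct is not None:
        return direct
    scores = [
        weight * subjective_reputation(mid, target, depth - 1)
        for mid, weight in ratings.get(source, {}).items()
        if weight > 0  # only propagate through people you trust
    ]
    return sum(scores) / len(scores) if scores else 0.0

# "me" never rated "carol", but alice (fully trusted) likes her and bob
# (half trusted) dislikes her, so the result lands in between: 0.275.
print(subjective_reputation("me", "carol"))
```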
The problem with the ideal solution
- Most of the time the service that wants to implement this is not as big as Facebook, so it won't have enough people in it for such reputation graphs to be constructed.
- It is not trivial to build.
My proposed solution
I've drafted a protocol for an open system based on services publishing their internal reputation records and indexers using these to build graphs, then serving the graphs back to the services so they can show them to users when needed (as HTTP APIs that can be called directly from the user's client app or browser).
Crucially, these indexers will gather data from multiple services and cross-link users from these services so the graph is better.
https://github.com/fiatjaf/multi-service-reputation-rfc
The first and only actionable piece of useful feedback I got, from @bootstrapbandit, was that services shouldn't share email addresses in plain text (email addresses and other external relationships users of a service may have are necessary to establish links between users across services), but I think it is ok if services publish hashes of these email addresses instead. At some point I will update the spec draft, and that may have happened before the time you're reading this.
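Something like this, presumably: both services hash the same normalized address, and the indexer matches on the digest without ever seeing the address itself.

```python
import hashlib

def external_id_hash(email: str) -> str:
    # Normalize first, so the same address always hashes the same way.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Two services publishing records about the same person produce the same
# value, which is all the indexer needs to draw the cross-service edge.
assert external_id_hash("Alice@Example.com ") == external_id_hash("alice@example.com")
```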
Another issue is that services may lie about their reputation records, and that will hurt other services and the users of those other services who rely on that data. Maybe indexers will have to do some investigative work here to assert service honesty. Or maybe this entire protocol is just a failure and we will actually need a system in which users themselves publish their own records.
See also
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Filemap
filemap solves the problem of sending and receiving files to and from non-tech people when you don't have a text communication channel with them.
Imagine you want to send files to your grandfather, or you don't use Facebook and your younger cousin who only uses Facebook and doesn't know what email is wants to send you some pictures: it's pretty hard to get a file-sharing channel between people if they're not on the same network. Even if the people have a way to upload the files to some hosting service and then share the link, everything would still work, but you're not going to write
somehostingservice.com/wHr4y7vFGh0
to your grandfather -- or expect your cousin to do that for you and send you an SMS with dozens of those links.
Solution:
- Upload your files to https://filemap.xyz/ (you can either upload directly or share links to things already uploaded -- or even links to pages) and pin them to your grandfather's house address; then tell your grandfather to open https://filemap.xyz/ and look for his address. Done.
- Tell your younger cousin to visit filemap.xyz and upload all the files to his address; later you open the site and look for his address. There are your files.
Initially this used ipfs-dropzone, but IPFS is broken, so I migrated to WebTorrent, but that required the file sender to be online hosting their own file, and the entire idea of this service was to make something easy, so I migrated to Firebase Hosting, which is also terrible and has a broken API, but at least is capable of hosting files. I should have used something like S3.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28The illusion of checks and balances
The website history.com has a list of some of the most important "checks and balances" put in place by the United States Constitution. Here are some of them, and how they are not real checks: they're flawed and easily bypassed by malicious peers that manage to enter the network.
The president (head of the executive branch) serves as commander in chief of the military forces, but Congress (legislative branch) appropriates funds for the military and votes to declare war.
As has happened multiple times, the United States has engaged in many undeclared wars -- and many other military engagements that don't get enough media coverage and weren't even formally acknowledged by Congress.
Congress has the power of the purse, as it controls the money used to fund any executive actions.
There's a separate power called the Federal Reserve, which is more or less under the influence of the executive branch, is controlled by a single man, and has the power to create unlimited money. It was softly abused by the executive branch since its creation, but since 2008 its scope has been increasingly expanded from merely influencing the banking sector to directly using its money to buy all sorts of things and influence all sorts of markets and other actors.
Veto power. Once Congress has passed a bill, the president has the power to veto that bill. In turn, Congress can override a regular presidential veto by a two-thirds vote of both houses.
If you imagine that both the executive and the legislative are 100% dedicated to going against each other, the president could veto all bills, but then the legislature could enact them all anyway. Congress has the absolute power here (which can be justified by the fact that Congress itself is split into multiple voters, but still, this "veto" rule seems more like a gimmick to obscure the process than an actual check).
The Supreme Court and other federal courts (judicial branch) can declare laws or presidential actions unconstitutional, in a process known as judicial review.
This rule gives the Supreme Court absolute power over any matter. It can use its own personal judgment to veto any bill, cancel any action by the executive, or reinterpret any existing law in any manner. There's no check against bad interpretations or judgments, so any absurd thing must be accepted. This should be obvious, and yet the entire system that most people believe to be "checked" actually depends on the goodwill and sanity of the judicial branch.
In turn, the president checks the judiciary through the power of appointment, which can be used to change the direction of the federal courts
If the president and Congress are being attacked by the judicial power, this isn't of much help, as its effects are very long-term. On the other hand, a president can single-handedly and arbitrarily use this rule to slowly poison the judicial system so that it turns malicious toward the rest of the system after some time.
By passing amendments to the Constitution, Congress can effectively check the decisions of the Supreme Court.
What is written in the Constitution can be easily ignored or misread by the members of the Supreme Court without any way for these interpretations to be checked or reverted. Basically the Supreme Court has absolute power over all things if we consider this.
Congress (considered the branch of government closest to the people) can impeach both members of the executive and judicial branches.
Again (as with the presidential veto rule), this gives Congress unlimited power. There are no checks here -- except, of course, the fact that Congress is composed of multiple voting heads, a majority of which must agree for Congress to do anything, which is the only thing preventing abuse of this rule.
As shown above, most of the rules that compose the "checks and balances" system can be abused, and given enough time they will be. They aren't real checks.
Ultimately, the stability and decency of a democracy rely on majority rule (so congressional votes are never concentrated into dictatorial measures) and on the common sense of the powerful people (president and judges).
There probably hasn't been a single year in any democracy in which one of these powers didn't abuse or violate one of the rules, but still, in most cases, the overall system stays in place because of the general culture, split views about most issues, overall common sense, and fear of public shame.
The checks and balances system itself is an illusion. The whole complex "democracy" construct depends on the goodwill of all participants, and it has only worked so far (when it did) by miracle and by the power of human cooperation and love.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Splitpages
The simplest possible service: it split PDF pages in half.
Created especially to solve the problem of those scanned books that come with two pages side by side as if they were a single page, which makes them much harder to read on a Kindle.
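The core of it could be sketched today in a few lines of pypdf (the original service shelled out to a mupdf binary instead); the file names here are hypothetical:

```python
from copy import deepcopy
from pypdf import PdfReader, PdfWriter

# Each wide scanned page becomes two pages: left half, then right half.
reader = PdfReader("scanned-book.pdf")
writer = PdfWriter()

for page in reader.pages:
    left = deepcopy(page)
    right = deepcopy(page)
    mid = (float(page.mediabox.left) + float(page.mediabox.right)) / 2
    left.mediabox.right = mid   # keep only the left half
    right.mediabox.left = mid   # keep only the right half
    writer.add_page(left)
    writer.add_page(right)

with open("split.pdf", "wb") as f:
    writer.write(f)
```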
It required me to learn about Heroku Buildpacks, though, and to fork or contribute to a Heroku Buildpack that embedded a mupdf binary.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28idea for a centralized legal system, but with a bit of logic
A lawsuit is, essentially -- I imagine, in my lay naivety -- an appeal made to a judge for him to recognize certain facts as proof of a certain phenomenon typified by a certain law.
So I imagine the following:
A petition is no longer an enormous document written in disgusting language, with references to laws and factual evidence scattered according to the lawyer's (in)capacity for essay-writing, but just a logical schema -- maybe even a drawn diagram (or, who knows, a series of instructions understandable by a computer?) -- showing the connection between the law, the facts and the requests. For example:
- such-and-such law says nobody can sell cigarettes
- John Doe sold cigarettes
- the proof that John Doe sold cigarettes is the photo taken at street X on day Y showing John Doe selling cigarettes
- the same law demands that John Doe pay a fine
This example is still too wordy, but it's just a simple example. More complicated things would need other forms of expression if we want to avoid the long legal dissertations currently in vogue.
The idea is that the schema above stands on its own. A proto-judge can rule it valid or invalid by its internal logic alone.
The other part of the judgment would be the connection between this schema and external reality: the evidence would come attached to the petition. Here, a photo of John Doe would be attached to point 3. The text of the law referred to also needs to be attached to point 1, but that can be done automatically from the law's number.
Once we have a valid logical schema, another proto-judge, or several others, can judge each piece of evidence individually: check whether the text of the law matches the interpretation made in point 1, and whether the photo attached to point 3 really shows the defendant selling cigarettes and not a bear eating oranges.
Each of these judgments can be made without the proto-judge knowing anything else about the case: the first proto-judge doesn't need to see the photo or the law, the second doesn't need to see the logical schema or the photo, the third doesn't need to see the law or the logical schema -- and still, at the end, we would have a ruling on the merits of the petition, the most impersonal and probably the most just one possible.
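To make the shape of it concrete, here is a sketch of the cigarette petition above as data, where the logical skeleton and each attached piece of evidence can be routed to different proto-judges (all names invented):

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    evidence: str | None = None  # attachment judged independently of the rest

petition = [
    Claim("such-and-such law says nobody can sell cigarettes",
          evidence="the text of the law, fetched by its number"),
    Claim("John Doe sold cigarettes",
          evidence="photo taken at street X on day Y"),
    Claim("the same law demands that John Doe pay a fine"),
]

# Proto-judge 1 sees only the logical skeleton, with no attachments:
print([c.text for c in petition])

# Proto-judges 2..n each receive one claim plus its evidence, nothing
# else, and rule only on that link to reality (is the photo really of
# John Doe selling cigarettes, and not of a bear eating oranges?):
for claim in petition:
    if claim.evidence:
        print(f"rule on: {claim.text!r} vs {claim.evidence!r}")
```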
The defense would consist in pointing out errors in the logical schema or flaws in the nexus between reality and the schema. For example:
- a photo like that is not proof that John Doe sold anything; he could have been just passing by
- he was in fact just passing by, as proven by this document showing his attendance at a law class at UFMG at that same time
Forgive me if I'm talking nonsense, but it's 5am and I'm still half asleep. Obviously there are several problematic points here, and I want to understand them, but the general form seems quite reasonable to me.
What I described above is a proposal, let's say, for a legal system that doesn't differ at all from our current legal system, except in form (not in the scholastic sense). It is also an attempt to understand its essence.
The advantages of this format over the current one are many:
- less paper, fewer things to read, no endless repetition of legal citations and no interminable dissertations written by illiterate lawyers who destroy the language and everyone's intelligence
- a drastic reduction in the time each judge spends on each case
- a reduction in each judge's power (if every act of human judgment needed in a case can be performed by any judge, without knowledge of the other aspects of the same case, everything goes much faster, and each of these judgments can be made by several different judges, chosen at random)
- a reduction in each judge's pomposity: with less power and simpler duties, a judge no longer needs to be a special person earning millions; he can be an ordinary person, a proto-judge, earning less (which would even make it possible to have more of them and increase the reliability of each judgment)
- judges can work from home, at any time
- a digital case-management system starts to make sense (because it is ridiculous that the current digital system is just a way of passing Word documents back and forth)
- the end of conciliation hearings, which are a monstrosity created only by the need to reduce the number of pending cases and which end up stripping justice of its meaning (the parties are gently pressured to ignore whether their positions are valid or not and to settle, under penalty of the judge getting angry with them later)
Thousands of precautions would have to be taken if such a system were ever to be implemented (ahahah) -- perhaps keeping a traditional form of trial, in person and with a judge or jury aware of the whole situation, but only for cases that get past a certain point, and so on.
Ver também
- P2P reputation thing, for a foundation of an anarchist legal system.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Boardthreads
This was a very badly done service for turning a Trello list into a helpdesk UI.
Surprisingly, it had more paying users than Websites For Trello, which I was working on simultaneously and dedicating much more time to.
The Neo4j database I used for this was a very poor choice; it was probably the cause of all the bugs.
-
@ 3bf0c63f:aefa459d
2024-01-14 13:55:28Why I don't like NIP-26 as a solution for key management
NIP-26 was created out of the needs of the Nostr integration at https://minds.com/. They wanted Minds users to be able to associate their "custodial" Nostr key with an external, self-owned key. NIP-26 looked like a nice fit for the job, because it would allow supporting clients to associate the two identities statelessly (i.e. just by seeing one event published by Minds with a delegation tag on it, the client would be able to associate it with the self-owned external key, without anything else[^1]).
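Roughly, as I recall the spec, the delegation works like this (keys and conditions below are made up for illustration):

```python
import hashlib

# Hypothetical values, for illustration only.
delegatee_pubkey = "aaaa...app-key"          # the custodial (e.g. Minds) key
conditions = "kind=1&created_at<1700000000"  # what the delegatee may sign

# The self-owned key signs (Schnorr over secp256k1) the SHA-256 of this
# string; the signature, the delegator pubkey and the conditions then go
# into a "delegation" tag on every event the delegatee publishes:
#   ["delegation", <delegator pubkey>, <conditions>, <signature>]
token = f"nostr:delegation:{delegatee_pubkey}:{conditions}"
print(hashlib.sha256(token.encode()).hexdigest())
```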
The big selling point of NIP-26 (to me) was that it was fully optional. Clients were free not to implement it and they would not suffer much. They would just see that "bob@minds.com" published this and "bob-self-owned" published that. They would probably know intuitively that these two were the same person, or not, but it wouldn't be an issue. Both would still be identified as Bob and have a picture, a history and so on. Moreover, this wasn't expected to happen a lot; it would mostly be for the small intersection of people who wanted to have their own keys and also happened to be using one of these "custodial Nostr" platforms like Minds.
At some point, though, NIP-26 started to be seen as the solution for key management on Nostr. The idea is that someone will generate a very safe key on a hardware device and guard it as their most precious treasure, without it ever touching the internet, and use it just to sign delegation tags. Then they will use multiple delegation tags, one for each different Nostr app, and maybe rotate them every month or so -- the details are unclear.
This entirely breaks the expectations I previously had for NIP-26, as now these keys become faceless entities that can't be associated with anything except their "master" key (the one in cold storage). So in a world in which most Nostr users use NIP-26 for everything, clients that do not implement it become completely useless, as all they will see is a constant stream of random keys. They won't be able to follow anyone or interact with anyone, as these keys will not identify any concrete person behind them; they will vanish all the time, new keys will show up, and the world will be chaotic. So now every client must implement NIP-26 to be usable at all -- it is not optional anymore.
You may argue that making NIP-26 a de facto mandatory NIP isn't a bad thing and is worth the cost, but I think it breaks a lot of the simplicity of the protocol. It would probably be worth the cost if we knew NIP-26 was an actual complete solution, but it definitely is not, it is partial, and not the most elegant thing in the world. I think key management can be solved in multiple different ways that can all work together or not, but most importantly they can all remain optional.
More thoughts on these multiple ways can be found at Thoughts on Nostr key management.
If I am wrong about all this and we really come to the conclusion that we need a de facto mandatory key delegation method for Nostr, so be it -- but in that case, considering that we will be breaking backwards compatibility anyway, I think there might be a better design than NIP-26, more optimized and easier to implement, though I don't know exactly what it would be. But I really think we shouldn't rush it.
[^1]: as opposed to other suggestions that would also work, but that would require dealing with multiple events -- for example, the external user could publish a new replaceable event, or use `kind:0`, to say they wanted to grandfather the Minds key into their umbrella, while the Minds key would also need to signal its acceptance of that. This also had the problem of requiring changes every time a new replaceable event of that kind was found. Although I am unsure now, at the time William and I agreed this was worse than NIP-26 with the delegation tag.