-
This article aims to complement Lyn Alden's video on YouTube: https://www.youtube.com/watch?v=jk_HWmmwiAs
## The reason why we have broken money
Before the invention of key technologies such as the printing press and electronic communications (even ones as early as the Morse code telegraph), gold had won the competition to be the world's best medium of money.
In fact, it was not just raw gold that became money: rulers and world leaders minted coins to help their economies grow. Gold nuggets were not as easy to transact with as coins with specific imprints and denominated sizes.
These modern technologies, however, created massive efficiencies that allowed us to communicate and perform services faster and more efficiently, while the medium of money could not benefit from these advancements. Gold was heavy, slow, and expensive to move globally, even though requesting and performing services globally no longer had this limitation.
Banks took the initiative and created derivatives of gold: paper and electronic money. These new currencies allowed the economy to continue to grow and evolve, but not without a dark side. Today, no currency is denominated in gold at all; money is backed by nothing, and its inherent value, the paper it is printed on, is worthless too.
Banks and governments eventually transitioned from a money derivative to a system of debt that could be co-opted and controlled for political and personal reasons. Our money today is broken, and it is the cause of more expensive, poorer-quality goods in the economy, a large and ever-growing wealth gap, and many of the follow-on problems that have come with it.
## Bitcoin overcomes the "transfer of hard money" problem
Just as gold coins were created by man, Bitcoin too is a technology created by man. Bitcoin, however, is a much more profound invention, possibly more of a discovery than an invention. Bitcoin has proven to be unbreakable and incorruptible, and by the nature of its own design it has kept its units scarce, inalienable, and counterfeit-proof.
Since Bitcoin is a digital technology, it can be transferred across international borders almost as quickly as information itself. It therefore severely reduces the need for a derivative to represent money in digital trade. As the currencies we use today continue to fare poorly for many people, bitcoin will continue to stand out as hard money that, functionally, happens to work just as well alongside them.
Bitcoin will also always be available to anyone who wishes to earn it directly; even China has been unable to keep its citizens from accessing it. The dollar has traditionally been the currency people turn to when they discover that their local currency is unsustainable; even where the dollar is illegal to use, it is simply used privately and unofficially. But because bitcoin does not require you to trade it at a bank in order to use it across borders and across the web, Bitcoin will remain a viable escape hatch until we one day hit a critical mass where the world has simply adopted it globally and everyone else must adopt it to survive.
Bitcoin has not yet proven that it can support the world at scale. However, it can only be tested through real adoption, and just as gold coins were developed to help gold scale, tools will be developed to help overcome problems as they arise; ideally without the need for another derivative, but if one proves necessary, hopefully one that is more neutral and less corruptible than the derivatives used to represent gold.
## Bitcoin blurs the line between commodity and technology
Bitcoin is a technology: it is a tool that requires human involvement to function, yet surprisingly it does not allow for any concentration of power. Anyone can help facilitate Bitcoin's operations, but no one can take control of its behaviour, its reach, or its prioritisation, because it operates autonomously on a pre-determined, neutral set of rules.
At the same time, its built-in incentive mechanism ensures that people do not have to operate Bitcoin out of the goodness of their hearts. Even though the system cannot be co-opted wholesale, it will not stop operating while there are people motivated to trade their time and resources to keep it running and to earn others' transaction fees. Although it requires humans to operate it, it remains both neutral and sustainable.
Never before have we developed or discovered a technology that could not be co-opted and used by one person or faction against another. For this reason, Bitcoin's units are often described as a commodity: they cannot be usurped or virtually cloned, and they cannot be bent to political biases.
## The dangers of derivatives
A derivative is something created, designed, or developed to represent another thing in order to solve a particular complication or problem. Paper and electronic money, for example, were once derivatives of gold.
In the case of Bitcoin, if you cannot link your units of bitcoin to an "address" for which you personally hold the cryptographically secure key, then you very likely hold a derivative of bitcoin, not bitcoin itself. If you buy bitcoin on an online exchange and do not withdraw it to a wallet you control, then you legally own an electronic derivative of bitcoin.
Bitcoin is a new technology. It has a learning curve, and it will take time for humanity to learn how to comprehend, authenticate, and take control of bitcoin collectively. That said, many people all over the world are already using and relying on Bitcoin natively. Many others will first need to find a need or a desire for a neutral money like bitcoin, and to have been burned by derivatives of it, before they start to understand the difference between the two. Eventually, it will become an essential part of what we regard as common sense.
## Learn for yourself
If you wish to learn more about how to handle bitcoin and avoid derivatives, you can start by searching online for tutorials about "Bitcoin self custody".
There are many options available, some more practical for you, some more practical for others. Don't spend too much time trying to find the perfect solution; practice and learn. You may make mistakes along the way, so be careful not to experiment with large amounts of your bitcoin as you explore new ideas and technologies. It is like learning anything else, such as riding a bicycle: you are sure to fall a few times and scuff the frame, so don't buy a high-performance racing bike while you're still learning to balance.
-
Honestly, the kind=1 content I've indexed is not high quality.
So I added kind=30023 long-form articles, but they are updated far too rarely, and even across multiple relays there aren't many long-form posts.
If searching Nostr is to produce any value, there need to be high-quality articles and news.
Right now there are also a lot of bot-written posts that do nothing but waste space.
https://www.duozhutuan.com currently serves raw material for search engines to crawl; there is no UI for human browsing, which is why it looks rough.
I have no plans to build yet another microblogging web client; there are already too many of those.
What I think the Nostr community still needs to solve is applications. If it stays just microblogging, that feels like a dead end.
Fortunately, there are site builders like npub.pro, which I find interesting.
yakihonne's smart widgets are interesting too.
And I'm already using my own TaskQ5, a distributed task queue system; it works quite well.
-
*Quick context: I wanted to check out Nostr's longform posts and this blog post seemed like a good one to try and mirror. It's originally from my [free to read/share attempt to write a novel](https://untitlednovel.dns7.top/contents/), but this post here is completely standalone - just describing how I used AI image generation to make a small piece of the work.*
Hold on, put your pitchforks down - outside of using Grammarly & Emacs for grammatical corrections - not a single character was generated or modified by computers; a non-insignificant portion of my first draft originated on pen & paper. No AI is ~~weird and crazy~~ imaginative enough to write like I do. The only successful AI contribution you'll find is a single image, the map, which I heavily edited. This post will go over how I generated and modified an image using AI, which I believe brought some value to the work, and cover a few quick thoughts about AI towards the end.
Let's be clear: I can't draw, but I wanted a map which I believed would improve the story I was working on. After getting abysmal results by prompting AI with text only, I decided to use "Diffuse the Rest," a Stable Diffusion tool that allows you to provide a reference image + description to fine-tune what you're looking for. I gave it this Microsoft Paint-looking drawing:
![](https://untitlednovel.dns7.top/img/mapgen/01.avif)
and after a number of outputs, selected this one to work on:
![](https://untitlednovel.dns7.top/img/mapgen/02.avif)
The image is way better than the one I provided, but had I used it as is, I still feel it would have decreased the quality of my work instead of increasing it. After firing up GIMP I cropped out the top and bottom, expanded the ocean and separated the landmasses, then copied the top right corner of the large landmass to replace the bottom left that got cut off. Now we've got something that looks like concept art: not horrible, and it gets the basic idea across, but it's still due for a lot more detail.
![](https://untitlednovel.dns7.top/img/mapgen/03.avif)
The next thing I did was add some texture to make it look more map-like. I duplicated the layer in GIMP and applied the "Cartoon" filter to both layers. The top layer had a much lower effect strength to give it a more textured look, while the lower layer had a higher effect strength that looked a lot like mountains or other terrain features. Creating a layer mask allowed me to brush over spots to reveal the lower layer in certain areas, giving it some much-needed features.
![](https://untitlednovel.dns7.top/img/mapgen/04.avif)
At this point I'd gotten it to where I felt it might improve the work instead of detracting from it - at least once labels and borders were added - but the colors seemed artificial and out of place. Luckily, this is where PhotoFunia could step in and apply a sketch effect to the image.
![](https://untitlednovel.dns7.top/img/mapgen/05.avif)
At this point I was pretty happy with how it was looking; it was close to what I envisioned and was visually appealing while still portraying the information well. All that was left was to make the white background transparent, add some minor details, and add the labels and borders. Below is the exact image I wound up using:
![](https://untitlednovel.dns7.top/img/map.avif)
Overall, I'm very satisfied with how it turned out, and if you're working on a creative project, I'd recommend attempting something like this. It's not a central part of the work, but it improved the chapter a fair bit, and it was doable despite my lacking the talent and not intending to allocate a budget to a free-to-read-and-share story.
#### The AI Generated Elephant in the Room
If you've read my non-fiction writing before, you'll know that I think AI will find its place around the skill floor as opposed to the skill ceiling. As you saw with my input, I have absolutely zero drawing talent, but with some elbow grease and an existing creative direction before and after generating an image, I was able to get something well above what I could have otherwise accomplished. Outside of the lowest common denominators, like stock photos whose sole purpose is making a link preview eye-catching, I doubt AI will be wholesale replacing most creative works anytime soon. I can assure you that I tried numerous times to describe the map without providing a reference image, and if I had used one of those outputs (or even the unedited output after providing the reference image), it would have decreased the quality of my work instead of improving it.
I'm going to go out on a limb and expect that AI image, text, and video will all find their place in slop & generic content (AI slop replacing article spinners and stock photos, respectively) and otherwise be used in a supporting role in various creative endeavors. For people working on projects like mine (e.g. intended budget: $0), it's helpful to have an AI capable of doing legwork - enabling projects to exist or be improved in ways they otherwise wouldn't have been. I also expect it to find its way into more professional settings for grunt work - think a picture frame or a fake TV show playing in the background of an animated project - likely a detail most people wouldn't notice, but one that saves the creators time and money and/or lets them focus on the essential aspects of the work. Beyond that, as I've predicted before: I expect plenty of emails will be generated from a short list of bullet points, only to be summarized by the recipient's AI back into bullet points.
I will also make a prediction counter to what seems mainstream: AI is about to peak for a while. AI image generation started with Google's DeepDream in 2015 - image-recognition software that could be run in reverse to "recognize" patterns where there were none, effectively generating an image from digital noise or an unrelated image. While I'm not an expert by any means, I don't think we're conceptually too far from that a decade later; we just have very fine-tuned tools that produce more coherent images. My guess is that we're close to maxing out how efficiently we can generate images and video in this manner, and that the hard caps on how much creative direction we can exert when using AI - as well as the limits on how long we can keep it coherent (e.g. long videos or a chronologically consistent set of images) - will prevent AI from progressing far beyond what it is today unless/until another breakthrough occurs.
-
## The Rise of Graph RAGs and the Quest for Data Quality
As we enter a new year, it’s impossible to ignore the boom of retrieval-augmented generation (RAG) systems, particularly those leveraging graph-based approaches. The previous year saw a surge in advancements and discussions about Graph RAGs, driven by their potential to enhance large language models (LLMs), reduce hallucinations, and deliver more reliable outputs. Let’s dive into the trends, challenges, and strategies for making the most of Graph RAGs in artificial intelligence.
## Booming Interest in Graph RAGs
Graph RAGs have dominated the conversation in AI circles. With new research papers and innovations emerging weekly, it’s clear that this approach is reshaping the landscape. These systems, especially those developed by tech giants like Microsoft, demonstrate how graphs can:
* **Enhance LLM Outputs:** By grounding responses in structured knowledge, graphs significantly reduce hallucinations.
* **Support Complex Queries:** Graphs excel at managing linked and connected data, making them ideal for intricate problem-solving.
Conferences on linked and connected data have increasingly focused on Graph RAGs, underscoring their central role in modern AI systems. However, the excitement around this technology has brought critical questions to the forefront: How do we ensure the quality of the graphs we’re building, and are they genuinely aligned with our needs?
## Data Quality: The Foundation of Effective Graphs
A high-quality graph is the backbone of any successful RAG system. Constructing these graphs from unstructured data requires attention to detail and rigorous processes. Here’s why:
* **Richness of Entities:** Effective retrieval depends on graphs populated with rich, detailed entities.
* **Freedom from Hallucinations:** Poorly constructed graphs amplify inaccuracies rather than mitigating them.
Without robust data quality, even the most sophisticated Graph RAGs become ineffective. As a result, the focus must shift to refining the graph construction process. Improving data strategy and ensuring meticulous data preparation is essential to unlock the full potential of Graph RAGs.
## Hybrid Graph RAGs and Variations
While standard Graph RAGs are already transformative, hybrid models offer additional flexibility and power. Hybrid RAGs combine structured graph data with other retrieval mechanisms, creating systems that:
* Handle diverse data sources with ease.
* Offer improved adaptability to complex queries.
Exploring these variations can open new avenues for AI systems, particularly in domains requiring structured and unstructured data processing.
## Ontology: The Key to Graph Construction Quality
Ontology — defining how concepts relate within a knowledge domain — is critical for building effective graphs. While this might sound abstract, it’s a well-established field blending philosophy, engineering, and art. Ontology engineering provides the framework for:
* **Defining Relationships:** Clarifying how concepts connect within a domain.
* **Validating Graph Structures:** Ensuring constructed graphs are logically sound and align with domain-specific realities.
Traditionally, ontologists — experts in this discipline — have been integral to large enterprises and research teams. However, not every team has access to dedicated ontologists, leading to a significant challenge: How can teams without such expertise ensure the quality of their graphs?
## How to Build Ontology Expertise in a Startup Team
For startups and smaller teams, developing ontology expertise may seem daunting, but it is achievable with the right approach:
1. **Assign a Knowledge Champion:** Identify a team member with a strong analytical mindset and give them time and resources to learn ontology engineering.
2. **Provide Training:** Invest in courses, workshops, or certifications in knowledge graph and ontology creation.
3. **Leverage Partnerships:** Collaborate with academic institutions, domain experts, or consultants to build initial frameworks.
4. **Utilize Tools:** Introduce ontology development tools like Protégé, OWL, or SHACL to simplify the creation and validation process.
5. **Iterate with Feedback:** Continuously refine ontologies through collaboration with domain experts and iterative testing.
So while it is not always affordable for a startup to keep a dedicated ontologist or knowledge engineer on the team, you can involve consultants or grow "barefoot" experts in-house.
You can read about barefoot experts in my article:
Even startups can achieve robust and domain-specific ontology frameworks by fostering in-house expertise.
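To make the tooling from step 4 concrete, here is a minimal sketch of constraint validation using the Python libraries rdflib and pyshacl (both installable with pip); the entities and the shape are invented for illustration, not taken from any particular project:
```
from rdflib import Graph, Namespace, RDF
from pyshacl import validate

EX = Namespace("http://example.org/")

# Data graph: one employee entity extracted from unstructured text.
data = Graph()
data.add((EX.alice, RDF.type, EX.Employee))
data.add((EX.alice, EX.worksFor, EX.acme))

# Shapes graph: every Employee must have exactly one ex:worksFor link.
shapes = Graph()
shapes.parse(data="""
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/> .
ex:EmployeeShape a sh:NodeShape ;
    sh:targetClass ex:Employee ;
    sh:property [ sh:path ex:worksFor ; sh:minCount 1 ; sh:maxCount 1 ] .
""", format="turtle")

conforms, _, report_text = validate(data, shacl_graph=shapes)
print(conforms)      # True: the data satisfies the constraint
print(report_text)   # human-readable validation report
```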
## How to Find or Create Ontologies
For teams venturing into Graph RAGs, several strategies can help address the ontology gap:
1. **Leverage Existing Ontologies:** Many industries and domains already have open ontologies. For instance:
* **Public Knowledge Graphs:** Resources like Wikidata (the structured knowledge graph behind Wikipedia) offer a wealth of structured knowledge.
* **Industry Standards:** Enterprises such as Siemens have invested in creating and sharing ontologies specific to their fields.
* **Basic Formal Ontology (BFO):** A widely used top-level ontology that can anchor domain-specific enterprise ontologies for processes and structures.
2. **Build In-House Expertise:** If budgets allow, consider hiring knowledge engineers or providing team members with the resources and time to develop expertise in ontology creation.
3. **Utilize LLMs for Ontology Construction:** Interestingly, LLMs themselves can act as a starting point for ontology development (see the sketch after this list):
* **Prompt-Based Extraction:** LLMs can generate draft ontologies by leveraging their extensive training on graph data.
* **Domain Expert Refinement:** Combine LLM-generated structures with insights from domain experts to create tailored ontologies.
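As a concrete illustration of prompt-based extraction, here is a minimal Python sketch. The `call_llm` function is a hypothetical stand-in for whatever LLM client you use, and the prompt and JSON format are assumptions, not a prescribed recipe:
```
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your LLM client of choice."""
    raise NotImplementedError

def draft_ontology(domain_text: str) -> dict:
    # Ask the model for a skeleton ontology as JSON so the draft can be
    # post-processed and reviewed by a domain expert.
    prompt = (
        "Extract a draft ontology from the text below. "
        "Return JSON with 'classes' (names with short definitions) and "
        "'relations' (triples of class, relation-name, class).\n\n"
        + domain_text
    )
    draft = json.loads(call_llm(prompt))
    # Treat the result as a starting point only: refine it with domain
    # experts and validate with formal tools (OWL/SHACL) before use.
    return draft
```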
## Parallel Ontology and Graph Extraction
An emerging approach involves extracting ontologies and graphs in parallel. While this can streamline the process, it presents challenges such as:
* **Detecting Hallucinations:** Differentiating between genuine insights and AI-generated inaccuracies.
* **Ensuring Completeness:** Ensuring no critical concepts are overlooked during extraction.
Teams must carefully validate outputs to ensure reliability and accuracy when employing this parallel method.
## LLMs as Ontologists
While traditionally dependent on human expertise, ontology creation is increasingly supported by LLMs. These models, trained on vast amounts of data, possess inherent knowledge of many open ontologies and taxonomies. Teams can use LLMs to:
* **Generate Skeleton Ontologies:** Prompt LLMs with domain-specific information to draft initial ontology structures.
* **Validate and Refine Ontologies:** Collaborate with domain experts to refine these drafts, ensuring accuracy and relevance.
However, for validation and graph construction, formal tools such as OWL, SHACL, and RDF should be prioritized over LLMs to minimize hallucinations and ensure robust outcomes.
## Final Thoughts: Unlocking the Power of Graph RAGs
The rise of Graph RAGs underscores a simple but crucial correlation: improving graph construction and data quality directly enhances retrieval systems. To truly harness this power, teams must invest in understanding ontologies, building quality graphs, and leveraging both human expertise and advanced AI tools.
As we move forward, the interplay between Graph RAGs and ontology engineering will continue to shape the future of AI. Whether through adopting existing frameworks or exploring innovative uses of LLMs, the path to success lies in a deep commitment to data quality and domain understanding.
Have you explored these technologies in your work? Share your experiences and insights — and stay tuned for more discussions on ontology extraction and its role in AI advancements. Cheers to a year of innovation!
-
## The Four-Layer Framework
### Layer 1: Zoom Out
![](http://hedgedoc.malin.onl/uploads/bf583a95-79b0-4efe-a194-d6a8b80d6f8a.png)
Start by looking at the big picture. What’s the subject about, and why does it matter? Focus on the overarching ideas and how they fit together. Think of this as the 30,000-foot view—it’s about understanding the "why" and "how" before diving into the "what."
**Example**: If you’re learning programming, start by understanding that it’s about giving logical instructions to computers to solve problems.
- **Tip**: Keep it simple. Summarize the subject in one or two sentences and avoid getting bogged down in specifics at this stage.
_Once you have the big picture in mind, it’s time to start breaking it down._
---
### Layer 2: Categorize and Connect
![](http://hedgedoc.malin.onl/uploads/5c413063-fddd-48f9-a65b-2cd374340613.png)
Now it’s time to break the subject into categories—like creating branches on a tree. This helps your brain organize information logically and see connections between ideas.
**Example**: Studying biology? Group concepts into categories like cells, genetics, and ecosystems.
- **Tip**: Use headings or labels to group similar ideas. Jot these down in a list or simple diagram to keep track.
_With your categories in place, you’re ready to dive into the details that bring them to life._
---
### Layer 3: Master the Details
![](http://hedgedoc.malin.onl/uploads/55ad1e7e-a28a-42f2-8acb-1d3aaadca251.png)
Once you’ve mapped out the main categories, you’re ready to dive deeper. This is where you learn the nuts and bolts—like formulas, specific techniques, or key terminology. These details make the subject practical and actionable.
**Example**: In programming, this might mean learning the syntax for loops, conditionals, or functions in your chosen language.
- **Tip**: Focus on details that clarify the categories from Layer 2. Skip anything that doesn’t add to your understanding.
_Now that you’ve mastered the essentials, you can expand your knowledge to include extra material._
---
### Layer 4: Expand Your Horizons
![](http://hedgedoc.malin.onl/uploads/7ede6389-b429-454d-b68a-8bae607fc7d7.png)
Finally, move on to the extra material—less critical facts, trivia, or edge cases. While these aren’t essential to mastering the subject, they can be useful in specialized discussions or exams.
**Example**: Learn about rare programming quirks or historical trivia about a language’s development.
- **Tip**: Spend minimal time here unless it’s necessary for your goals. It’s okay to skim if you’re short on time.
---
## Pro Tips for Better Learning
### 1. Use Active Recall and Spaced Repetition
Test yourself without looking at notes. Review what you’ve learned at increasing intervals—like after a day, a week, and a month. This strengthens memory by forcing your brain to actively retrieve information.
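If you like to automate this, a tiny Python sketch (my own illustration, not part of the method) turns the day/week/month rule into concrete review dates:
```
from datetime import date, timedelta

# Review after a day, a week, and a month, per the tip above.
INTERVALS_DAYS = [1, 7, 30]

def review_dates(studied_on: date) -> list[date]:
    """Return the dates on which to re-test yourself."""
    return [studied_on + timedelta(days=d) for d in INTERVALS_DAYS]

print(review_dates(date.today()))  # e.g. tomorrow, +7 days, +30 days
```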
### 2. Map It Out
Create visual aids like [diagrams or concept maps](https://excalidraw.com/) to clarify relationships between ideas. These are particularly helpful for organizing categories in Layer 2.
### 3. Teach What You Learn
Explain the subject to someone else as if they’re hearing it for the first time. Teaching **exposes any gaps** in your understanding and **helps reinforce** the material.
### 4. Engage with LLMs and Discuss Concepts
Take advantage of tools like ChatGPT or similar large language models to **explore your topic** in greater depth. Use these tools to:
- Ask specific questions to clarify confusing points.
- Engage in discussions to simulate real-world applications of the subject.
- Generate examples or analogies that deepen your understanding.
**Tip**: Use LLMs as a study partner, but don’t rely solely on them. Combine these insights with your own critical thinking to develop a well-rounded perspective.
---
## Get Started
Ready to try the Four-Layer Method? Take 15 minutes today to map out the big picture of a topic you’re curious about—what’s it all about, and why does it matter? By building your understanding step by step, you’ll master the subject with less stress and more confidence.
-
Today I learned how to install [NVapi](https://github.com/sammcj/NVApi) to monitor my GPUs in Home Assistant.
![](https://image.nostr.build/82b86710ef613f285452f4bb6e2a30a16e722db04ec297279c5b476e0c13d9f4.png)
**NVApi** is a lightweight API designed for monitoring NVIDIA GPU utilization and enabling automated power management. It provides real-time GPU metrics, supports integration with tools like Home Assistant, and offers flexible power management and PCIe link speed management based on workload and thermal conditions.
- **GPU Utilization Monitoring**: Utilization, memory usage, temperature, fan speed, and power consumption.
- **Automated Power Limiting**: Adjusts power limits dynamically based on temperature thresholds and total power caps, configurable per GPU or globally.
- **Cross-GPU Coordination**: Total power budget applies across multiple GPUs in the same system.
- **PCIe Link Speed Management**: Controls minimum and maximum PCIe link speeds with idle thresholds for power optimization.
- **Home Assistant Integration**: Uses the built-in RESTful platform and template sensors.
## Getting the Data
```
sudo apt install golang-go
git clone https://github.com/sammcj/NVApi.git
cd NVApi
go run main.go -port 9999 -rate 1
curl http://localhost:9999/gpu
```
Response for a single GPU:
```
[
{
"index": 0,
"name": "NVIDIA GeForce RTX 4090",
"gpu_utilisation": 0,
"memory_utilisation": 0,
"power_watts": 16,
"power_limit_watts": 450,
"memory_total_gb": 23.99,
"memory_used_gb": 0.46,
"memory_free_gb": 23.52,
"memory_usage_percent": 2,
"temperature": 38,
"processes": [],
"pcie_link_state": "not managed"
}
]
```
Response for multiple GPUs:
```
[
{
"index": 0,
"name": "NVIDIA GeForce RTX 3090",
"gpu_utilisation": 0,
"memory_utilisation": 0,
"power_watts": 14,
"power_limit_watts": 350,
"memory_total_gb": 24,
"memory_used_gb": 0.43,
"memory_free_gb": 23.57,
"memory_usage_percent": 2,
"temperature": 36,
"processes": [],
"pcie_link_state": "not managed"
},
{
"index": 1,
"name": "NVIDIA RTX A4000",
"gpu_utilisation": 0,
"memory_utilisation": 0,
"power_watts": 10,
"power_limit_watts": 140,
"memory_total_gb": 15.99,
"memory_used_gb": 0.56,
"memory_free_gb": 15.43,
"memory_usage_percent": 3,
"temperature": 41,
"processes": [],
"pcie_link_state": "not managed"
}
]
```
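Before wiring this into Home Assistant, you may want to sanity-check the endpoint yourself. A minimal Python sketch (standard library only, assuming the same `-port 9999` used above):
```
import json
import urllib.request

# Poll the NVApi endpoint started above and print a per-GPU summary.
URL = "http://localhost:9999/gpu"

with urllib.request.urlopen(URL, timeout=5) as resp:
    gpus = json.load(resp)

for gpu in gpus:
    print(
        f"GPU{gpu['index']} {gpu['name']}: "
        f"{gpu['gpu_utilisation']}% util, "
        f"{gpu['power_watts']}/{gpu['power_limit_watts']} W, "
        f"{gpu['temperature']}°C"
    )
```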
## Start at Boot
Create `/etc/systemd/system/nvapi.service`:
```
[Unit]
Description=Run NVapi
After=network.target
[Service]
Type=simple
Environment="GOPATH=/home/ansible/go"
WorkingDirectory=/home/ansible/NVapi
ExecStart=/usr/bin/go run main.go -port 9999 -rate 1
Restart=always
User=ansible
# Environment="GPU_TEMP_CHECK_INTERVAL=5"
# Environment="GPU_TOTAL_POWER_CAP=400"
# Environment="GPU_0_LOW_TEMP=40"
# Environment="GPU_0_MEDIUM_TEMP=70"
# Environment="GPU_0_LOW_TEMP_LIMIT=135"
# Environment="GPU_0_MEDIUM_TEMP_LIMIT=120"
# Environment="GPU_0_HIGH_TEMP_LIMIT=100"
# Environment="GPU_1_LOW_TEMP=45"
# Environment="GPU_1_MEDIUM_TEMP=75"
# Environment="GPU_1_LOW_TEMP_LIMIT=140"
# Environment="GPU_1_MEDIUM_TEMP_LIMIT=125"
# Environment="GPU_1_HIGH_TEMP_LIMIT=110"
[Install]
WantedBy=multi-user.target
```
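Then reload systemd and enable the service so it starts immediately and on every boot: `sudo systemctl daemon-reload && sudo systemctl enable --now nvapi`. The commented-out `Environment=` lines are optional: they configure the automated power limiting described above, per GPU or globally, and can be uncommented and tuned as needed.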
## Home Assistant
Add the following to Home Assistant's `configuration.yaml` and fully restart HA.
For a single GPU, this works:
```
sensor:
- platform: rest
name: MYPC GPU Information
resource: http://mypc:9999
method: GET
headers:
Content-Type: application/json
value_template: "{{ value_json[0].index }}"
json_attributes:
- name
- gpu_utilisation
- memory_utilisation
- power_watts
- power_limit_watts
- memory_total_gb
- memory_used_gb
- memory_free_gb
- memory_usage_percent
- temperature
scan_interval: 1 # seconds
- platform: template
sensors:
mypc_gpu_0_gpu:
friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} GPU"
value_template: "{{ state_attr('sensor.mypc_gpu_information', 'gpu_utilisation') }}"
unit_of_measurement: "%"
mypc_gpu_0_memory:
friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Memory"
value_template: "{{ state_attr('sensor.mypc_gpu_information', 'memory_utilisation') }}"
unit_of_measurement: "%"
mypc_gpu_0_power:
friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Power"
value_template: "{{ state_attr('sensor.mypc_gpu_information', 'power_watts') }}"
unit_of_measurement: "W"
mypc_gpu_0_power_limit:
friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Power Limit"
value_template: "{{ state_attr('sensor.mypc_gpu_information', 'power_limit_watts') }}"
unit_of_measurement: "W"
mypc_gpu_0_temperature:
friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Temperature"
value_template: "{{ state_attr('sensor.mypc_gpu_information', 'temperature') }}"
unit_of_measurement: "°C"
```
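Note that the template sensors look up `sensor.mypc_gpu_information`, the entity_id Home Assistant derives from the REST sensor's `name` above; if you rename the REST sensor, update the `state_attr()` references to match. A `scan_interval` of 1 second is aggressive, so raise it if you don't need per-second resolution.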
For multiple GPUs:
```
rest:
scan_interval: 1
resource: http://mypc:9999
sensor:
- name: "MYPC GPU0 Information"
value_template: "{{ value_json[0].index }}"
json_attributes_path: "$.0"
json_attributes:
- name
- gpu_utilisation
- memory_utilisation
- power_watts
- power_limit_watts
- memory_total_gb
- memory_used_gb
- memory_free_gb
- memory_usage_percent
- temperature
- name: "MYPC GPU1 Information"
value_template: "{{ value_json[1].index }}"
json_attributes_path: "$.1"
json_attributes:
- name
- gpu_utilisation
- memory_utilisation
- power_watts
- power_limit_watts
- memory_total_gb
- memory_used_gb
- memory_free_gb
- memory_usage_percent
- temperature
- platform: template
sensors:
mypc_gpu_0_gpu:
friendly_name: "MYPC GPU0 GPU"
value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'gpu_utilisation') }}"
unit_of_measurement: "%"
mypc_gpu_0_memory:
friendly_name: "MYPC GPU0 Memory"
value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'memory_utilisation') }}"
unit_of_measurement: "%"
mypc_gpu_0_power:
friendly_name: "MYPC GPU0 Power"
value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'power_watts') }}"
unit_of_measurement: "W"
mypc_gpu_0_power_limit:
friendly_name: "MYPC GPU0 Power Limit"
value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'power_limit_watts') }}"
unit_of_measurement: "W"
mypc_gpu_0_temperature:
friendly_name: "MYPC GPU0 Temperature"
value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'temperature') }}"
unit_of_measurement: "°C"
- platform: template
sensors:
mypc_gpu_1_gpu:
friendly_name: "MYPC GPU1 GPU"
value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'gpu_utilisation') }}"
unit_of_measurement: "%"
mypc_gpu_1_memory:
friendly_name: "MYPC GPU1 Memory"
value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'memory_utilisation') }}"
unit_of_measurement: "%"
mypc_gpu_1_power:
friendly_name: "MYPC GPU1 Power"
value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'power_watts') }}"
unit_of_measurement: "W"
mypc_gpu_1_power_limit:
friendly_name: "MYPC GPU1 Power Limit"
value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'power_limit_watts') }}"
unit_of_measurement: "W"
mypc_gpu_1_temperature:
friendly_name: "MYPC GPU1 Temperature"
value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'temperature') }}"
unit_of_measurement: "°C"
```
Basic entity card:
```
type: entities
entities:
- entity: sensor.mypc_gpu_0_gpu
secondary_info: last-updated
- entity: sensor.mypc_gpu_0_memory
secondary_info: last-updated
- entity: sensor.mypc_gpu_0_power
secondary_info: last-updated
- entity: sensor.mypc_gpu_0_power_limit
secondary_info: last-updated
- entity: sensor.mypc_gpu_0_temperature
secondary_info: last-updated
```
## Ansible Role
```
---
- name: install go
become: true
package:
name: golang-go
state: present
- name: git clone
git:
repo: "https://github.com/sammcj/NVApi.git"
dest: "/home/ansible/NVapi"
update: yes
force: true
# go run main.go -port 9999 -rate 1
- name: install systemd service
become: true
copy:
src: nvapi.service
dest: /etc/systemd/system/nvapi.service
- name: Reload systemd daemons, enable, and restart nvapi
become: true
systemd:
name: nvapi
daemon_reload: yes
enabled: yes
state: restarted
```
-
What would it mean to treat AI as a tool rather than as a person?
Since the launch of ChatGPT, explorations in two directions have accelerated.
The first direction concerns technical capabilities. How large a model can we train? How well can it answer SAT questions? How efficiently can we serve it?
The second direction concerns interaction design. How do we communicate with a model? How can we use it for useful work? What metaphor do we use to reason about it?
The first direction is widely pursued and enormously funded, and for good reason: progress in technical capabilities underlies every possible application. But the second is just as crucial to the field and holds enormous unknowns. We are only a few years into the era of large models. What are the odds that we have already figured out the best ways to use them?
I propose a new mode of interaction, in which models play the role of computer applications (e.g. phone apps): providing a graphical interface, interpreting user input, and updating their state. In this mode, instead of being an "agent" that uses a computer on behalf of a human, the AI can provide a richer and more powerful computing environment for us to use.
### Metaphors for interaction
At the core of an interaction is a metaphor that guides a user's expectations of a system. The early days of computing took metaphors like "desktops", "typewriters", "spreadsheets", and "letters" and turned them into digital equivalents, allowing the user to reason about their behavior. You can leave something on your desk and come back for it; you need an address to send a letter. As we developed cultural knowledge of these devices, the need for these particular metaphors disappeared, and with them the skeuomorphic interface designs that reinforced them. Like a trash can or a pencil, a computer is now a metaphor for itself.
The dominant metaphor for large models today is model-as-person. This is an effective metaphor because people have extensive capabilities that we know intuitively. It implies that we can have a conversation with a model and ask it questions; that the model can collaborate with us on a document or a piece of code; that we can give it a task to carry out on its own and that it will come back when it is finished.
However, treating a model as a person profoundly limits how we think about interacting with it. Human interactions are inherently slow and linear, limited by the bandwidth and turn-taking nature of verbal communication. As we have all experienced, communicating complex ideas in conversation is hard and lossy. When we want precision, we turn to tools instead, using direct manipulation and high-bandwidth visual interfaces to make diagrams, write code, and design CAD models. Because we conceive of models as people, we use them through slow conversations, even though they are perfectly capable of accepting fast, direct input and producing visual results. The metaphors we use constrain the experiences we build, and the model-as-person metaphor is holding us back from exploring the full potential of large models.
For many use cases, and especially for productive work, I believe the future lies in another metaphor: model-as-computer.
### Using an AI like a computer
Under the model-as-computer metaphor, we will interact with large models following the intuitions we have about computer applications (whether on desktop, tablet, or phone). Note that this does not mean the model will be a traditional app any more than the Windows desktop was a literal desk. "Computer application" will be a way for the model to represent itself to us. Instead of acting like a person, the model will act like a computer.
Acting like a computer means producing a graphical interface. In place of the teletype-style linear stream of text provided by ChatGPT, a model-as-computer system will generate something resembling the interface of a modern application: buttons, sliders, tabs, images, plots, and all the rest. This addresses key limitations of the standard model-as-person chat interface:
- **Discoverability.** A good tool suggests its uses. When the only interface is an empty text box, it is up to the user to figure out what to do and to understand the system's limits. The Edit sidebar in Lightroom is a great way to learn photo editing because it doesn't just tell you what this application can do with a photo, but what you might want to do. Similarly, a model-as-computer interface for DALL-E could surface new possibilities for your image generations.
- **Efficiency.** Direct manipulation is faster than writing out a request in words. To continue the Lightroom example, it would be unthinkable to edit a photo by telling a person which sliders to move and by how much. It would take a whole day to ask for a slightly lower exposure and a slightly higher vibrance, just to see how it looks. In the model-as-computer metaphor, the model can create tools that let you communicate what you want more efficiently, and therefore get things done faster.
Unlike a traditional app, this graphical interface is generated by the model on demand. This means that every part of the interface you see is relevant to what you are doing right now, including the specific contents of your work. It also means that if you want a broader or different interface, you can simply ask for it. You could ask DALL-E to produce some editable presets for its settings inspired by famous sketch artists. When you click the Leonardo da Vinci preset, it sets the sliders for highly detailed perspective drawings in black ink. If you click Charles Schulz, it selects low-detail 2D technicolor comics.
### A shape-shifting bicycle for the mind
The model-as-person metaphor has a curious tendency to create distance between the user and the model, mirroring the communication gap between two people that can be narrowed but never fully closed. Because communicating in words is difficult and costly, people tend to divide tasks among themselves into chunks that are large and as independent as possible. Model-as-person interfaces follow this pattern: it is not worth telling a model to add a return statement to your function when it is faster to write it yourself. With the overhead of communication, model-as-person systems are most useful when they can do a whole chunk of work on their own. They do things for you.
This contrasts with how we interact with computers and other tools. Tools produce real-time visual feedback and are controlled through direct manipulation. Their communication overhead is so low that there is no need to specify an independent chunk of work. It makes more sense to keep the human in the loop and direct the tool moment by moment. Like seven-league boots, tools let you go farther with each step, but you are still the one doing the work. They let you do things faster.
Consider the task of building a website using a large model. With today's interfaces, you might treat the model as a contractor or collaborator. You would try to write down in words as much as possible about how you want the site to look, what you want it to say, and what features you want it to have. The model would generate a first draft, you would run it, and then you would give feedback. "Make the logo a little bigger," you would say, and "center that first hero image," and "there needs to be a login button in the header." To get exactly what you want, you will send a very long list of increasingly minute requests.
An alternative model-as-computer interaction would be different: instead of building the website, the model would generate an interface for you to build it, where every user input to that interface queries the large model under the hood. Perhaps when you describe your needs it would create an interface with a sidebar and a preview window. At first the sidebar contains only a few layout sketches you can choose as a starting point. You can click on each one, and the model writes the HTML for a web page using that layout and displays it in the preview window. Now that you have a page to work with, the sidebar gains additional options that affect the page globally, like font pairings and color schemes. The preview acts as a WYSIWYG editor, letting you grab elements and move them around, edit their contents, and so on. Supporting all this is the model, which sees these user actions and rewrites the page to match the changes made. Because the model can generate an interface to help the two of you communicate more efficiently, you can exercise more control over the final product in less time.
The model-as-computer metaphor encourages us to think of the model as a tool to interact with in real time rather than a collaborator to hand tasks to. Instead of replacing an intern or a tutor, it can be a kind of shape-shifting bicycle for the mind, one that is always custom-built exactly for you and the terrain you intend to cross.
### A new paradigm for computing?
Models that can generate interfaces on demand are a brand-new frontier in computing. They may be an entirely new paradigm, in the way they short-circuit the existing application model. Giving end users the power to create and modify apps on the fly fundamentally changes how we interact with computers. In place of a single static application built by a developer, a model will generate an application tailored to the user and their immediate needs. In place of business logic implemented in code, the model will interpret the user's inputs and update the user interface. It is even possible that this kind of generative interface will replace the operating system entirely, generating and managing interfaces and windows on the fly as needed.
At first, generative interfaces will be a toy, useful only for creative exploration and a few other niche applications. After all, nobody would want an email app that occasionally sends emails to your ex and lies about your inbox. But gradually the models will improve. Even as they push further into the space of entirely new experiences, they will slowly become reliable enough to be used for real work.
Small pieces of this future already exist. Years ago Jonas Degrave showed that ChatGPT could do a credible simulation of a Linux command line. Similarly, websim.ai uses an LLM to generate websites on demand as you browse them. Oasis, GameNGen, and DIAMOND train action-conditioned video models on individual video games, letting you play, for example, Doom inside a large model. And Genie 2 generates playable video games from text prompts. Generative interfaces may still sound like a crazy idea, but they are not that crazy.
There are enormous open questions about what all this will look like. Where will generative interfaces first be useful? How will we share and distribute the experiences we create by collaborating with the model, if they exist only as the context of a large model? Would we even want to? What new kinds of experiences will be possible? How will all this work in practice? Will models generate interfaces as code, or produce raw pixels directly?
I don't know the answers yet. We'll have to experiment and find out!
Translated from:\
https://willwhitney.com/computing-inside-ai.html
-
I started a long series of articles about how to model different types of knowledge graphs in the relational model, which makes on-device memory models for AI agents possible. So far in the series:
- we modeled directed graphs
- also graphs of entities
- we even modeled hypergraphs
Last time, we discussed why classical triple and simple knowledge graphs are insufficient for AI agents and complex memory, especially in the domain of time-aware or multi-model knowledge.
So why do we need metagraphs, and what kinds of challenges could they help us solve?
- complex and nested events, temporal context, and temporal relations as edges
- multi-modal and multilingual knowledge
- human-like memory for AI agents, with multiple contexts and relations between pieces of knowledge in neuron-like networks
## MetaGraphs
A metagraph extends the idea of a graph by allowing edges to be graphs themselves. Meta-edges connect sets of nodes, which can themselves be subgraphs. So, at some level, nodes and edges have very similar properties but play different roles in different contexts.
In some cases, edges can even be referenced as nodes.
This approach enables the representation of more complex relationships and hierarchies than a traditional graph structure allows. Let's break down each term to better understand metagraphs and how they differ from hypergraphs and plain graphs.
## Graph Basics
- A standard **graph** has a set of **nodes** (or vertices) and **edges** (connections between nodes).
- Edges are generally simple and typically represent a binary relationship between two nodes.
- For instance, an edge in a social network graph might indicate a “friend” relationship between two people (nodes).
## Hypergraph
- A **hypergraph** extends the concept of an edge by allowing it to connect any number of nodes, not just two.
- Each connection, called a **hyperedge**, can link multiple nodes.
- This feature allows hypergraphs to model more complex relationships involving multiple entities simultaneously. For example, a hyperedge in a hypergraph could represent a project team, connecting all team members in a single relation.
- Despite its flexibility, a hypergraph doesn’t capture hierarchical or nested structures; it only generalizes the number of connections in an edge.
## Metagraph
- A **metagraph** allows the edges to be graphs themselves. This means each edge can contain its own nodes and edges, creating nested, hierarchical structures.
- In a metagraph, an edge could represent a relationship that is itself defined by a graph. For instance, a metagraph could represent a network of organizations where each organization's structure (departments and connections) is represented by its own internal graph and treated as an edge in the larger metagraph.
- This recursive structure allows metagraphs to model complex data with multiple layers of abstraction. They can capture multi-node relationships (as in hypergraphs) as well as detailed, structured information about each relationship; the sketch below makes the distinction concrete.
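To make the distinction concrete, here is a minimal Python sketch of the three structures using plain tuples and dicts; all names are illustrative:

```python
# A plain graph edge connects exactly two nodes.
edge = ("alice", "bob")  # e.g. a "friend" relation

# A hyperedge connects any number of nodes in one relation.
hyperedge = {"project_team": {"alice", "bob", "carol"}}

# A meta-edge is itself a graph: it has its own nodes and edges.
# Here the relation between two organizations is described by the
# internal structure (departments and their links) of each side.
meta_edge = {
    "endpoints": ("org_a", "org_b"),
    "subgraph": {
        "nodes": ["org_a.sales", "org_b.procurement"],
        "edges": [("org_a.sales", "org_b.procurement")],
    },
}
```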
## Named Graphs and Graph of Graphs
As you can see, the structure of a metagraph is quite complex, and it can be hard to model in relational and classical RDF setups. You may well run into a lack of tools and software solutions for your problem.
If you need to model nested graphs, you can use the much simpler model of named graphs, which can take you quite far.
![](https://miro.medium.com/v2/resize:fit:1400/1*t2TLvy8pYmmUnLJUUwwvDQ.png)
The concept of the named graph came from the RDF community, which needed to group some sets of triples. In this way, you form subgraphs inside an existing graph. You could refer to the subgraph as a regular node. This setup simplifies complex graphs, introduces hierarchies, and even adds features and properties of hypergraphs while keeping a directed nature.
It looks complex, but it is not hard to model with a slight modification of a directed graph.
A node can host a graph inside it. Let's reflect this fact with a location attribute on each node: if a node belongs to the main graph, we can set its location to null, or introduce a dedicated main node; it is up to you.
![](https://miro.medium.com/v2/resize:fit:1088/1*agDR_q80JJfxjGyj1bFBqg.png)
Nodes can have edges to nodes in different subgraphs. This structure allows any kind of graph nesting, while edges stay location-free; a relational sketch follows below.
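As a minimal relational sketch of this idea (assuming SQLite; table and column names are illustrative), each node carries a nullable `location` column naming the node whose subgraph it lives in:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE node (
    id       TEXT PRIMARY KEY,
    location TEXT REFERENCES node(id)  -- NULL means the node lives in the main graph
);
CREATE TABLE edge (
    src TEXT NOT NULL REFERENCES node(id),
    dst TEXT NOT NULL REFERENCES node(id)
);
""")

# "berlin_graph" is a node that hosts a subgraph; its members point at it.
conn.executemany("INSERT INTO node VALUES (?, ?)", [
    ("berlin_graph", None),
    ("alexanderplatz", "berlin_graph"),
    ("tv_tower", "berlin_graph"),
    ("paris_graph", None),
    ("eiffel_tower", "paris_graph"),
])
# Edges stay location-free and may cross subgraph boundaries.
conn.execute("INSERT INTO edge VALUES ('tv_tower', 'eiffel_tower')")
```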
## Meta Graphs in Relational Model
Let's make several attempts at modeling different metagraphs under various constraints.
## Directed metagraph where edges are not used as nodes and cannot contain subgraphs
![](https://miro.medium.com/v2/resize:fit:1400/1*xAVf4LeuMHhXynqrwfkNWA.png)
In this case, an edge always points to two sets of nodes (an in-set and an out-set). This introduces the overhead of creating a node set even for a single node, and nothing in the model prevents empty node sets, which may require application-level constraints. A relational sketch of this layout follows below.
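One way this could look in SQL (a sketch with invented table names, not a prescribed schema): every edge references an in-set and an out-set, and the empty-set problem is visible in the schema itself:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE node (id TEXT PRIMARY KEY);
CREATE TABLE node_set (id TEXT PRIMARY KEY);
CREATE TABLE node_set_member (          -- which nodes belong to which set
    set_id  TEXT REFERENCES node_set(id),
    node_id TEXT REFERENCES node(id),
    PRIMARY KEY (set_id, node_id)
);
CREATE TABLE edge (
    id      TEXT PRIMARY KEY,
    in_set  TEXT NOT NULL REFERENCES node_set(id),
    out_set TEXT NOT NULL REFERENCES node_set(id)
);
""")
# Note: nothing here stops a node_set row with no members; that
# constraint has to be enforced at the application level, as noted above.
```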
## Directed Metagraph where edges are not used as nodes and could contain subgraphs
![](https://miro.medium.com/v2/resize:fit:1400/1*Ra5_LtYGlbTidGn3w8gYEg.png)
Adding a node set that models a subgraph living on an edge is easy, but it should be kept separate from the in-vertex and out-vertex sets.
I also do not see a direct need to attach subgraphs to a node, since we can use a node set interchangeably, but the case may still arise.
## Directed Metagraph where edges are used as nodes and could contain subgraphs
As you can see, we operate on node sets all the time. We could simply extend the node set to an element set that includes both node and edge IDs, but then we need UUIDs or some other strategy to distinguish node IDs from edge IDs. We also get a collision of ephemeral edges or ephemeral nodes whenever we want to change an element's role from node to edge or vice versa.
![](https://miro.medium.com/v2/resize:fit:1400/1*1jggQlCU-aYO_wOb2q6EXA.png)
A full-scale metagraph model is far too complex for a relational database, so we need a better model.
With the element-set approach we gain flexibility but lose structural constraints: we can no longer express that an element must have an in-vertex, an out-vertex, or both. That kind of constraint moves to the application level. The other crucial question is about query and retrieval needs.
Any metagraph model should be shaped around the domain and its needs rather than used in this raw form; we built the raw version here for purely theoretical purposes.
-
Hey folks! Today, let’s dive into the intriguing world of neurosymbolic approaches, retrieval-augmented generation (RAG), and personal knowledge graphs (PKGs). Together, these concepts hold much potential for bringing true reasoning capabilities to large language models (LLMs). So, let’s break down how symbolic logic, knowledge graphs, and modern AI can come together to empower future AI systems to reason like humans.
## The Neurosymbolic Approach: What It Means
Neurosymbolic AI combines two historically separate streams of artificial intelligence: symbolic reasoning and neural networks. Symbolic AI uses formal logic to process knowledge, similar to how we might solve problems or deduce information. On the other hand, neural networks, like those underlying GPT-4, focus on learning patterns from vast amounts of data — they are probabilistic statistical models that excel in generating human-like language and recognizing patterns but often lack deep, explicit reasoning.
While GPT-4 can produce impressive text, it’s still not very effective at reasoning in a truly logical way. Its foundation, transformers, allows it to excel in pattern recognition, but the models struggle with reasoning because, at their core, they rely on statistical probabilities rather than true symbolic logic. This is where neurosymbolic methods and knowledge graphs come in.
## Symbolic Calculations and the Early Vision of AI
If we take a step back to the 1950s, the vision for artificial intelligence was very different. Early AI research was all about symbolic reasoning — where computers could perform logical calculations to derive new knowledge from a given set of rules and facts. Languages like **Lisp** emerged to support this vision, enabling programs to represent data and code as interchangeable symbols. Lisp was designed to be homoiconic, meaning it treated code as manipulatable data, making it capable of self-modification — a huge leap towards AI systems that could, in theory, understand and modify their own operations.
## Lisp: The Earlier AI-Language
**Lisp**, short for “LISt Processor,” was developed by John McCarthy in 1958, and it became the cornerstone of early AI research. Lisp’s power lay in its flexibility and its use of symbolic expressions, which allowed developers to create programs that could manipulate symbols in ways that were very close to human reasoning. One of the most groundbreaking features of Lisp was its ability to treat code as data, known as homoiconicity, which meant that Lisp programs could introspect and transform themselves dynamically. This ability to adapt and modify its own structure gave Lisp an edge in tasks that required a form of self-awareness, which was key in the early days of AI when researchers were exploring what it meant for machines to “think.”
Lisp was not just a programming language—it represented the vision for artificial intelligence, where machines could evolve their understanding and rewrite their own programming. This idea formed the conceptual basis for many of the self-modifying and adaptive algorithms that are still explored today in AI research. Despite its decline in mainstream programming, Lisp’s influence can still be seen in the concepts used in modern machine learning and symbolic AI approaches.
## Prolog: Formal Logic and Deductive Reasoning
In the 1970s, **Prolog** was developed: a language focused on formal logic and deductive reasoning. Unlike Lisp, which is based on lambda calculus, Prolog operates on formal logic rules, allowing it to perform deductive reasoning and solve logical puzzles. This made Prolog an ideal candidate for expert systems that needed to follow a sequence of logical steps, such as medical diagnostics or strategic planning.
Prolog, like Lisp, allows symbols to be represented, understood, and used in calculations, making it another homoiconic language that supports reasoning. Prolog's strength lies in its rule-based structure, which is well suited to tasks that require logical inference and backtracking. These features made it a powerful tool for expert systems and AI research in the 1970s and 1980s.
The language is declarative in nature, meaning that you define the problem, and Prolog figures out **how** to solve it. By using formal logic and setting constraints, Prolog systems can derive conclusions from known facts, making it highly effective in fields requiring explicit logical frameworks, such as legal reasoning, diagnostics, and natural language understanding. These symbolic approaches were later overshadowed during the AI winter — but the ideas never really disappeared. They just evolved.
## Solvers and Their Role in Complementing LLMs
One of the most powerful features of **Prolog** and similar logic-based systems is their use of **solvers**. Solvers are mechanisms that can take a set of rules and constraints and automatically find solutions that satisfy these conditions. This capability is incredibly useful when combined with LLMs, which excel at generating human-like language but need help with logical consistency and structured reasoning.
For instance, imagine a scenario where an LLM needs to answer a question involving multiple logical steps or a complex query that requires deducing facts from various pieces of information. In this case, a **solver** can derive valid conclusions based on a given set of logical rules, providing structured answers that the LLM can then articulate in natural language. This allows the LLM to retrieve information and ensure the logical integrity of its responses, leading to much more robust answers.
Solvers are also ideal for handling **constraint satisfaction problems** — situations where multiple conditions must be met simultaneously. In practical applications, this could include scheduling tasks, generating optimal recommendations, or even diagnosing issues where a set of symptoms must match possible diagnoses. Prolog’s solver capabilities and LLM’s natural language processing power can make these systems highly effective at providing intelligent, rule-compliant responses that traditional LLMs would struggle to produce alone.
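To illustrate what a solver contributes, here is a deliberately tiny brute-force constraint solver in Python; the scheduling problem and all names are invented for the example. A production system would use a real solver (Prolog, an SMT solver, or a CSP library), but the shape of the interaction is the same: the solver finds assignments that satisfy the rules, and an LLM verbalizes them.

```python
from itertools import product

def solve(variables, domains, constraints):
    """Yield every assignment that satisfies all constraints (brute-force sketch)."""
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        if all(check(assignment) for check in constraints):
            yield assignment

# Toy scheduling problem: two meetings, three slots, no overlap,
# and the review must happen after the planning meeting.
variables = ["planning", "review"]
domains = {"planning": [9, 10, 11], "review": [9, 10, 11]}
constraints = [
    lambda a: a["planning"] != a["review"],
    lambda a: a["review"] > a["planning"],
]

for solution in solve(variables, domains, constraints):
    print(solution)  # e.g. {'planning': 9, 'review': 10}: facts an LLM can verbalize
```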
By integrating **neurosymbolic methods** that utilize solvers, we can provide LLMs with a form of deductive reasoning that is missing from pure deep-learning approaches. This combination has the potential to significantly improve the quality of outputs for use-cases that require explicit, structured problem-solving, from legal queries to scientific research and beyond. Solvers give LLMs the backbone they need to not just generate answers but to do so in a way that respects logical rigor and complex constraints.
## Graph of Rules for Enhanced Reasoning
Another powerful concept that complements LLMs is using a **graph of rules**. A graph of rules is essentially a structured collection of logical rules that interconnect in a network-like structure, defining how various entities and their relationships interact. This structured network allows for complex reasoning and information retrieval, as well as the ability to model intricate relationships between different pieces of knowledge.
In a **graph of rules**, each node represents a rule, and the edges define relationships between those rules — such as dependencies or causal links. This structure can be used to enhance LLM capabilities by providing them with a formal set of rules and relationships to follow, which improves logical consistency and reasoning depth. When an LLM encounters a problem or a question that requires multiple logical steps, it can traverse this graph of rules to generate an answer that is not only linguistically fluent but also logically robust.
For example, in a healthcare application, a graph of rules might include nodes for medical symptoms, possible diagnoses, and recommended treatments. When an LLM receives a query regarding a patient’s symptoms, it can use the graph to traverse from symptoms to potential diagnoses and then to treatment options, ensuring that the response is coherent and medically sound. The graph of rules guides reasoning, enabling LLMs to handle complex, multi-step questions that involve chains of reasoning, rather than merely generating surface-level responses.
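A minimal sketch of such a rule graph in Python (the medical content is invented and drastically simplified): rules fire when their premises are satisfied, and the chain of fired rules is exactly the reasoning path an LLM could explain.

```python
# Each rule maps a set of premises to a conclusion; edges between rules
# arise implicitly when one rule's conclusion is another rule's premise.
rules = [
    ("flu_suspected", {"fever", "cough"}),
    ("rest_and_fluids", {"flu_suspected"}),
]

def forward_chain(facts: set[str]) -> set[str]:
    """Repeatedly fire rules whose premises hold (deductive closure)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conclusion, premises in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"fever", "cough"}))
# {'fever', 'cough', 'flu_suspected', 'rest_and_fluids'}
```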
Graphs of rules also enable **modular reasoning**, where different sets of rules can be activated based on the context or the type of question being asked. This modularity is crucial for creating adaptive AI systems that can apply specific sets of logical frameworks to distinct problem domains, thereby greatly enhancing their versatility. The combination of **neural fluency** with **rule-based structure** gives LLMs the ability to conduct more advanced reasoning, ultimately making them more reliable and effective in domains where accuracy and logical consistency are critical.
By implementing a graph of rules, LLMs are empowered to perform **deductive reasoning** alongside their generative capabilities, creating responses that are not only compelling but also logically aligned with the structured knowledge available in the system. This further enhances their potential applications in fields such as law, engineering, finance, and scientific research — domains where logical consistency is as important as linguistic coherence.
## Enhancing LLMs with Symbolic Reasoning
Now, with LLMs like GPT-4 being mainstream, there is an emerging need to add real reasoning capabilities to them. This is where **neurosymbolic approaches** shine. Instead of pitting neural networks against symbolic reasoning, these methods combine the best of both worlds. The neural aspect provides language fluency and recognition of complex patterns, while the symbolic side offers real reasoning power through formal logic and rule-based frameworks.
**Personal Knowledge Graphs (PKGs)** come into play here as well. Knowledge graphs are data structures that encode entities and their relationships — they’re essentially semantic networks that allow for structured information retrieval. When integrated with neurosymbolic approaches, LLMs can use these graphs to answer questions in a far more contextual and precise way. By retrieving relevant information from a knowledge graph, they can ground their responses in well-defined relationships, thus improving both the relevance and the logical consistency of their answers.
Imagine combining an LLM with a **graph of rules** that allows it to reason through the relationships encoded in a personal knowledge graph. This could involve using **deductive databases** to form a sophisticated way to represent and reason with symbolic data, essentially constructing a powerful hybrid system that uses LLM capabilities for language fluency and rule-based logic for structured problem-solving.
## My Research on Deductive Databases and Knowledge Graphs
I recently did some research on modeling **knowledge graphs using deductive databases**, such as DataLog — which can be thought of as a limited, data-oriented version of Prolog. What I’ve found is that it’s possible to use formal logic to model knowledge graphs, ontologies, and complex relationships elegantly as rules in a deductive system. Unlike classical RDF or traditional ontology-based models, which sometimes struggle with complex or evolving relationships, a deductive approach is more flexible and can easily support dynamic rules and reasoning.
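For a flavor of this, here is a naive bottom-up Datalog-style evaluation in Python (a sketch; the facts are invented). The recursive ancestor rule is exactly the kind of thing that is awkward to express in plain RDF but natural in a deductive database:

```python
# Facts: parent(child, parent) edges of a tiny knowledge graph.
parent_facts = {("ada", "byron"), ("byron", "gordon")}

# Rules, in Datalog spirit:
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
def ancestors(parents):
    ancestor = set(parents)
    changed = True
    while changed:  # naive bottom-up evaluation to a fixpoint
        changed = False
        for (x, y) in parents:
            for (y2, z) in list(ancestor):
                if y == y2 and (x, z) not in ancestor:
                    ancestor.add((x, z))
                    changed = True
    return ancestor

print(ancestors(parent_facts))  # includes ('ada', 'gordon'): derived, not stored
```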
**Prolog** and similar logic-driven frameworks can complement LLMs by handling the parts of reasoning where explicit rule-following is required. LLMs can benefit from these rule-based systems for tasks like entity recognition, logical inferences, and constructing or traversing knowledge graphs. We can even create a **graph of rules** that governs how relationships are formed or how logical deductions can be performed.
The future is really about creating an AI that is capable of both deep contextual understanding (using the powerful generative capacity of LLMs) and true reasoning (through symbolic systems and knowledge graphs). With the neurosymbolic approach, these AIs could be equipped not just to generate information but to explain their reasoning, form logical conclusions, and even improve their own understanding over time — getting us a step closer to true artificial general intelligence.
## Why It Matters for LLM Employment
Using **neurosymbolic RAG (retrieval-augmented generation)** in conjunction with personal knowledge graphs could revolutionize how LLMs work in real-world applications. Imagine an LLM that understands not just language but also the relationships between different concepts — one that can navigate, reason, and explain complex knowledge domains by actively engaging with a personalized set of facts and rules.
This could lead to practical applications in areas like healthcare, finance, legal reasoning, or even personal productivity — where LLMs can help users solve complex problems logically, providing relevant information and well-justified reasoning paths. The combination of **neural fluency** with **symbolic accuracy and deductive power** is precisely the bridge we need to move beyond purely predictive AI to truly intelligent systems.
Let's explore these ideas further if you’re as fascinated by this as I am. Feel free to reach out, follow my YouTube channel, or check out some articles I’ll link below. And if you’re working on anything in this field, I’d love to collaborate!
Until next time, folks. Stay curious, and keep pushing the boundaries of AI!
-
## Introduction: Personal Knowledge Graphs and Linked Data
We will explore the world of personal knowledge graphs and discuss how they can be used to model complex information structures. Personal knowledge graphs aren’t just abstract collections of nodes and edges—they encode meaningful relationships, contextualizing data in ways that enrich our understanding of it. While the core structure might be a directed graph, we layer semantic meaning on top, enabling nuanced connections between data points.
The origin of knowledge graphs is deeply tied to concepts from linked data and the semantic web, ideas that emerged to better link scattered pieces of information across the web. This approach created an infrastructure where data islands could connect — facilitating everything from more insightful AI to improved personal data management.
In this article, we will explore how these ideas have evolved into tools for modeling AI’s semantic memory and look at how knowledge graphs can serve as a flexible foundation for encoding rich data contexts. We’ll specifically discuss three major paradigms: RDF (Resource Description Framework), property graphs, and a third way of modeling entities as graphs of graphs. Let’s get started.
## Intro to RDF
The Resource Description Framework (RDF) has been one of the fundamental standards for linked data and knowledge graphs. RDF allows data to be modeled as triples: subject, predicate, and object. Essentially, you can think of it as a structured way to describe relationships: “X has a Y called Z.” For instance, “Berlin has a population of 3.5 million.” This modeling approach is quite flexible because RDF uses unique identifiers — usually URIs — to point to data entities, making linking straightforward and coherent.
RDFS, or RDF Schema, extends RDF to provide a basic vocabulary to structure the data even more. This lets us describe not only individual nodes but also relationships among types of data entities, like defining a class hierarchy or setting properties. For example, you could say that “Berlin” is an instance of a “City” and that cities are types of “Geographical Entities.” This kind of organization helps establish semantic meaning within the graph.
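Here is that example as actual triples, using the rdflib Python library (assuming rdflib 6+ is installed; the example.org namespace is illustrative):

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

# "Berlin has a population of 3.5 million" as a triple, plus RDFS typing:
# Berlin is a City, and City is a subclass of GeographicalEntity.
g.add((EX.Berlin, EX.population, Literal(3_500_000)))
g.add((EX.Berlin, RDF.type, EX.City))
g.add((EX.City, RDFS.subClassOf, EX.GeographicalEntity))

print(g.serialize(format="turtle"))
```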
## RDF and Advanced Topics
## Lists and Sets in RDF
RDF also provides tools to model more complex data structures such as lists and sets, enabling the grouping of nodes. This extension makes it easier to model more natural, human-like knowledge, for example, describing attributes of an entity that may have multiple values. By adding RDF Schema and OWL (Web Ontology Language), you gain even more expressive power — being able to define logical rules or even derive new relationships from existing data.
## Graph of Graphs
A significant feature of RDF is the ability to form complex nested structures, often referred to as graphs of graphs. This allows you to create “named graphs,” essentially subgraphs that can be independently referenced. For example, you could create a named graph for a particular dataset describing Berlin and another for a different geographical area. Then, you could connect them, allowing for more modular and reusable knowledge modeling.
## Property Graphs
While RDF provides a robust framework, it’s not always the easiest to work with due to its heavy reliance on linking everything explicitly. This is where property graphs come into play. Property graphs are less focused on linking everything through triples and allow more expressive properties directly within nodes and edges.
For example, instead of using triples to represent each detail, a property graph might let you store all properties about an entity (e.g., “Berlin”) directly in a single node. This makes property graphs more intuitive for many developers and engineers because they more closely resemble object-oriented structures: you have entities (nodes) that possess attributes (properties) and are connected to other entities through relationships (edges).
The significant benefit here is a condensed representation, which speeds up traversal and queries in some scenarios. However, this also introduces a trade-off: while property graphs are more straightforward to query and maintain, they lack some complex relationship modeling features RDF offers, particularly when connecting properties to each other.
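The contrast is easy to see in a sketch (plain Python, with invented field values): what RDF spreads across several triples, a property graph keeps on a single node, and edges can carry properties too.

```python
# RDF style: every detail is its own triple.
triples = [
    ("Berlin", "population", 3_500_000),
    ("Berlin", "country", "Germany"),
    ("Berlin", "founded", 1237),
]

# Property-graph style: the node carries its attributes directly,
# and relationships (edges) can hold properties of their own.
node = {"id": "Berlin", "population": 3_500_000, "country": "Germany", "founded": 1237}
edge = {"src": "Berlin", "dst": "Germany", "type": "capital_of", "since": 1871}
```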
## Graph of Graphs and Subgraphs for Entity Modeling
A third approach, which takes elements from both RDF and property graphs, involves modeling entities using subgraphs or nested graphs. In this model, each entity can be represented as a graph. This allows for a detailed and flexible description of attributes without exploding every detail into individual triples or lumping them all together into properties.
For instance, consider a person entity with a complex employment history. Instead of representing every employment detail in one node (as in a property graph), or as several linked nodes (as in RDF), you can treat the employment history as a subgraph. This subgraph could then contain nodes for different jobs, each linked with specific properties and connections. This approach keeps the complexity where it belongs and provides better flexibility when new attributes or entities need to be added.
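A sketch of that person entity in plain Python (all names invented): the employment history is itself a small graph hanging off one attribute.

```python
person = {
    "id": "person:alice",
    "employment_history": {  # the attribute is itself a graph
        "nodes": [
            {"id": "job:acme", "title": "Engineer", "from": 2015, "to": 2019},
            {"id": "job:globex", "title": "Lead", "from": 2019, "to": None},
        ],
        "edges": [("job:acme", "promoted_into", "job:globex")],
    },
}
```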
## Hypergraphs and Metagraphs
When discussing more advanced forms of graphs, we encounter hypergraphs and metagraphs. These take the idea of relationships to a new level. A hypergraph allows an edge to connect more than two nodes, which is extremely useful when modeling scenarios where relationships aren’t just pairwise. For example, a “Project” could connect multiple “People,” “Resources,” and “Outcomes,” all in a single edge. This way, hypergraphs help in reducing the complexity of modeling high-order relationships.
Metagraphs, on the other hand, enable nodes and edges to themselves be represented as graphs. This is an extremely powerful feature when we consider the needs of artificial intelligence, as it allows for the modeling of relationships between relationships, an essential aspect for any system that needs to capture not just facts, but their interdependencies and contexts.
## Balancing Structure and Properties
One of the recurring challenges when modeling knowledge is finding the balance between structure and properties. With RDF, you get high flexibility and standardization, but complexity can quickly escalate as you decompose everything into triples. Property graphs simplify the representation by using attributes but lose out on the depth of connection modeling. Meanwhile, the graph-of-graphs approach and hypergraphs offer advanced modeling capabilities at the cost of increased computational complexity.
So, how do you decide which model to use? It comes down to your use case. RDF and nested graphs are strong contenders if you need deep linkage and are working with highly variable data. For more straightforward, engineer-friendly modeling, property graphs shine. And when dealing with very complex multi-way relationships or meta-level knowledge, hypergraphs and metagraphs provide the necessary tools.
The key takeaway is that no single approach is perfect. It's all about the modeling goals: how do you want to query the graph, which relationships are meaningful, and how much complexity are you willing to manage?
## Conclusion
Modeling AI semantic memory using knowledge graphs is a challenging but rewarding process. The different approaches — RDF, property graphs, and advanced graph modeling techniques like nested graphs and hypergraphs — each offer unique strengths and weaknesses. Whether you are building a personal knowledge graph or scaling up to AI that integrates multiple streams of linked data, it’s essential to understand the trade-offs each approach brings.
In the end, the choice of representation comes down to the nature of your data and your specific needs for querying and maintaining semantic relationships. The world of knowledge graphs is vast, with many tools and frameworks to explore. Stay connected and keep experimenting to find the balance that works for your projects.
-
Today's topic: temporal semantics and **time-aware knowledge graphs**. We have different memory models for artificial intelligence agents. We all try to mimic somehow how the brain works, or at least how the declarative memory of the brain works. We have the split into **episodic memory** and **semantic memory**. And we also have a lot of theories, right?
## Declarative Memory of the Human Brain
How is semantic memory formed? We know that the brain stores semantic memory in a form quite close to the concept behind personal knowledge graphs: connected entities that form relations with each other. So far, so good. And then we have a lot of concepts for how episodic memory and our experiences get transformed into semantic memory:
- hippocampus indexing and retrieval
- semanticization of episodic memories
- episodic-semantic shift theory
They all give a different perspective on how different parts of declarative memory cooperate.
We know that episodic memories get semanticized over time: you end up with semantic knowledge without the notion of time, while the episodic memory itself just decays.
But, you know, it’s still an open question:
> do we want to mimic an AI agent’s memory as a human brain memory, or do we want to create something different?
It’s an open question to which we have no good answer. And if you go to the theory of neuroscience and check how episodic and semantic memory interfere, you will still find a lot of theories, yeah?
Some of them say that the hippocampus keeps the indexes of the memory. Others say that you semanticize episodic memory. Others say that a separate process digests episodes and experience into semantics. But all of them agree that these are operationally two separate kinds of memory, even two separate regions of the brain, and that semantic memory is, let's say, more protected.
So it's harder to forget semantic facts than episodes. And what I've been thinking about for a long time is this mix of time and semantic memory.
## Temporal Semantics
It’s memory about the facts, but you somehow mix the time information with the semantics. I already described a lot of things, including how we could combine time with knowledge graphs and how people do it.
There are multiple ways we could persist such information, but we all hit the wall because the complexity of time and the semantics of time are highly complex concepts.
## Time in a semantic context is not a timestamp
What I mean is: when you have a fact and you just mention that you were there at some particular moment, say 15:40 on Monday, it's already ambiguous, because we don't know which Monday. So you would need to give the exact date, but usually you do not have experiences like that.
You do not record your memories like that, unless you do journaling and so on. Usually you have no direct time references; you could only say that you were there, and there was some event, and so on.
Somehow we form a chain of events that connect with each other and that may, if we are lucky, be anchored to some period of time. This means we cannot easily represent temporally aware information as just a timestamp or a validity interval.
For sure, validity in knowledge graphs (a simple quintuple with start and end dates, sketched below) is a big topic, and it can solve many of the time cases. It's super simple: you give the start and end dates and you are done. I like the simplicity of this idea. But it does not cover facts that carry only relative or indirect temporal context, and the bigger problem is that in most cases we simply don't have these timestamps at all. We don't know when a piece of information starts or stops being valid, and the model fits poorly for processes, ongoing activities, and recurrent events.
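In code, the validity-interval idea is just a quintuple, and its failure mode is visible immediately (a sketch; the values are invented):

```python
from datetime import date

# (subject, predicate, object, valid_from, valid_to)
fact = ("alice", "works_at", "acme", date(2015, 3, 1), date(2019, 6, 30))

# An indirect or relative memory has no such anchors, which is exactly
# where this simple model breaks down:
vague_fact = ("alice", "met", "bob", None, None)  # "at some event, some Monday"
```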
I'm thinking more about the time of semantics, where the time model is a **hybrid clock** or some **global clock** that gives a partial ordering of events. It means you have a chain of experiences and a chain of facts, each with its own time context.
We could deduce the time from this chain of events, but that is a big, big research topic. What I actually want to achieve is not a separation into episodic and semantic memory; it's something in between.
## Blockchain of connected events and facts
I call it temporal-aware semantics, or time-aware knowledge graphs, where we encode a semantic fact together with its time component. I doubt that time should be a simple timestamp or a region between two timestamps. For me, it is more a chain of facts that have a partial order and form a blockchain-like database, or a partially ordered acyclic graph of facts that are temporally connected. We could have some notion of time that is understandable to the agent, and a model that lets us order the events, focus on what the agent knows, order this time knowledge, and build the chains of events.
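As one concrete possibility, here is a minimal hybrid logical clock in Python. The algorithm is the standard HLC recipe (physical time plus a logical counter); attaching its timestamps to facts, as at the bottom, is my own illustration rather than an established design.

```python
import time

class HybridLogicalClock:
    """Minimal hybrid logical clock: physical time plus a logical counter."""

    def __init__(self):
        self.l = 0  # last observed physical time
        self.c = 0  # logical counter for events within the same tick

    def now(self) -> tuple[int, int]:
        """Timestamp a local event."""
        pt = time.time_ns()
        if pt > self.l:
            self.l, self.c = pt, 0
        else:
            self.c += 1
        return (self.l, self.c)

    def observe(self, other: tuple[int, int]) -> tuple[int, int]:
        """Merge a timestamp received from another agent or fact chain."""
        pt = time.time_ns()
        l2, c2 = other
        m = max(pt, self.l, l2)
        if m == self.l == l2:
            self.c = max(self.c, c2) + 1
        elif m == self.l:
            self.c += 1
        elif m == l2:
            self.c = c2 + 1
        else:
            self.c = 0
        self.l = m
        return (self.l, self.c)

clock = HybridLogicalClock()
fact_a = ("went_to", "conference", clock.now())
fact_b = ("met", "bob", clock.now())  # ordered after fact_a by construction
```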
## Time anchors
We may have particular time anchors in the chain that let us assign more concrete times to the rest of the events. But it's still an open research topic. Temporal semantics splits into a couple of domains. One domain is how to add time to knowledge graphs; we already have many different solutions, and I described them in my previous articles.
Another domain is the agent's memory and how an artificial intelligence's memory treats time. This one is much more complex, because here we cannot operate with simple timestamps. We need a representation of time that is understandable both by the model and by the agent that will work with it. And that is a much bigger research topic.
-
> All credit to Guns Magazine. Read the full issue here ⬇️
[February 1970 PDF](https://gunsmagazine.com/wp-content/uploads/2020/03/G0270.pdf)
---
![1](https://image.nostr.build/3333534147c5cf8f8e0f3ca2d1868bcabf9e6adfb8b9fbee141e5253fc55ab1d.png)
![2](https://image.nostr.build/20f9f79b7be3ac02a39ff944eecfee88cffe0cba27f25c3768beea4367e0ed7a.png)
![3](https://image.nostr.build/6fe5a415fd8cf9814fd81cb84a79734b5935cbfcbdb7fb7685945a61e3c83e11.png)
![4](https://image.nostr.build/68d27f84844793d1e5de01e5bdd5600b8a63df5a1a56e3b9d03950bc0de53c00.png)
![5](https://image.nostr.build/3d4c7d4d868445dc6f8239aba15a48a1eb7d1f5466613f5ececf03d449ff097d.png)
![6](https://image.nostr.build/9e66933281348a98670107b08e9e2bd1cbc35d6e100e410f38f50e3e4c525e84.png)
![7](https://image.nostr.build/6221c7ce945f299544d6b9d6deef062fe7ba10989541008bc7fdd4d5b0367223.png)
![8](https://image.nostr.build/e06c087594be2781489b4d97bbb3e842e3ffe58898cacfd3272bf1552e5e54ff.png)
![9](https://image.nostr.build/18f73db317110fa0a2b31afcbece18e2557fa0a7fe5f8939bf36989d9ec56e79.png)
![10](https://image.nostr.build/603bd401a2ad39e832a0e5d3787edb81946030b67b76dd109724611f400945eb.png)
![11](https://image.nostr.build/3ba5e1f59a875d5e5cd0123711b2eafb95b71f709692c35502d491467891b9df.png)
![12](https://image.nostr.build/55e2e4914868f3ed22a9d7ce2c510089b97af39057e9ba8ceae6bef6a93c5921.png)
![13](https://image.nostr.build/c3efd19ee1911b1f2668be3e8613a3bb66e0d7dc8281fb565d200c57f3c9956c.png)
![14](https://image.nostr.build/2e52d7db5e3c5cedc5e6b410dae683acd2dcaffc1a08d537804d0d6e62cd14b9.png)
![15](https://image.nostr.build/a7063f57fdab360c70ee184b0a50b38415fd2cd6fc9be8d97428ee78cc9082f4.png)
![16](https://image.nostr.build/d619130de894035a3beaf67699ca4c1378eb85a4583ddb74397113759f4970aa.png)
![17](https://image.nostr.build/93a23c91fdeb34ebad135040c6c34597fcd6923810768f993b9efff40d7694ea.png)
![18](https://image.nostr.build/0a7007fd882bbadc00bc837c05ffe8a024584b4f70e097b65d014ab4e4cd6240.png)
![19](https://image.nostr.build/3671fa449bc5072e87c8e5bc52109138e4ac07a2626305d88c85cb61ef43c7c2.png)
![20](https://image.nostr.build/816799371bc375dc43a98ff78a573dfe4156c5d854db4487c0b20dc2c9fbb8c4.png)
---
### 📰 Past Magazine Mondays 📰
[001: May 1963](nostr:note1r5ve5en9tyv38hathy2twhm9h4dn7tq7fgradzkazskxyxtckysqeqxyzm)
[002: August 1969](nostr:note1zkeur68w9h8ljswp4a4xc45exfv725v6vudqdhyukqz6kz37vdaq097f9z)
---
### ⬇️ Follow 1776 HODL ⬇️
[Website](https://1776.npub.pro)
[Nostr](nostr:npub1wugsgfcs7edz70qtc56khmxv7js90mp2hwrfu46vkk4fda3puywq3xaz5a)
-
Last week, an investigation by Reuters revealed that Chinese researchers have been using open-source AI tools to build nefarious-sounding models that may have some military application.
The reporting purports that adversaries in the Chinese Communist Party and its military wing are taking advantage of the liberal software licensing of American innovations in the AI space, which could someday have capabilities to presumably harm the United States.
> In a June paper reviewed by [Reuters](https://www.reuters.com/technology/artificial-intelligence/chinese-researchers-develop-ai-model-military-use-back-metas-llama-2024-11-01/), six Chinese researchers from three institutions, including two under the People’s Liberation Army’s (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta’s Llama as a base for what it calls “ChatBIT”.
>
> The researchers used an earlier Llama 13B large language model (LLM) from Meta, incorporating their own parameters to construct a military-focused AI tool to gather and process intelligence, and offer accurate and reliable information for operational decision-making.
While I’m doubtful that today’s existing chatbot-like tools will be the ultimate battlefield for a new geopolitical war (cue up the computer-simulated war from the *Star Trek* episode “[A Taste of Armageddon](https://en.wikipedia.org/wiki/A_Taste_of_Armageddon)”), this recent exposé requires us to revisit why large language models are released as open-source code in the first place.
Added to that, should it matter that an adversary is having a poke around and may ultimately use them for some purpose we may not like, whether that be China, Russia, North Korea, or Iran?
The number of open-source AI LLMs continues to grow each day, with projects like Vicuna, LLaMA, BLOOMB, Falcon, and Mistral available for download. In fact, there are over [one million open-source LLMs](https://huggingface.co/models) available as of writing this post. With some decent hardware, every global citizen can download these codebases and run them on their computer.
With regard to this specific story, we could assume it to be a selective leak by a competitor of Meta which created the LLaMA model, intended to harm its reputation among those with cybersecurity and national security credentials. There are [potentially](https://bigthink.com/business/the-trillion-dollar-ai-race-to-create-digital-god/) trillions of dollars on the line.
Or it could be the revelation of something more sinister happening in the military-sponsored labs of Chinese hackers who have already been caught attacking American [infrastructure](https://www.nbcnews.com/tech/security/chinese-hackers-cisa-cyber-5-years-us-infrastructure-attack-rcna137706), [data](https://www.cnn.com/2024/10/05/politics/chinese-hackers-us-telecoms/index.html), and yes, [your credit history](https://thespectator.com/topic/chinese-communist-party-credit-history-equifax/)?
**As consumer advocates who believe in the necessity of liberal democracies to safeguard our liberties against authoritarianism, we should absolutely remain skeptical when it comes to the communist regime in Beijing. We’ve written as much [many times](https://consumerchoicecenter.org/made-in-china-sold-in-china/).**
At the same time, however, we should not subrogate our own critical thinking and principles because it suits a convenient narrative.
Consumers of all stripes deserve technological freedom, and innovators should be free to provide that to us. And open-source software has provided the very foundations for all of this.
## **Open-source matters**
When we discuss open-source software and code, what we’re really talking about is the ability for people other than the creators to use it.
The various [licensing schemes](https://opensource.org/licenses) – ranging from GNU General Public License (GPL) to the MIT License and various public domain classifications – determine whether other people can use the code, edit it to their liking, and run it on their machine. Some licenses even allow you to monetize the modifications you’ve made.
While many different types of software will be fully licensed and made proprietary, restricting or even penalizing those who attempt to use it on their own, many developers have created software intended to be released to the public. This allows multiple contributors to add to the codebase and to make changes to improve it for public benefit.
Open-source software matters because anyone, anywhere can download and run the code on their own. They can also modify it, edit it, and tailor it to their specific need. The code is intended to be shared and built upon not because of some altruistic belief, but rather to make it accessible for everyone and create a broad base. This is how we create standards for technologies that provide the ground floor for further tinkering to deliver value to consumers.
Open-source libraries create the building blocks that decrease the hassle and cost of building a new web platform, smartphone, or even a computer language. They distribute common code that can be built upon, assuring interoperability and setting standards for all of our devices and technologies to talk to each other.
I am myself a proponent of open-source software. The server I run in my home has dozens of dockerized applications sourced directly from open-source contributors on GitHub and DockerHub. When there are versions or adaptations that I don’t like, I can pick and choose which I prefer. I can even make comments or add edits if I’ve found a better way for them to run.
Whether you know it or not, many of you run the Linux operating system as the base for your Macbook or any other computer and use all kinds of web tools that have active repositories forked or modified by open-source contributors online. This code is auditable by everyone and can be scrutinized or reviewed by whoever wants to (even AI bots).
This is the same software that runs your airlines, powers the farms that deliver your food, and supports the entire global monetary system. The code of the first decentralized cryptocurrency Bitcoin is also [open-source](https://github.com/bitcoin), which has allowed [thousands](https://bitcoinmagazine.com/business/bitcoin-is-money-for-enemies) of copycat protocols that have revolutionized how we view money.
You know what else is open-source and available for everyone to use, modify, and build upon?
PHP, Mozilla Firefox, LibreOffice, MySQL, Python, Git, Docker, and WordPress. All protocols and languages that power the web. Friend or foe alike, anyone can download these pieces of software and run them how they see fit.
Open-source code is speech, and it is knowledge.
We build upon it to make information and technology accessible. Attempts to curb open-source, therefore, amount to restricting speech and knowledge.
## **Open-source is for your friends, and enemies**
In the context of Artificial Intelligence, many different developers and companies have chosen to take their large language models and make them available via an open-source license.
At this very moment, you can click on over to [Hugging Face](https://huggingface.co/), download an AI model, and build a chatbot or scripting machine suited to your needs. All for free (as long as you have the power and bandwidth).
Thousands of companies in the AI sector are doing this at this very moment, discovering ways of building on top of open-source models to develop new apps, tools, and services to offer to companies and individuals. It’s how many different applications are coming to life and thousands more jobs are being created.
We know this can be useful to friends, but what about enemies?
As the AI wars heat up between liberal democracies like the US, the UK, and (sluggishly) the European Union, we know that authoritarian adversaries like the CCP and Russia are building their own applications.
The fear that China will use open-source US models to create some kind of military application is a clear and present danger for many political and national security researchers, as well as politicians.
A bipartisan group of US House lawmakers want to put [export controls](https://www.reuters.com/technology/us-lawmakers-unveil-bill-make-it-easier-restrict-exports-ai-models-2024-05-10/) on AI models, as well as block foreign access to US cloud servers that may be hosting AI software.
If this seems familiar, we should also remember that the US government once classified cryptography and encryption as “munitions” that could not be exported to other countries (see [The Crypto Wars](https://en.wikipedia.org/wiki/Export_of_cryptography_from_the_United_States)). Many of the arguments we hear today were invoked by some of the same people as back then.
Now, encryption protocols are the gold standard for many different banking and web services, messaging, and all kinds of electronic communication. We expect our friends to use it, and our foes as well. Because code is knowledge and speech, we know how to evaluate it and respond if we need to.
Regardless of who uses open-source AI, this is how we should view it today. These are merely tools that people will use for good or ill. It’s up to governments to determine how best to stop illiberal or nefarious uses that harm us, rather than try to outlaw or restrict building of free and open software in the first place.
## **Limiting open-source threatens our own advancement**
If we set out to restrict and limit our ability to create and share open-source code, no matter who uses it, that would be tantamount to imposing censorship. There must be another way.
If there is a “[Hundred Year Marathon](https://www.amazon.com/Hundred-Year-Marathon-Strategy-Replace-Superpower/dp/1250081343)” between the United States and liberal democracies on one side and autocracies like the Chinese Communist Party on the other, this is not something that will be won or lost based on software licenses. We need as much competition as possible.
The Chinese military has been building up its capabilities with [trillions of dollars’](https://www.economist.com/china/2024/11/04/in-some-areas-of-military-strength-china-has-surpassed-america) worth of investments that span far beyond AI chatbots and skip logic protocols.
The [theft](https://www.technologyreview.com/2023/06/20/1075088/chinese-amazon-seller-counterfeit-lawsuit/) of intellectual property at factories in Shenzhen, or in US courts by [third-party litigation funding](https://nationalinterest.org/blog/techland/litigation-finance-exposes-our-judicial-system-foreign-exploitation-210207) coming from China, is very real and will have serious economic consequences. It may even change the balance of power if our economies and countries turn to war footing.
But these are separate issues from the ability of free people to create and share open-source code, which we can all benefit from. In fact, if we want to preserve our way of life and keep adding to global productivity and growth, defending open-source is a necessity.
If liberal democracies want to compete with our global adversaries, it will not be done by reducing the freedoms of citizens in our own countries.
*Originally published on the website of the [Consumer Choice Center](https://consumerchoicecenter.org/open-source-is-for-everyone-even-your-adversaries/).*
-
> ### Third-party API collection:
---
Disclaimer:
The OpenAI API Keys recommended here are provided by third-party resellers, so we are not responsible for the validity or security of these API Keys; you assume the risk of purchasing and using them yourself.
| Provider | Notes | Proxy address | Link |
| --- | --- | --- | --- |
| AiHubMix | Uses OpenAI's enterprise endpoint; all models site-wide are priced at 86% of the official rate (including GPT-4) | https://aihubmix.com/v1 | [Website](https://aihubmix.com?aff=mPS7) |
| OpenAI-HK | OpenAI's official API billing charges per request based on the token length of the prompt and the response. Each model has its own pricing, per 1,000 tokens consumed; 1,000 tokens is roughly 750 English words (about 400 Chinese characters) | https://api.openai-hk.com/ | [Website](https://openai-hk.com/?i=45878) |
| CloseAI | CloseAI is the largest commercial-grade OpenAI proxy platform in China and the first professional OpenAI relay service there. Positioned for enterprise use, it provides a high-quality, stable relay proxy for the official OpenAI API and serves as a dedicated partner platform for over a hundred companies and several research institutions. | https://api.openai-proxy.org | [Website](https://www.closeai-asia.com/) |
| OpenAI-SB | Requires Telegram to obtain an API key | https://api.openai-sb.com | [Website](https://www.openai-sb.com/) |
` Continuously updated... `
---
### Promotion:
If you can't access OpenAI, you can buy a VPN from `低调云` (Didiao Cloud).
Official site: https://didiaocloud.xyz
Invite code: `w9AjVJit`
Prices start as low as 1 yuan.
-
# Stupid anglicisms in contemporary Portuguese
Words and expressions that nobody should use, because they do not mean what people think they mean: they are mere Portuguese renderings of English words that, through quirks of history, carry a slightly different meaning in English.
Each error comes with a suggested correction.
### Words that exist in Portuguese with a different meaning
- _submissão_ (of papers): **envio**, **apresentação**
- _disrupção_: **perturbação**
- _assumir_: **considerar**, **pressupor**, **presumir**
- _realizar_: **perceber**
- _endereçar_: **tratar de**
- _suporte_ (customer support): **atendimento**
- _suportar_ (an idea, a project): **apoiar**, **financiar**
- _suportar_ (a function, feature, characteristic): **oferecer**, **ser compatível com**
- _literacia_: **instrução**, **alfabetização**
- _convoluto_: **complicado**
- _acurácia_: **precisão**
- _resiliência_: **resistência**
### Unnecessary anglicized coinages
- _estartar_: **iniciar**, **começar**
- _treidar_: **negociar**, **especular**
### Expressions
- _"não é sobre..."_: **"não se trata de..."**
---
![to hell with halloween, long live national culture!](/static/halloween.png)
## See also
- [Some excellent expressions and sayings of the Portuguese language, and others not so excellent](https://fiatjaf.alhur.es/expressões-e-ditados.txt)
-
> I believe that five years from now, access to artificial intelligence will be akin to what access to the Internet represents today. It will be the greatest differentiator between the haves and have nots. Unequal access to artificial intelligence will exacerbate societal inequalities and limit opportunities for those without access to it.
Back in April, the AI Index Steering Committee at the Institute for Human-Centered AI from Stanford University released [The AI Index 2024 Annual Report](https://aiindex.stanford.edu/report/).
Out of the extensive 502-page report, I chose to focus on the chapter dedicated to Public Opinion. People involved with AI live in a bubble: we all know and understand AI and therefore assume that everyone else does. But is that really the case once you step out of your regular circles in Seattle or Silicon Valley and hit Main Street?
# Two-thirds of global respondents say they have a good understanding of what AI is
The exact number is 67%. My gut feeling is that this number is way too high to be realistic. At the same time, 63% of respondents are aware of ChatGPT, so maybe people are conflating AI with ChatGPT?
If so, there is so much more that they won't see coming.
This number matters because you need to read every other question and response in the survey through the lens of a respondent who believes they have a good understanding of what AI is.
# A majority are nervous about AI products and services
52% of global respondents are nervous about products and services that use AI. Leading the pack are Australians at 69%, while the least worried are the Japanese at 23%. The U.S. is near the top at 63%.
Japan is truly an outlier, with most countries moving between 40% and 60%.
# Personal data is the clear victim
Exactly half of the respondents believe that AI companies will protect their personal data. The other half believe they won't.
# Expected benefits
Again, a majority of people (57%) think AI will change how they do their jobs. As for the impact on their lives, the top answers are getting things done faster (54%) and more entertainment options (51%).
The last one is a head-scratcher for me. Are people looking forward to AI-generated movies?
![image](https://i.nostr.build/GUh5M4GXumaJVGZA.jpg)
# Concerns
Remember the 57% who thought AI would change how they do their jobs? Well, it looks like 37% of them expect to lose theirs. Whether or not that is what actually happens, it is a very high number of people with a direct incentive to oppose AI.
Other key concerns include:
- Misuse for nefarious purposes: 49%
- Violation of citizens' privacy: 45%
# Conclusion
This is the first time I have come across this report, and I will make sure to follow future annual editions to see how these trends evolve.
**Overall, people are worried about AI. There are many things that could go wrong and people perceive that both jobs and privacy are on the line.**
---
Full citation: *Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark, “The AI Index 2024 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024.*
The AI Index 2024 Annual Report by Stanford University is licensed under [Attribution-NoDerivatives 4.0 International](https://creativecommons.org/licenses/by-nd/4.0/?ref=chooser-v1).
-
[![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148821549-YAKIHONNES3.jpeg)](https://stock.adobe.com/stock-photo/id/1010191703)
**Hello everyone on Nostr**, and all my **watchers** and **followers** from DeviantArt and other art platforms.
Since the beginning of 2024, I have been creating AI-generated anime girl fanart and offering exclusive content for those who especially enjoy my work.
I post all of my work on DeviantArt and have been slowly, steadily building a follower base there. Everything keeps growing at its own pace; personally, I regard it as one of my online art business portfolios.
**On September 16, 2024**, a follower sent me a DM saying they loved my work and wanted to buy it, but as an NFT, offering a very high price per piece. After that, the buyer and I continued the conversation over email.
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148088676-YAKIHONNES3.PNG)
### Here is a short summary of the negotiation
(From here on I will call the buyer a scammer, because the cards are already on the table: they are a fraudster.)
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148348755-YAKIHONNES3.jpg)
- The first scammer picked the works they wanted to buy and offered a very high price, but insisted on the specific NFT marketplace website they had chosen, running on ERC20. I checked the site and something felt off. Sellers must register an account with an email address before they can link a wallet such as MetaMask, and once linked, the wallet cannot be changed. At the time I was using a wallet not linked to my hardware wallet; I tried switching wallets back and forth and it was impossible, and even after logging out the same wallet address was still attached. That was red flag number one. The site's minting fee was **0.15 - 0.2 ETH** ... converted into baht, that is absurdly expensive.
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148387032-YAKIHONNES3.jpg)
- The first scammer tried to coax and sweet-talk me: once the mint was done I should tell them right away so they could buy immediately; once it sold at a profit I would get the gas fee back and make a profit on top, so there was nothing to lose, right? Luckily, I had no spare funds to buy ETH at the time, so I countered with these offers:
1. I proposed sending a low-resolution version of my work first, in exchange for them transferring the ETH for the minting fee; once I had the ETH, I would upscale the work and email it to them, a straight exchange of trust ... they refused.
2. I proposed they buy through my Buy Me a Coffee online store and pay in USD ... they refused.
3. I proposed selling through a PPV Lightning invoice, which I can issue as a creator on Creatr ... they refused.
4. I told them to wait until my salary came in, and they said OK.
The following week, a second scammer contacted me using a similar approach but a different website, offering an even higher price than the first. This second site was even worse: you must register with an email and cannot link MetaMask at all. After signing up you get an empty wallet, and you must transfer ETH into it up front to cover the **0.2 ETH** minting fee.
I told the second scammer they would have to wait, because I was still negotiating with the first buyer and waiting for money to buy ETH as working capital. They asked me to show them the first website, and soon afterwards warned me that it was a scam where funds cannot be withdrawn. They even sent screenshots of chats with victims of the first site who could not withdraw their money. On top of that, they took a swipe at OpenSea too, claiming sellers there could sell their work but not withdraw the proceeds.
**"You can't withdraw funds from OpenSea": that was what really set off my alarm bells**, because on OpenSea users connect their wallets directly to the marketplace. Trades happen wallet to wallet, money flows straight in and out of each party's wallet, and OpenSea only collects a platform fee; it never holds customers' funds. On top of that, gas fees this year are much cheaper than during the 2020 bull run cycle, currently around 0.0001 ETH (still more expensive than BTC, though).
I took the matter to the Phi Bit community for advice, and an admin talked with me instead. The admin said no one had ever asked about this before; my question was the first of its kind. Still, their opinion matched my hypothesis: it was probably a scam. At the same time I asked the Thai NFT community page, where it was clearly confirmed as a scam, and quite a few people there had been duped. Once I knew what I was dealing with, I started a little war of nerves with both scammers to see whether they really were fraudsters.
So on September 30 I wound both scammers up by minting the very works they had offered to buy, on OpenSea.
Then I messaged them:
Minted it for you, but I honestly didn't have enough money, sorry, so I minted on OpenSea instead. I'm broke, OpenSea was as far as I could get. Hurry up and buy it; plenty of people are eyeing my work. I'm not even charging a royalty fee, so resell it without sharing any profit with me.
And with that, the psychological warfare began, and they were cornered, forced to swallow their own words.
The best part:
Them: Hey, I waited for you! I told my team we'd have the piece on Monday, September 30. My teammates saw your work and it really is beautiful, so they loaded a full 9.3 ETH (+ a screenshot showing the balance) just waiting for it.
Me: Oh really ... then show me the wallet address with that transaction.
Them: 2 ETH is 5,000 dollars, you know.
Me: So what? Show me the wallet address holding the 9.3 ETH. You said the money was ready, so show me when you funded it ... just the address, mind you; don't get cheeky and send me your seed.
Them: (sends the same 9.3 ETH screenshot again)
Me: A screenshot means nothing; it's trivially easy to fake. Show me the transaction hash. Didn't you say you had 9.3 ETH ready and were dying to buy my work? Send the wallet address, or lend me 0.15 ETH to mint with first, then buy at 2 ETH and I'll pay the 0.15 ETH back. Are you buying or not?
Them: Why do you want the address?
Me: I'm done, this is tiresome. Not selling to you anymore.
Them: 2 ETH = 5,000 USD, you know.
Me: So what?
I wrote this article to warn all my friends, in case any of you are opening an online digital art business portfolio and get as "lucky" as I did.
-----------
### Why I am confident this was a scam, and what the scammers gain
[![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148837871-YAKIHONNES3.jpeg)](https://stock.adobe.com/stock-photo/id/1010196295)
First, consider OpenSea: it is the NFT marketplace with the highest trading volume, and it does not hold buyers' or sellers' funds. Money moves directly between buyer and seller wallets, and the site only collects a fee, which is also much cheaper than it was in 2020. So why would anyone list their work on another NFT site whose fees are a hundred times higher ... what would be the point?
I believe the scammers steal from artists by playing on the owners' greed and inexperience. The moment an artist transfers ETH into that site's wallet, or pays the minting fee, the money lands straight in the scammer's pocket. And the tricks will certainly continue: withdrawals fail, purchases fail, more money must be sent to "unlock the smart contract", and so on. These crooks play on human greed, dangling absurdly high offers as bait ... and fair enough: in the NFT world, some images with no artistic merit whatsoever have sold for 100 - 150 ETH, so an artist trying to build a name might well think that a buyer at 2 - 4 ETH per piece is plenty (honestly, it is alarmingly more than enough).
In the world of BTC, you don't have to trust anyone: you can send money to each other and settle the books without trust.
In the world of ETH, **"code is law"**: the smart contract is already written, so go read it; it is not that hard to understand. Trusting a person's promises instead makes no sense.
I told this story to my art communities and got a mix of good and bad reactions. Some people insisted firmly that this kind of thing could never touch them, because they are determined to keep their art entirely out of the digital currency world. I respect that view, but wouldn't it be better to keep our eyes and ears open to the technology, especially digital currency and blockchain? One scam here can wipe you out far more easily than fiat money can.
I wanted to share this story, and please pass it on to people you know, so everyone stays on guard.
## Note
- Both cyber security illustrations are my own work, made by me and sold on Adobe Stock.
- My other account, "HikariHarmony" npub1exdtszhpw3ep643p9z8pahkw8zw00xa9pesf0u4txyyfqvthwapqwh48sw, is where I am gradually bringing my work from the outside world into Nostr. I intend to create art here, so friends who enjoy my work won't have to look anywhere else.
My works:
- Anime girl fanarts : [HikariHarmony](https://linktr.ee/hikariharmonypatreon)
- [HikariHarmony on Nostr](https://shorturl.at/I8Nu4)
- General art : [KeshikiRakuen](https://linktr.ee/keshikirakuen)
- KeshikiRakuen may become my third Nostr account, if I can manage it.
-
[![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148821549-YAKIHONNES3.jpeg)](https://stock.adobe.com/stock-photo/id/1010191703)
**Hello everyone on Nostr** and all my **watchers** and **followers** from DeviantArt, as well as those from other art platforms.
I have been creating and sharing AI-generated anime girl fanart since the beginning of 2024 and have been running member-exclusive content on Patreon.
I also publish showcases of my artwork on DeviantArt, where I have been organically building up my audience over time. I consider it one of my online art businesses, and everything is growing slowly but steadily.
**On September 16**, I received a DM from someone expressing interest in purchasing my art in NFT format and offering a very high price for each piece. We later continued the conversation via email.
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148088676-YAKIHONNES3.PNG)
### Here’s a brief overview of what happened
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148348755-YAKIHONNES3.jpg)
- The first scammer selected the art they wanted to buy and offered a high price for each piece.
They provided a URL to an NFT marketplace site running on Ethereum (ERC20). The site appeared suspicious: it required email sign-up before a MetaMask wallet could be linked, and once linked, the wallet address could not be changed.
The minting gas fees were quite expensive, ranging from **0.15 to 0.2 ETH**.
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148387032-YAKIHONNES3.jpg)
- The scammers tried to convince me that the high profits would easily cover the minting gas fees, so I had nothing to lose.
Luckily, I didn’t have spare funds to purchase ETH for the gas fees at the time, so I tried negotiating with them as follows:
1. I offered to send them a lower-quality version of my art via email in exchange for the minting gas fees, but they refused.
2. I offered them the option to pay in USD through Buy Me a Coffee shop here, but they refused.
3. I offered them the option to pay via a Bitcoin Lightning Network invoice, but they refused.
4. I asked them to wait until I could secure the funds, and they agreed to wait.
The following week, a second scammer approached me with a similar offer, this time at an even higher price and through a different NFT marketplace website.
This second site also required email registration, and after navigating to the dashboard, it asked for a minting fee of **0.2 ETH**. However, the site provided a wallet address for me instead of connecting a MetaMask wallet.
I told the second scammer that I was waiting to make a profit from the first sale, and they asked me to show them the first marketplace. They then warned me that the first site was a scam and even sent screenshots of chats with victims, including one claiming that OpenSea itself wasn't paying out.
**This raised a red flag**, and I began suspecting I might be getting scammed. On OpenSea, funds go directly to users' wallets after transactions, and OpenSea charges a much lower platform fee compared to the previous crypto bull run in 2020. Minting fees on OpenSea are also significantly cheaper, around 0.0001 ETH per transaction.
I also consulted with Thai NFT artist communities and the ex-chairman of the Thai Digital Asset Association. According to them, no one had reported similar issues, but they agreed it seemed like a scam.
After confirming my suspicions with my own research and consulting with the Thai crypto community, I decided to test the scammers’ intentions by doing the following
I minted the artwork they were interested in, set the price they offered, and listed it for sale on OpenSea. I then messaged them, letting them know the art was available and ready to purchase, with no royalty fees if they wanted to resell it.
They became upset and angry, insisting I mint the art on their chosen platform, claiming they had already funded their wallet to support me. When I asked for proof of their wallet address and transactions, they couldn't provide any evidence that they had enough funds.
Here’s what I want to warn all artists about, in the DeviantArt community and on other platforms:
If you find yourself in a similar situation, be aware that scammers may be targeting you.
-----------
### My Perspective: Why I Believe This Is a Scam and What the Scammers Gain
[![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148837871-YAKIHONNES3.jpeg)](https://stock.adobe.com/stock-photo/id/1010196295)
From my experience with BTC and crypto since 2017, here's why I believe this situation is a scam and what the scammers aim to achieve.
First, looking at OpenSea, the largest NFT marketplace on the ERC20 network, they do not hold users' funds. Instead, funds from transactions go directly to users’ wallets. OpenSea’s platform fees are also much lower now compared to the crypto bull run in 2020. This alone raises suspicion about the legitimacy of other marketplaces requiring significantly higher fees.
I believe the scammers' tactic is to lure artists into paying these exorbitant minting fees, which go directly into the scammers' wallets. They convince the artists by promising to purchase the art at a higher price, making it seem like there's no risk involved. In reality, the artist has already lost by paying the minting fee, and no purchase is ever made.
In the world of Bitcoin (BTC), the principles are "trust no one" and "trustless finality of transactions". In other words, transactions are secure and final without needing to trust a third party.
In the world of Ethereum (ETH), the philosophy is "Code is law" where everything is governed by smart contracts deployed on the blockchain. These contracts are transparent, and even basic code can be read and understood. Promises made by people don’t override what the code says.
I also discussed this issue with art communities. Some people have strongly expressed to me that they want nothing to do with crypto as part of their art process. I completely respect that stance.
However, I believe it's wise to keep your eyes open, have some skin in the game, and not fall into scammers’ traps. Understanding the basics of crypto and NFTs can help protect you from these kinds of schemes.
If you found this article helpful, please share it with your fellow artists.
Until next time
Take care
## Note
- Both cyber security images are mine; I created them, and they were approved by Adobe Stock for sale.
- I'm working very hard to bring all my digital art onto Nostr and build my sats business here under my other npub, "HikariHarmony": npub1exdtszhpw3ep643p9z8pahkw8zw00xa9pesf0u4txyyfqvthwapqwh48sw
Link to my full gallery
- Anime girl fanarts : [HikariHarmony](https://linktr.ee/hikariharmonypatreon)
- [HikariHarmony on Nostr](https://shorturl.at/I8Nu4)
- General art : [KeshikiRakuen](https://linktr.ee/keshikirakuen)
-
Hey folks, today we're diving into an exciting and emerging topic: personal artificial intelligence (PAI) and its connection to sovereignty, privacy, and ethics. With the rapid advancements in AI, there's a growing interest in the development of personal AI agents that can work on behalf of the user, acting autonomously and providing tailored services. However, as with any new technology, there are several critical factors that shape the future of PAI. Today, we'll explore three key pillars: privacy and ownership, explainability, and bias.
<iframe width="560" height="315" src="https://www.youtube.com/embed/fehgwnSUcqQ?si=nPK7UOFr19BT5ifm" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
### 1. Privacy and Ownership: Foundations of Personal AI
At the heart of personal AI, much like self-sovereign identity (SSI), is the concept of ownership. For personal AI to be truly effective and valuable, users must own not only their data but also the computational power that drives these systems. This autonomy is essential for creating systems that respect the user's privacy and operate independently of large corporations.
In this context, privacy is more than just a feature—it's a fundamental right. Users should feel safe discussing sensitive topics with their AI, knowing that their data won’t be repurposed or misused by big tech companies. This level of control and data ownership ensures that users remain the sole beneficiaries of their information and computational resources, making privacy one of the core pillars of PAI.
### 2. Bias and Fairness: The Ethical Dilemma of LLMs
Most of today’s AI systems, including personal AI, rely heavily on large language models (LLMs). These models are trained on vast datasets that represent snapshots of the internet, but this introduces a critical ethical challenge: bias. The datasets used for training LLMs can be full of biases, misinformation, and viewpoints that may not align with a user’s personal values.
This leads to one of the major issues in AI ethics for personal AI—how do we ensure fairness and minimize bias in these systems? The training data that LLMs use can introduce perspectives that are not only unrepresentative but potentially harmful or unfair. As users of personal AI, we need systems that are free from such biases and can be tailored to our individual needs and ethical frameworks.
Unfortunately, training models that are truly unbiased and fair requires vast computational resources and significant investment. While large tech companies have the financial means to develop and train these models, individual users or smaller organizations typically do not. This limitation means that users often have to rely on pre-trained models, which may not fully align with their personal ethics or preferences. While fine-tuning models with personalized datasets can help, it's not a perfect solution, and bias remains a significant challenge.
### 3. Explainability: The Need for Transparency
One of the most frustrating aspects of modern AI is the lack of explainability. Many LLMs operate as "black boxes," meaning that while they provide answers or make decisions, it's often unclear how they arrived at those conclusions. For personal AI to be effective and trustworthy, it must be transparent. Users need to understand how the AI processes information, what data it relies on, and the reasoning behind its conclusions.
Explainability becomes even more critical when AI is used for complex decision-making, especially in areas that impact other people. If an AI is making recommendations, judgments, or decisions, it’s crucial for users to be able to trace the reasoning process behind those actions. Without this transparency, users may end up relying on AI systems that provide flawed or biased outcomes, potentially causing harm.
This lack of transparency is a major hurdle for personal AI development. Current LLMs, as mentioned earlier, are often opaque, making it difficult for users to trust their outputs fully. The explainability of AI systems will need to be improved significantly to ensure that personal AI can be trusted for important tasks.
### Addressing the Ethical Landscape of Personal AI
As personal AI systems evolve, they will increasingly shape the ethical landscape of AI. We’ve already touched on the three core pillars—privacy and ownership, bias and fairness, and explainability. But there's more to consider, especially when looking at the broader implications of personal AI development.
Most current AI models, particularly those from big tech companies like Facebook, Google, or OpenAI, are closed systems. This means they are aligned with the goals and ethical frameworks of those companies, which may not always serve the best interests of individual users. Open models, such as Meta's LLaMA, offer more flexibility and control, allowing users to customize and refine the AI to better meet their personal needs. However, the challenge remains in training these models without significant financial and technical resources.
There’s also the temptation to use uncensored models that aren’t aligned with the values of large corporations, as they provide more freedom and flexibility. But in reality, models that are entirely unfiltered may introduce harmful or unethical content. It’s often better to work with aligned models that have had some of the more problematic biases removed, even if this limits some aspects of the system’s freedom.
The future of personal AI will undoubtedly involve a deeper exploration of these ethical questions. As AI becomes more integrated into our daily lives, the need for privacy, fairness, and transparency will only grow. And while we may not yet be able to train personal AI models from scratch, we can continue to shape and refine these systems through curated datasets and ongoing development.
### Conclusion
In conclusion, personal AI represents an exciting new frontier, but one that must be navigated with care. Privacy, ownership, bias, and explainability are all essential pillars that will define the future of these systems. As we continue to develop personal AI, we must remain vigilant about the ethical challenges they pose, ensuring that they serve the best interests of users while remaining transparent, fair, and aligned with individual values.
If you have any thoughts or questions on this topic, feel free to reach out—I’d love to continue the conversation!
-
In the modern world of AI, managing vast amounts of data while keeping it relevant and accessible is a significant challenge, especially when dealing with large language models (LLMs) and vector databases. One approach that has gained prominence in recent years is integrating vector search with metadata, particularly in retrieval-augmented generation (RAG) pipelines. Vector search and metadata enable faster and more accurate data retrieval. However, the process of pre- and post-search filtering results plays a crucial role in ensuring data relevance.
<iframe width="560" height="315" src="https://www.youtube.com/embed/BkNqu51et9U?si=lne0jWxdrZPxSgd1" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
## The Vector Search and Metadata Challenge
In a typical vector search, you create embeddings from chunks of text, such as a PDF document. These embeddings allow the system to search for similar items and retrieve them based on relevance. The challenge, however, arises when you need to combine vector search results with structured metadata. For example, you may have timestamped text-based content and want to retrieve the most relevant content within a specific date range. This is where metadata becomes critical in refining search results.
Unfortunately, most vector databases treat metadata as a secondary feature, isolating it from the primary vector search process. As a result, handling queries that combine vectors and metadata can become a challenge, particularly when the search needs to account for a dynamic range of filters, such as dates or other structured data.
## LibSQL and vector search metadata
LibSQL is a general-purpose, SQLite-compatible database that adds vector capabilities to regular data. Vectors are stored as blob columns of regular tables, which makes vector embeddings and metadata first-class citizens and allows deep integration between these data points.
```
create table if not exists conversation (
id varchar(36) primary key not null,
startDate real,
endDate real,
summary text,
vectorSummary F32_BLOB(512) -- a 512-dimensional float32 embedding of the summary
);
```
It solves the challenge of combining metadata with vector search and eliminates the impedance mismatch between vector data and regular structured data points in the same storage.
As you can see, you can access the vector data and the start date in the same query.
```
select c.id ,c.startDate, c.endDate, c.summary, vector_distance_cos(c.vectorSummary, vector(${vector})) distance
from conversation
where 1=1
-- "1=1" keeps the query valid when the optional filters below are absent
${startDate ? `and c.startDate >= ${startDate.getTime()}` : ''}
${endDate ? `and c.endDate <= ${endDate.getTime()}` : ''}
-- the "distance" alias cannot be referenced in WHERE, so the expression is repeated
${distance ? `and vector_distance_cos(c.vectorSummary, vector(${vector})) <= ${distance}` : ''}
order by distance
limit ${top};
```
**vector\_distance\_cos**, aliased as `distance`, gives us a primitive vector search: a full scan that calculates the distance for every row. We could optimize it with a CTE and limit the search and distance calculations to a much smaller subset of data. Still, this approach can be computationally intensive and break down on large amounts of data.
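As a side note, here is how such a query might be executed from application code, with the vector bound as a parameter instead of interpolated into the SQL string. This is a minimal sketch assuming the `@libsql/client` package; the file name and embedding values are illustrative:
```typescript
import { createClient } from "@libsql/client";

const client = createClient({ url: "file:conversations.db" });

// Embedding of the query text, produced elsewhere (e.g. by an embedding model);
// libsql's vector() function accepts the textual representation of the array.
const vector = JSON.stringify([0.1, 0.2, 0.3 /* ... 512 values in total */]);

const result = await client.execute({
  // Bind the vector as a parameter rather than splicing it into the SQL string.
  sql: `select c.id, c.startDate, c.endDate, c.summary,
               vector_distance_cos(c.vectorSummary, vector(?)) as distance
        from conversation c
        order by distance
        limit 5`,
  args: [vector],
});

for (const row of result.rows) {
  console.log(row.id, row.distance);
}
```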
Libsql offers a far more effective vector search based on its FlashDiskANN vector index.
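Before `vector_top_k` can be used, that index has to exist. Here is a minimal sketch of creating it, assuming libsql's `libsql_vector_idx` indexing function (check your libsql version for the exact syntax):
```
-- index over the embedding column; its name is what vector_top_k refers to below
create index if not exists idx_conversation_vectorSummary
on conversation (libsql_vector_idx(vectorSummary));
```
With the index in place, the search goes through a dedicated table function: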
```
vector_top_k('idx_conversation_vectorSummary', ${vector} , ${top}) i
```
**vector\_top\_k** is a table function that searches the newly created vector index for the top matches. Note that only the vector can be passed as a function parameter; other columns have to be handled outside the table function. So, to use a vector index together with other columns, we need to apply some strategies.
Now we face the classic problem of integrating vector search results with metadata queries.
## Post-Filtering: A Common Approach
The most widely adopted method in these pipelines is **post-filtering**. In this approach, the system first retrieves data based on vector similarities and then applies metadata filters. For example, imagine you’re conducting a vector search to retrieve conversations relevant to a specific question. Still, you also want to ensure these conversations occurred in the past week.
![](https://miro.medium.com/v2/resize:fit:1400/1*WFR7oJIwYflxiUivSrm-5g.png)
Post-filtering allows the system to retrieve the most relevant vector-based results and subsequently filter out any that don’t meet the metadata criteria, such as date range. This method is efficient when vector similarity is the primary factor driving the search, and metadata is only applied as a secondary filter.
```
const sqlQuery = `
select c.id ,c.startDate, c.endDate, c.summary, vector_distance_cos(c.vectorSummary, vector(${vector})) distance
from vector_top_k('idx_conversation_vectorSummary', ${vector} , ${top}) i
inner join conversation c on i.id = c.rowid
where 1=1
${startDate ? `and c.startDate >= ${startDate.getTime()}` : ''}
${endDate ? `and c.endDate <= ${endDate.getTime()}` : ''}
${distance ? `and vector_distance_cos(c.vectorSummary, vector(${vector})) <= ${distance}` : ''}
order by distance
limit ${top};
```
However, there are some limitations. The initial vector search may return fewer rows than expected or omit relevant data before the metadata filter is applied. If the search window is narrow enough, this can lead to incomplete results.
One working strategy is to make the `top` value passed to vector\_top\_k much bigger. Be careful, though: the function's default maximum number of results is around 200 rows.
## Pre-Filtering: A More Complex Approach
Pre-filtering is a more intricate approach but can be more effective in some instances. In pre-filtering, metadata is used as the primary filter before vector search takes place. This means that only data that meets the metadata criteria is passed into the vector search process, limiting the scope of the search right from the beginning.
While this approach can significantly reduce the amount of irrelevant data in the final results, it comes with its own challenges. For example, pre-filtering requires a deeper understanding of the data structure and may necessitate denormalizing the data or creating separate pre-filtered tables. This can be resource-intensive and, in some cases, impractical for dynamic metadata like date ranges.
![](https://miro.medium.com/v2/resize:fit:1400/1*TioR1ETZ2S_FxvW6rZx9nw.png)
In certain use cases, pre-filtering might outperform post-filtering. For instance, when the metadata (e.g., specific date ranges) is the most important filter, pre-filtering ensures the search is conducted only on the most relevant data.
## Pre-filtering with distance-based filtering
So we are getting back to an old concept: we pre-filter the data instead of using a vector index.
```
WITH FilteredDates AS (
SELECT
c.id,
c.startDate,
c.endDate,
c.summary,
c.vectorSummary
FROM
conversation c
WHERE
1=1
${startDate ? `AND c.startDate >= ${startDate.getTime()}` : ''}
${endDate ? `AND c.endDate <= ${endDate.getTime()}` : ''}
),
DistanceCalculation AS (
SELECT
fd.id,
fd.startDate,
fd.endDate,
fd.summary,
fd.vectorSummary,
vector_distance_cos(fd.vectorSummary, vector(${vector})) AS distance
FROM
FilteredDates fd
)
SELECT
dc.id,
dc.startDate,
dc.endDate,
dc.summary,
dc.distance
FROM
DistanceCalculation dc
WHERE
1=1
${distance ? `AND dc.distance <= ${distance}` : ''}
ORDER BY
dc.distance
LIMIT ${top};
```
This makes sense when the filter produces a small data set, so the distance calculation runs over far fewer rows.
![](https://miro.medium.com/v2/resize:fit:1400/1*NewbEonzeEgS7qPOZRgXwg.png)
On the plus side of this approach, you keep full control over the data and get complete results, without the omissions that broad index searches can produce.
## Choosing Between Pre and Post-Filtering
Both pre-filtering and post-filtering have their advantages and disadvantages. Post-filtering is easier to implement, especially when vector similarity is the primary search factor, but it can lead to incomplete results. Pre-filtering, on the other hand, can yield more accurate results but requires more complex data handling and optimization.
In practice, many systems combine both strategies, depending on the query. For example, they might start with a broad pre-filtering based on metadata (like date ranges) and then apply a more targeted vector search with post-filtering to refine the results further.
## **Conclusion**
Vector search with metadata filtering offers a powerful approach for handling large-scale data retrieval in LLMs and RAG pipelines. Whether you choose pre-filtering or post-filtering—or a combination of both—depends on your application's specific requirements. As vector databases continue to evolve, future innovations that combine these two approaches more seamlessly will help improve data relevance and retrieval efficiency further.
-
# How to do curation and businesses on Nostr
Suppose you want to start a Nostr business.
You might be tempted to make a closed platform that reuses Nostr identities and grabs (some) content from the external Nostr network, only to imprison it inside your thing -- and then you're going to run an amazing AI-powered algorithm on that content and "surface" only the best stuff and people will flock to your app.
This will be especially good if you're going after one of the many unexplored niches of Nostr in which reading immediately from people you know doesn't work, as you generally want to discover new things from the outer world, such as:
- food recipe sharing;
- sharing of long articles about varying topics;
- markets for used goods;
- freelancer work and job offers;
- specific in-game lobbies and matchmaking;
- directories of accredited professionals;
- sharing of original music, drawings and other artistic creations;
- restaurant recommendations;
- and so on.
But that is not the correct approach and damages the freedom and interoperability of Nostr, posing a centralization threat to the protocol. Even if it "works" and your business is incredibly successful it will just enshrine you as the head of a _platform_ that controls users and thus is prone to all the bad things that happen to all these platforms. Your company will start to display ads and shape the public discourse, you'll need a big legal team, the FBI will talk to you, advertisers will play a big role and so on.
If you are interested in Nostr today that must be because you appreciate the fact that it is not owned by any companies, so it's safe to assume you don't want to be that company that owns it. **So what should you do instead?** Here's an idea in two steps:
1. **Write a Nostr client tailored to the niche you want to cover**
If it's a music sharing thing, then the client will have a way to play the audio and so on; if it's a restaurant sharing it will have maps with the locations of the restaurants or whatever, you get the idea. Hopefully there will be a NIP or a NUD specifying how to create and interact with events relating to this niche, or you will write or contribute with the creation of one, because without interoperability this can't be Nostr.
The client should work independently of any special backend requirements and ideally be open-source. It should have a way for users to configure to which relays they want to connect to see "global" content -- i.e., they might want to connect to `wss://nostr.chrysalisrecords.com/` to see only the latest music releases accredited by that label or to `wss://nostr.indiemusic.com/` to get music from independent producers from that community.
2. **Run a relay that does all the magic**
This is where your value-adding capabilities come into play: if you have that magic sauce you should be able to apply it here. Your service -- let's call it `wss://magicsaucemusic.com/` -- will charge people or do some KYM (know your music) validation or use some very advanced AI sorcery to filter out the spam and the garbage and display the best content to your users who will request the global feed from it (`["REQ", "_", {}]`), and this will cause people to want to publish to your relay while others will want to read from it.
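To make this concrete, here is roughly what requesting that global feed looks like at the protocol level. A minimal sketch over a raw WebSocket, using the hypothetical relay URL from above:
```typescript
// Ask the curation relay for its global feed: an empty filter means
// "whatever you, the relay, decided is worth showing".
const ws = new WebSocket("wss://magicsaucemusic.com/");

ws.onopen = () => {
  // ["REQ", <subscription id>, <filter>]
  ws.send(JSON.stringify(["REQ", "_", {}]));
};

ws.onmessage = (msg) => {
  const [type, _sub, event] = JSON.parse(msg.data.toString());
  if (type === "EVENT") console.log(event.pubkey, event.content);
};
```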
You set your relay as the default option in the client and let things happen. Your relay is like your "website" and people are free to connect to it or not. You don't own the network, you're just competing against other websites on a leveled playing field, so you're not responsible for it. Users get seamless browsing across multiple websites, unified identities, a unified interface (that could be different in a different client) and social interaction capabilities that work in the same way for all, and **they do not depend on you, therefore they're more likely to trust you**.
---
Does this still centralize the network? Perhaps a little, but it is a simple and easy way to go about the matter, and it scales well in all aspects.
Besides allowing users to connect to specific relays for getting a feed of curated content, such clients should also do all kinds of "social" (i.e. following, commenting etc) activities (if they choose to do that) using the outbox model -- i.e. if I find a musician I like under `wss://magicsaucemusic.com` and I decide to follow them I should keep getting updates from them even if they get banned from that relay and start publishing on `wss://nos.lol` or `wss://relay.damus.io` or whatever relay that doesn't even know anything about music.
The hardcoded defaults and manual typing of relay URLs can be annoying. But I think it works well at the current stage of Nostr development. Soon, though, we can create events that recommend other relays or share relay lists specific to each kind of activity so users can get in-app suggestions of relays their friends are using to get their music from and so on. That kind of stuff can go a long way.
-
# Nostr: a quick introduction, attempt #2
Nostr doesn't subscribe to any ideals of "free speech", as these belong to the realm of politics and assume a big powerful government that enforces a common rule upon everybody else.
Nostr instead is much simpler, it simply says that servers are private property and establishes a generalized framework for people to connect to all these servers, creating a true free market in the process. In other words, Nostr is the public road that each market participant can use to build their own store or visit others and use their services.
(Of course a road is never truly public; in normal cases it's run by the government, and in this case it relies upon the previous existence of the internet with all its quirks and chaos plus a hand of government control. But none of that matters for this explanation.)
More concretely speaking, Nostr is just a set of definitions of the formats of the data that can be passed between participants and their expected order, i.e. messages between _clients_ (i.e. the program that runs on a user computer) and _relays_ (i.e. the program that runs on a publicly accessible computer, a "server", generally with a domain-name associated) over a type of TCP connection (WebSocket) with cryptographic signatures. This is what is called a "protocol" in this context, and upon that simple base multiple kinds of sub-protocols can be added, like a protocol for "public-square style microblogging", "semi-closed group chat" or, I don't know, "recipe sharing and feedback".
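To give a concrete flavor, here is a minimal sketch of one such message: a client publishing a signed event to a relay over that WebSocket connection. The field values are illustrative placeholders, not a real signed event:
```typescript
// The NIP-01 framing is ["EVENT", <signed event>] from client to relay.
const signedEvent = {
  id: "<sha256 hash of the serialized event>",
  pubkey: "<author public key, hex>",
  created_at: Math.floor(Date.now() / 1000),
  kind: 1, // e.g. 1 = a note in the "public-square style microblogging" sub-protocol
  tags: [],
  content: "hello from the public road",
  sig: "<schnorr signature over the event id>",
};

const ws = new WebSocket("wss://relay.example.com");
ws.onopen = () => ws.send(JSON.stringify(["EVENT", signedEvent]));
```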
-
https://x.com/parkeralewis/status/1831746160781938947
Here's a startling chart from an American staple, Walgreens. The convenience store and pharmacy chain has seen its stock price plummet by more than 67% this year, and by more than 90% from its all-time high, reached in 2015.
The combination of the COVID lockdowns and the lax laws around theft that followed was materially detrimental to Walgreens' business. The crux of their problem at the moment, however, is a double whammy of those disruptions coupled with the "higher for longer" interest rate policy from the Fed over the last couple of years. As Tuur points out in the tweet above, Walgreens has $34B in debt, which means they have significant interest payments to make on a monthly basis. Tuur also points out that Walgreens has very little cash compared to their debt obligations. Let's take a look at their balance sheet as of May of this year.
![image](https://www.tftc.io/content/images/2024/09/Screenshot-2024-09-05-at-3.59.46-PM.png)
Less than $1B in cash against $34B in debt and $67.56B in total liabilities. Even worse, their cash balance was drained by more than 27% over the course of the year between May 2023 and May 2024. As you may notice, their total assets fell by more than 15% over the same period. This is because Walgreens understands the dire financial straits it finds itself in and has begun shutting down thousands of its locations across the country.
![image](https://www.tftc.io/content/images/2024/09/Screenshot-2024-09-05-at-4.49.58-PM.png)
The recent efforts of Walgreens to sell off assets to raise cash and pay down debt seem to be completely ineffective: their cash balance is falling faster than their total assets, which in turn are falling 7x faster than their total liabilities. These numbers are most likely going to get worse as cascading sell pressure in commercial real estate markets (the bucket Walgreens locations fall into) drives down the value of their assets, leaving them with less cash to pay down their debts as time goes on.
To make matters worse, it puts the institutions that lent money to Walgreens in a terrible position. How many commercial and investment banks has Walgreens tapped to fund their operations with expensive debt? How exposed to Walgreens is any individual lender? Could a default on some or all of their loans catch these financial institutions off sides? If it isn't Walgreens that pushes them off sides, how many more bad borrowers would it take to push them off sides?
As our good friend Parker Lewis points out, the only way the hemorrhaging can be stopped is if the Federal Reserve and the federal government step in with bailouts in the form of massive liquidity injections via quantitative easing and other emergency measures. On top of this, the Fed and the federal government find themselves in a classic catch-22. If they let Walgreens fall into bankruptcy, it could set off a domino effect that exacerbates inflation. Rite Aid, a similar retail convenience store and pharmacy chain, filed for bankruptcy last October and is still wading its way through that process, part of which has been shuttering many of its storefronts. One has to imagine that since Walgreens and Rite Aid are having these problems, some of their competitors must be feeling the pain as well. If enough of these convenience stores, which tens of millions of Americans depend on for everyday goods, are forced to shut down locations, it could lead to a supply crunch. People will obviously not be able to get their goods from Rite Aid or Walgreens and will flee to alternatives, exacerbating the stress on those supplies, which will drive prices higher.
This is a catch-22 because the only way to avoid this mad dash for consumer goods in the midst of a convenience store Armageddon is to re-introduce ZIRP and flood the market with freshly printed dollars, which will drive prices up as well.
Talk about a rock and a hard place. You better get yourself some bitcoin.
---
Final thought...
Zach Bryan radio crushes.
-
After months of development I am excited to officially announce the first version of DVMDash (v0.1). DVMDash is a monitoring and debugging tool for all Data Vending Machine (DVM) activity on Nostr. The website is live at [https://dvmdash.live](https://dvmdash.live) and the code is available on [Github](https://github.com/dtdannen/dvmdash).
Data Vending Machines ([NIP-90](https://github.com/nostr-protocol/nips/blob/master/90.md)) offload computationally expensive tasks from relays and clients in a decentralized, free-market manner. They are especially useful for AI tools, algorithmic processing of user’s feeds, and many other use cases.
The long term goal of DVMDash is to become 1) a place to easily see what’s happening in the DVM ecosystem with metrics and graphs, and 2) provide real-time tools to help developers monitor, debug, and improve their DVMs.
DVMDash aims to enable users to answer these types of questions at a glance:
* What’s the most popular DVM right now?
* How much money is being paid to image generation DVMs?
* Is any DVM down at the moment? When was the last time that DVM completed a task?
* Have any DVMs failed to deliver after accepting payment? Did they refund that payment?
* How long does it take this DVM to respond?
* For task X, what’s the average amount of time it takes for a DVM to complete the task?
* … and more
For developers working with DVMs there is now a visual, graph based tool that shows DVM-chain activity. DVMs have already started calling other DVMs to assist with work. Soon, we will have humans in the loop monitoring DVM activity, or completing tasks themselves. The activity trace of which DVM is being called as part of a sub-task from another DVM will become complicated, especially because these decisions will be made at run-time and are not known ahead of time. Building a tool to help users and developers understand where a DVM is in this activity trace, whether it’s gotten stuck or is just taking a long time, will be invaluable. *For now, the website only shows 1 step of a dvm chain from a user's request.*
One of the main designs for the site is that it is highly _clickable_, meaning whenever you see a DVM, Kind, User, or Event ID, you can click it and open that up in a new page to inspect it.
Another aspect of this website is that it should be fast. If you submit a DVM request, you should see it in DVMDash within seconds, as well as events from DVMs interacting with your request. I have attempted to obtain DVM events from relays as quickly as possible and compute metrics over them within seconds.
This project makes use of a NoSQL database and a graph database, currently MongoDB and Neo4j, both of which have free community versions that can be run locally.
Finally, I’m grateful to nostr:npub10pensatlcfwktnvjjw2dtem38n6rvw8g6fv73h84cuacxn4c28eqyfn34f for supporting this project.
## Features in v0.1:
### Global Network Metrics:
This page shows the following metrics:
- **DVM Requests:** Number of unencrypted DVM requests (kind 5000-5999)
- **DVM Results:** Number of unencrypted DVM results (kind 6000-6999)
- **DVM Request Kinds Seen:** Number of unique kinds in the Kind range 5000-5999 (except for known non-DVM kinds 5666 and 5969)
- **DVM Result Kinds Seen:** Number of unique kinds in the Kind range 6000-6999 (except for known non-DVM kinds 6666 and 6969)
- **DVM Pub Keys Seen:** Number of unique pub keys that have written a kind 6000-6999 (except for known non-DVM kinds) or have published a kind 31990 event that specifies a ‘k’ tag value between 5000-5999
- **DVM Profiles (NIP-89) Seen**: Number of kind 31990 events that have a ‘k’ tag value for a kind in 5000-5999
- **Most Popular DVM**: The DVM that has produced the most result events (kind 6000-6999)
- **Most Popular Kind**: The Kind in range 5000-5999 that has the most requests by users.
- **24 hr DVM Requests**: Number of kind 5000-5999 events created in the last 24 hrs
- **24 hr DVM Results**: Number of kind 6000-6999 events created in the last 24 hours
- **1 week DVM Requests**: Number of kind 5000-5999 events created in the last week
- **1 week DVM Results**: Number of kind 6000-6999 events created in the last week
- **Unique Users of DVMs**: Number of unique pubkeys of kind 5000-5999 events
- **Total Sats Paid to DVMs**:
- This is an estimate.
- This value is likely a lower bound as it does not take into consideration subscriptions paid to DVMs
- This is calculated by counting the values of all invoices where:
- A DVM published a kind 7000 event requesting payment and containing an invoice
- The DVM later provided a DVM Result for the same job for which it requested payment.
- The assumption is that the invoice was paid, otherwise the DVM would not have done the work
- Note that because there are multiple ways to pay a DVM such as lightning invoices, ecash, and subscriptions, there is no guaranteed way to know whether a DVM has been paid. Additionally, there is no way to know that a DVM completed the job because some DVMs may not publish a final result event and instead send the user a DM or take some other kind of action. A rough sketch of this matching heuristic is shown after this list.
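To make the matching heuristic above concrete, here is a rough sketch in TypeScript. The event shape follows NIP-01/NIP-90 (kind 7000 payment-request feedback carries an `amount` tag denominated in millisats); fetching events from relays is assumed to happen elsewhere, and the helper names are illustrative:
```typescript
// Rough sketch of the "Total Sats Paid to DVMs" estimate described above.
type NostrEvent = { kind: number; pubkey: string; tags: string[][] };

const firstTag = (e: NostrEvent, name: string) =>
  e.tags.find((t) => t[0] === name)?.[1];

function estimateSatsPaid(events: NostrEvent[]): number {
  // jobs for which a DVM actually published a result (kind 6000-6999),
  // keyed by the DVM's pubkey plus the job request id from the "e" tag
  const delivered = new Set(
    events
      .filter((e) => e.kind >= 6000 && e.kind <= 6999)
      .map((e) => `${e.pubkey}:${firstTag(e, "e")}`)
  );

  let millisats = 0;
  // kind 7000 feedback events requesting payment carry an "amount" tag (millisats)
  for (const fb of events.filter((e) => e.kind === 7000)) {
    const jobId = firstTag(fb, "e");
    const amount = firstTag(fb, "amount");
    // count the invoice only if the same DVM later delivered a result for the job
    if (jobId && amount && delivered.has(`${fb.pubkey}:${jobId}`)) {
      millisats += Number(amount);
    }
  }
  return Math.floor(millisats / 1000); // millisats -> sats
}
```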
### Recent Requests:
This page shows the most recent 3 events per kind, sorted by created date. You should always be able to find the last 3 events here of all DVM kinds.
### DVM Browser:
This page will either show a profile of a specific DVM, or when no DVM is given in the url, it will show a table of all DVMs with some high level stats. Users can click on a DVM in the table to load the DVM specific page.
### Kind Browser:
This page will either show data on a specific kind including all DVMs that have performed jobs of that kind, or when no kind is given, it will show a table summarizing activity across all Kinds.
### Debug:
This page shows the graph based visualization of all events, users, and DVMs involved in a single job as well as a table of all events in order from oldest to newest. When no event is given, this page shows the 200 most recent events where the user can click on an event in order to debug that job. The graph-based visualization allows the user to zoom in and out and move around the graph, as well as double click on any node in the graph (except invoices) to open up that event, user, or dvm in a new page.
### Playground:
This page is currently under development and may not work at the moment. If it does work, in the current state you can login with NIP-07 extension and broadcast a 5050 event with some text and then the page will show you events from DVMs. This page will be used to interact with DVMs live. A current good alternative to this feature, for some but not all kinds, is [https://vendata.io/](https://vendata.io/).
## Looking to the Future
I originally built DVMDash out of Fear-of-Missing-Out (FOMO); I wanted to make AI systems that were comprised of DVMs but my day job was taking up a lot of my time. I needed to know when someone was performing a new task or launching a new AI or Nostr tool!
I have a long list of DVMs and Agents I hope to build and I needed DVMDash to help me do it; I hope it helps you achieve your goals with Nostr, DVMs, and even AI. To this end, I wish for this tool to be useful to others, so if you would like a feature, please [submit a git issue here](https://github.com/dtdannen/dvmdash/issues/new) or _note_ me on Nostr!
### Immediate Next Steps:
- Refactoring code and removing code that is no longer used
- Improve documentation to run the project locally
- Adding a metric for number of encrypted requests
- Adding a metric for number of encrypted results
### Long Term Goals:
- Add more metrics based on community feedback
- Add plots showing metrics over time
- Add support for showing a multi-dvm chain in the graph based visualizer
- Add a real-time mode where the pages will auto update (currently the user must refresh the page)
- ... Add support for user-requested features!
## Acknowledgements
There are some fantastic people working in the DVM space right now. Thank you to nostr:npub1drvpzev3syqt0kjrls50050uzf25gehpz9vgdw08hvex7e0vgfeq0eseet for making python bindings for nostr_sdk and for the recent asyncio upgrades! Thank you to nostr:npub1nxa4tywfz9nqp7z9zp7nr7d4nchhclsf58lcqt5y782rmf2hefjquaa6q8 for answering lots of questions about DVMs and for making the nostrdvm library. Thank you to nostr:npub1l2vyh47mk2p0qlsku7hg0vn29faehy9hy34ygaclpn66ukqp3afqutajft for making the original DVM NIP and [vendata.io](https://vendata.io/) which I use all the time for testing!
P.S. I rushed to get this out in time for Nostriga 2024; code refactoring will be coming :)
-
Discover how avatars and pseudonymous identities shape the governance of crypto communities and decentralized organizations (DAOs). In this talk we will focus on the practical workings of decentralized decision-making, the creation and management of avatar profiles, and their role in online reputation systems. You will learn how to build an effective pseudonymous profile, get involved in various crypto projects, and use your activity to earn cryptocurrency. We will also look at examples of successful projects and strategies that will help you find your way around, and succeed in, the dynamic world of decentralized communities.
-
Jan Kolčák comes from central Slovakia and performs under the artist name Deepologic. He has been making music for more than 10 years. He started out as a DJ who loved mixing club music in the deep-tech and afrohouse styles. He always felt drawn to making his own music, so he began studying electronic music production, and eventually released his first EP, titled "Rezonancie". Learning is a lifelong process for him, so he keeps honing his skills in sound and composition so that his tracks are a pleasure to listen to both at home and in the club.
In 2023 he founded his own label, EarsDeep Records, where he gives opportunities to up-and-coming producers. His label is also supported by established names of the Slovak alternative electronic scene. His priority is freedom and refusing to be pigeonholed. As a classic deephouse track puts it: "We are all equal in the house of deep." Hand in hand with freedom goes his love of new technologies, Bitcoin, and the ability to keep perspective, distance, and anonymity in the digital world.
He currently keeps producing his own music, DJs, and runs a podcast where he publishes his mixed sets. At the Lunarpunk festival he will play a DJ set made up of his own productions as well as tracks close to his heart.
[Podcast](https://fountain.fm/show/eYFu6V2SUlN4vC5qBKFk)
[Bandcamp](https://earsdeep.bandcamp.com/)
[Punk Nostr website](https://earsdeep-records.npub.pro/) or nprofile1qythwumn8ghj7un9d3shjtnwdaehgu3wvfskuep0qy88wumn8ghj7mn0wvhxcmmv9uq3xamnwvaz7tmsw4e8qmr9wpskwtn9wvhsz9thwden5te0wfjkccte9ejxzmt4wvhxjme0qyg8wumn8ghj7mn0wd68ytnddakj7qghwaehxw309aex2mrp0yh8qunfd4skctnwv46z7qpqguvns4ld8k2f3sugel055w7eq8zeewq7mp6w2stpnt6j75z60z3swy7h05
-
This workshop is aimed at everyone who struggles to explain Bitcoin to family, friends, partners, or colleagues. When the other side raises objections, we usually go on the counterattack and try to pull out our best arguments. In this workshop I will teach you a new approach to handling objections, and you will get to try it out in practice. The know-how applies not only to communicating Bitcoin but also to improving relationships, raising children, and a better personal life in general.
-
If you have been into Bitcoin for a few years now, you may feel that you already understand everything and that nothing can surprise you. You know what a wallet is, what a seed is, what an address is, maybe even what sha256 is. Are you sure? This talk will try to prove you wrong. 🙂
-
Fighting cancer with the metabolic method means using the body's metabolism against the cancer: managing blood sugar and ketones through diet and exercise, timing different types of workouts, and combining conventional oncological treatment with fasting early on. I will cover which vitamins and supplements I take and which I avoid, following the advice of Miriam, my dietician from the USA who specializes in cancer.
They say that what we don't measure, we don't manage... Well, I measured, a lot and for a long time... there will be charts... and hopefully some fun too... 😉
-
Prediction markets offer a practical way to peek into the future without relying on traditional, often inaccurate methods such as reading coffee grounds. In this presentation we will dive into the history and evolution of prediction markets, describe the impact they have had, and still have, on the availability and quality of information for the general public, and how they are changing the market for that information. We will also look at how these markets give ordinary people access to reliable forecasts and how they can contribute to better decision-making in various areas of life.
-
We all feel the AI hype around us: almost every app now offers some "AI feature," AI startups are raising hundreds of millions, and Europe, as usual, is busy regulating and protecting us from the dangers of artificial intelligence. Slowly, though, we are starting to see the "fruits" of pairing artificial intelligence with humans, with many people reporting significant productivity gains at work as well as in creative activities (even though many hardcore creatives would happily burn anyone at the stake for merely mentioning the "AI" acronym). In the first half of the talk we will look at the different ways AI can help us, whether at work or in our personal lives.
Artificial neurons are practically popping out of our breakfast cereal by now, but how they reach us varies a lot, mainly in whether companies ship them as closed or open-source models. In the second half of the talk we will look at the boom in open AI models and how we can make use of them.
-
What do you get when you combine SNAKE from the old Nokia 3310 with Bitcoin? The game [Chain Duel](https://www.youtube.com/watch?v=5hCI2MzxOzE)!
It is one of the best implementations of Lightning Network functionality and gaming in the Bitcoin world.
You can try it out with friends [at this link](https://game.chainduel.net/). You will find the basic rules on the site, but we recommend getting a feel for them [by playing directly](https://game.chainduel.net/gamemenu).
Chain Duel has been winning crowds of fans at Bitcoin conferences around the world, and we are bringing it to the Lunarpunk festival as well.
A 1v1 multiplayer game that comes down to skill rather than luck will win you over. Come measure your strength against other bitcoiners and win various prizes on top of the satoshis themselves.
Come take part in the first official Chain Duel tournament in Slovakia!
To enter the tournament, [advance registration is required](https://docs.google.com/forms/d/e/1FAIpQLScq96a-zM2i9FCkd3W3haNVcdKFTbPkXObNDh4vJwbmADsb0w/viewform).
-
What has a nomadic family, on the run from control for three years now, learned about control itself? What is freedom, really? Can it coexist with fear? With conflict? Let's try for a moment to forget about taxes, the police, and the state, and look at freedom beyond the boundaries of social ideologies. Instead of searching for more answers, let's see whether there are still new questions hiding somewhere. It might get a little esoteric.
Karel has been living a minimalist life in a camper van for over three years with his wife, two children, and one dog. On the road they started the YouTube channel "[Karel od Martiny](https://www.youtube.com/@KarelodMartiny)" about freedom, nomadism, anarchy, parenting, drugs, and other normal things.
You can also find him [on nostr](nostr:npub1y2se87uxc7fa0aenfqfx5hl9t2u2fjt4sp0tctlcr0efpauqtalqxfvr89).
-
## Welcome to Nostr!
**Introduction**
Is this your first time here on Nostr? Welcome! Nostr is a quirky acronym for "Notes and Other Stuff Transmitted by Relays", with a single goal: resisting censorship. It is an alternative to traditional social networks, communications, blogging, streaming, podcasting, and eventually email (still in development), with decentralized features that put you, the user, in charge. You will never be hassled by an ad, captured by a centralized entity, or monetized by an algorithm.
Allow me to be your host! I'm Onigirl! I am exploring the world of Nostr, a decentralized communication protocol, and I write about the incredible tools and developers on Nostr that bring this realm to life.
![](https://image.nostr.build/130e25ce8e83136e69732b6b37e503541fbd82b7598d58ba64e504e19402d297.jpg)
## Welcome to Nostr Wonderland
You are about to enter another digital world that will blow your mind with all the decentralized applications, clients, and sites you can use. You will never look at communications or social networks the same way again, all thanks to the cryptographic character of Nostr, inspired by "blockchain" technology. Every user, when they create a Nostr account, receives a pair of keys: one private and one public. These are the keys to your own kingdom. Whatever you write, sing, record, or create, it all belongs to you.
![](https://image.nostr.build/6f262291ddd72e45350d360f6e45fc9dd38740074de000ff4e48924bb838bf9c.jpg)
### A pair of Gold and Silver Keys
A friend and I call this "identity by encryption" because your identity is encrypted. You can share your silver key, your "npub", with other users to connect and follow each other. Use your gold key, your "nsec", to access your account and try out the many applications. Keep this key safe at all times. There is no longer any reason to be caged by the terms of social platforms ever again.
Onigirl
```
npub18jvyjwpmm65g8v9azmlvu8knd5m7xlxau08y8vt75n53jtkpz2ys6mqqu3
```
### Still don't have a client? Pick the option that suits you best.
Find the right application for you! Use your gold key, your "nsec", to access these wonderful tools. You can also visit this page to see all the applications. Before pasting your gold key into lots of applications, consider a "signer" for web3 sites. Please see the image below for more details, and consult the legend.
![](https://image.nostr.build/0db9d3efe15c3a02d40c3a5e65e7f6a64d50c6738dfe83c37330194d1b85059f.jpg)
### Get a signer extension via the Chrome Web Store
A signer is a web browser extension. Nos2x and NostrConnect are widely accepted extensions for accessing Nostr. They simplify the process of logging in to "web3" sites. Instead of copying and pasting your gold key, your "nsec", every time, you keep it stored in the extension and grant it permission to access Nostr.
![](https://image.nostr.build/ff751710a0e3c808355a8d03721b633da7a8926a16f7e91b6e5d451fecc5d887.jpg)
### 👉⚡⚡Get a Bitcoin Lightning wallet to send/receive Zaps⚡⚡ (This is optional)
![](https://image.nostr.build/cde06896707080ed92fff2cce1c4dd50b0095f371b8d2762fce7eeebf3696cd4.jpg)
Here on Nostr, we use the Bitcoin Lightning network (L2). You will need a Lightning wallet to send and receive Satoshis, the smallest denomination of a Bitcoin (0.00000001 BTC). "Zaps" are a kind of micropayment on Nostr. If you like a user's content, it is customary to leave them a tip in the form of a "zap". For example, if you like this content, you can zap me some Satoshis to reward my work. But you've only just arrived, so you don't have a wallet yet. Don't worry, I can help with that!
"[Stacker.News](https://stacker.news/r/Hamstr)" is a platform where users can earn SATs for publishing articles and interacting with others.
![](https://image.nostr.build/91198aba183890b479629ee346ba1aa3c9565ab863c665d91d24b0260917a131.jpg)
Stacker.News is the easiest place to get a Bitcoin Lightning wallet address.
1. Log in with your signer extension (Nos2x or NostrConnect), then click on your profile, the string of letters and numbers in the top right corner. You will see something like this
![](https://image.nostr.build/9d03196a588fc2936538803ae27a2cf057b1efaf4eff5558b9612b9fabc7bd31.png)
2. Click "edit" and choose a name you like. You can change it later if you wish.
![](https://image.nostr.build/34e9b0088dd5c12d4b9893e7818f0bee59f918699103ec0c45c01c384eb94164.jpg)
3. Click "save"
4. Create a bio; the SN community is very welcoming, and they will send you some satoshis to welcome you.
5. Your new Bitcoin Lightning wallet address will appear like this
![](https://image.nostr.build/8a3f373cedcebd967aef3ba0fba1504446f3e2b4bb497f7405a05ef3e145aca2.png)
**^^Don't send "zaps" to this address; it is purely for educational purposes.**
6. You can paste your **new** Bitcoin Lightning wallet address into any client or app of your choice. To do so, go to your **profile page**, and under the wallet address field "**Lightning Address**", enter your new address, press **"save"**, and that's it. Congratulations!
👉✨Over time, you may want to move to self-custody options and perhaps even consider self-hosting your own LN node for better privacy. The good news is that stacker.news is also moving away from being a custodial wallet.
⭐NIP-05: DNS identity⭐
Just like on Twitter, a checkmark shows that you are from the same garden, "as a human," and not an outlier like a weed or a "bot", but not in the nefarious way big tech does it. In the Nostr wonderland, it lets you map your silver key, your "npub", to a DNS identifier. Once verified, you can shout to announce your new Nostr residence and share it.
✨There are plenty of options, but if you have followed the steps above, this becomes extremely easy.
👉✅Click on your **"Profile"**, then **"Settings"**, scroll to the bottom and paste your *silver key*, your **"npub"**, click **"Save"**, and that's it! Use your Stacker.news Lightning wallet as your NIP-05. Congratulations!!! You are now verified! Give it a few hours, and when you use your **"main"** client you should see a checkmark.
### Nostr, the nonconformist among servers.
![](https://image.nostr.build/2cf428a02cfc94365150112e12e541ff338390faf0718ed65957098123ca5016.jpg)
Instead of using a single instance or a centralized server, Nostr is built so that multiple databases exchange messages through "relays." Relays, which are neutral and non-discriminatory, store and broadcast public messages on the Nostr network. They transmit messages to all the other clients connected to them, securing communications across the decentralized network.
### My friends on Nostr welcome you!
Welcome to the party. Care for some tea?🍵
![](https://image.nostr.build/03c85e38f0b8a5ed0721281ae23aca4f7d217bef5b12c8d8c2c127c6bb3189f6.jpg)
### There's a lot more!
This is just the tip of the iceberg. Follow me as I continue exploring new lands and the developers, the knights, who power this ecosystem. Find me here for more content like this, and share it with other Nostr users. Meet the knights fighting for freedomTech (the technology of freedom) on Nostr and the projects they contribute to in order to make it a reality.💋
Onigirl
@npub18jvyjwpmm65g8v9azmlvu8knd5m7xlxau08y8vt75n53jtkpz2ys6mqqu3
----
🧡😻This guide was carefully translated by miggymofongo
You can follow her here.
@npub1ajt9gp0prf4xrp4j07j9rghlcyukahncs0fw5ywr977jccued9nqrcc0cs
[website](https://miguelalmodo.com/)
-
Tickets for the Lunarpunk festival are now on sale [on our crowdfunding portal](https://pay.cypherpunk.today/apps/maY3hxKArQxMpdyh5yCtT6UWMJm/crowdfund). Two types of tickets are available: standard admission, and special admission that includes the orange summer workshop.
Don't hesitate to secure your ticket; the sooner you do, the better the festival will be.
You can pay with Bitcoin, via Lightning or on-chain. Your ticket is your e-mail address (we don't send confirmation e-mails; if the payment went through, you're in).
[Buy a ticket](https://pay.cypherpunk.today/apps/maY3hxKArQxMpdyh5yCtT6UWMJm/crowdfund)
-
What do exotic protocols like Nostr, Cashu, or Reticulum bring us? Encryption, signing, peer-to-peer communication, and new ways of distributing and rewarding content.
We will show off some cool apps, how the individual networks can be interconnected, and how they relate to one another.
-
Entrepreneurship is a language with crystal clear rules.
Instrumentalists see entrepreneurship statically and project that view onto society, which is why society so often perceives us negatively. Real entrepreneurs, however, are "communicators".
Jozef Martiniak is the founder of AUSEKON, the Institute of Austrian School of Economics.
-
How I try to practice LunarPunk without building optionality by "exiting" abroad. Not everyone is willing or able to change "places"; how, then, can you minimize your interaction with the state? Not a how-to guide, just observations from everyday life.
-
This year, a workshop on the "orange summer" awaits you, led by Juraj Bednár and Marianna Sádecká. You will learn how the experience of Bitcoin changes our perception, how to navigate today's world, and how to clear away the mental fog caused by fiat life.
The workshop requires an [extra ticket](https://pay.cypherpunk.today/apps/maY3hxKArQxMpdyh5yCtT6UWMJm/crowdfund) (you can also buy one on site).
For more about the orange summer, we recommend listening to [the podcast on this topic](https://juraj.bednar.io/podcast/2024/04/13/oranzove-leto-stanme-sa-tvorcami-svojho-zivota-s-mariannou-sadeckou/) before the workshop.
-
# Stories
* [The case of Grêmio TV](nostr:naddr1qqyxzce3vguxvvfkqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823caz985v)
* [Jofer](nostr:naddr1qqyxxdt9x4snwwpkqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cgsxml2)
* [Lagoa Santa: how to get there -- starting from the Belo Horizonte bus station](nostr:naddr1qqyrsdrpxverwdmrqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c724d7h)
* [O Planetinha](nostr:naddr1qqyxzvnzv9jrgef5qyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cgmfd3v)
-
# Why relay hints are important
Recently [Coracle has removed support](nostr:nevent1qqsfmgthccjuz7quucel20wjanh80sp8nxf5ujgpj5hwdzk8japavzgpzemhxue69uhky6t5vdhkjmn9wgh8xmmrd9skcq3qjlrs53pkdfjnts29kveljul2sm0actt6n8dxrrzqcersttvcuv3qca68ht) for following relay hints in Nostr event references.
Supposedly Coracle is now relying only on public key hints and `kind:10002` events to determine where to fetch events from a user. That is a catastrophic idea that destroys much of Nostr's flexibility for no gain at all.
* Someone makes a post inside a community (either a NIP-29 community or a NIP-87 community) and others want to refer to that post in discussions in the external Nostr world of `kind:1`s -- now that cannot work because the person who created the post doesn't have the relays specific to those communities in their outbox list;
* There is a discussion happening in a niche relay, for example, a relay that can only be accessed by the participants of a conference for the duration of that conference -- since that relay is not in anyone's public outbox list, it's impossible for anyone outside of the conference to ever refer to these events;
* Some big public relays, say, _relay.damus.io_, decide to nuke their databases or periodically delete old events, a user keeps using that big relay as their outbox because it is fast and reliable, but chooses to archive their old events in a dedicated archival relay, say, _cellar.nostr.wine_, while prudently not including that in their outbox list because that would make no sense -- now it is impossible for anyone to refer to old notes from this user even though they are publicly accessible in _cellar.nostr.wine_;
* There are [topical relays](nostr:naddr1qqyrze35vscrzvfcqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c0z85e2) that curate content relating to niche (non-microblogging) topics, say, cooking recipes, and users choose to publish their recipes to these relays only -- but now they can't refer to these relays in the external Nostr world of `kind:1`s because these topical relays are not in their outbox lists.
* Suppose a user wants to maintain two different identities under the same keypair, say, one identity only talks about soccer in English, while the other only talks about art history in French, and the user very prudently keeps two different `kind:10002` events in two different sets of "indexer" relays (or does it in some better way of announcing different relay sets) -- now one of this user's audiences cannot ever see notes created by him with his other persona; one half of this user's content will be inaccessible to the other half and vice-versa.
* If for any reason a relay does not want to accept events of a certain kind, a user may publish them to other relays, and it would all work fine if others could follow the hint when the user referenced that externally-published event from a normal event -- but now the externally-published event is unreachable because the external relay is not in the user's outbox list.
* If someone, say, Alex Jones, is hard-banned everywhere and cannot even broadcast `kind:10002` events to any of the commonly used index relays, that person will now appear as banned in most clients. In an ideal world in which clients followed `nprofile` and other relay hints, Alex Jones could still live a normal Nostr life: he would print business cards with his `nprofile` instead of an `npub`, and clients would immediately know from what relay to fetch his posts. When other users shared his posts or replied to them, they would include a relay hint to his personal relay, and others would be able to see them and then start following him on that relay directly -- but now Alex Jones's events cannot be read by anyone that doesn't already know his relay.
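For context, a relay hint is nothing exotic: it is just the optional relay URL slot that rides alongside an event id in a reference. A minimal sketch of the difference, using the NIP-10 `"e"` tag layout (the id and relay URL below are hypothetical placeholders):
```python
# An "e" tag referencing another event, per the NIP-10 layout:
# ["e", <event-id>, <relay-url>, <marker>]. The id and relay URL here
# are hypothetical placeholders.
event_id = "<referenced event id, hex>"

with_hint = ["e", event_id, "wss://relay.example.com", "root"]
without_hint = ["e", event_id]

# With the hint, a client knows exactly where to fetch the referenced
# event. Without it, the client must guess -- e.g. from the author's
# kind:10002 outbox list -- which fails in every scenario listed above.
```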
-
# Embracing AI: A Case for AI Accelerationism
In an era where artificial intelligence (AI) development is at the forefront of technological innovation, a counter-narrative championed by a group I refer to as the 'AI Decels'—those advocating for the deceleration of AI advancements—seems to be gaining significant traction. After tuning into a recent episode of the [Joe Rogan Podcast](https://fountain.fm/episode/0V35t9YBkOMVM4WRVLYp), I realized that the prevailing narrative around AI was heading in a dangerous direction. Rogan had on Aza Raskin and Tristan Harris, technology safety advocates who released a talk called '[The AI Dilemma](https://www.youtube.com/watch?v=xoVJKj8lcNQ),' for a discussion. You may know them from the popular documentary '[The Social Dilemma](https://www.thesocialdilemma.com/)' on the dangers of social media. It became increasingly clear that the cautionary stance dominating this discourse might be tipping the scales too far, veering towards an over-regulated future that stifles innovation rather than fostering it.
![](8046488-1703007156335-9e4d055bcadad.jpg)
## Are we moving too fast?
While acknowledging AI's benefits, Aza and Tristan fear it could be dangerous if not guided by ethical standards and safeguards. They believe AI development is moving too quickly and that the right incentives for its growth are not in place. They are concerned about the possibility of "civilizational overwhelm," where advanced AI technology far outpaces 21st-century governance. They fear a scenario where society and its institutions cannot manage or adapt to the rapid changes and challenges introduced by AI.
They argue for regulating and slowing down AI development due to rapid, uncontrolled advancement driven by competition among companies like Google, OpenAI, and Microsoft. They claim this race can lead to unsafe releases of new technologies, with AI systems exhibiting unpredictable, emergent behaviors, posing significant societal risks. For instance, AI can inadvertently learn tasks like sentiment analysis or human emotion understanding, creating potential for misuse in areas like biological weapons or cybersecurity vulnerabilities.
Moreover, AI companies' profit-driven incentives often conflict with the public good, prioritizing market dominance over safety and ethics. This misalignment can lead to technologies that maximize engagement or profits at societal expense, similar to the negative impacts seen with social media. To address these issues, they suggest government regulation to realign AI companies' incentives with safety, ethical considerations, and public welfare. Implementing responsible development frameworks focused on long-term societal impacts is essential for mitigating potential harm.
## This isn't new
Though the premise of their concerns seems reasonable, this reaction is dangerous and all too common with the emergence of new technologies. Take the example they use in the podcast: the technological breakthrough of oil. Oil as energy was a technological marvel and changed the course of human civilization. The embrace of oil — now the cornerstone of industry in our age — revolutionized how societies operated, fueled economies, and connected the world in unprecedented ways. Yet recently, as ideas of its environmental and geopolitical ramifications propagated, the narrative around oil has shifted.
Tristan and Aza detail this shift and claim that though the period was great for humanity, we didn't have another technology to switch to once the consequences became apparent. The problem with that argument is that we did innovate toward a better alternative: nuclear. However, at its technological breakthrough, nuclear was met with severe suspicion, from safety concerns to ethical debates over its use. Overregulation born of these concerns caused a decades-long stagnation in nuclear innovation, and even today we are still stuck with a heavy reliance on coal and oil. The scare tactics and fear-mongering had consequences, and, interestingly, they don't see the parallels with their current deceleration stance on AI.
These examples underscore a critical insight: the initial anxiety surrounding new technologies is a natural response to the unknowns they introduce. Yet, history shows that too much anxiety can stifle the innovation needed to address the problems posed by current technologies. The cycle of discovery, fear, adaptation, and eventual acceptance reveals an essential truth—progress requires not just the courage to innovate but also the resilience to navigate the uncertainties these innovations bring.
Moreover, believing we can predict and plan for all AI-related unknowns reflects overconfidence in our understanding and foresight. History shows that technological progress, marked by unexpected outcomes and discoveries, defies such predictions. The evolution from the printing press to the internet underscores progress's unpredictability. Hence, facing AI's future requires caution, curiosity, and humility. Acknowledging our limitations and embracing continuous learning and adaptation will allow us to harness AI's potential responsibly, illustrating that embracing our uncertainties, rather than pretending to foresee them, is vital to innovation.
The journey of technological advancement is fraught with both promise and trepidation. Historically, each significant leap forward, from the dawn of the industrial age to the digital revolution, has been met with a mix of enthusiasm and apprehension. Aza Raskin and Tristan Harris's thesis in the 'AI Dilemma' embodies the latter.
## Who defines "safe?"
When slowing down technologies for safety or ethical reasons, the issue arises of who gets to define what "safe" or "ethical" means. This inquiry is not merely technical but deeply ideological, touching the very core of societal values and power dynamics. For example, the push for Diversity, Equity, and Inclusion (DEI) initiatives shows how specific ideological underpinnings can shape definitions of safety and decency.
Take the case of the initial release of Google's AI chatbot, Gemini, which chose the ideology of its creators over truth. Luckily, the answers were so ridiculous that the pushback was sudden and immediate. My worry, however, is if, in correcting this, they become experts in making the ideological capture much more subtle. Large bureaucratic institutions' top-down safety enforcement creates a fertile ground for ideological capture of safety standards.
![](Screenshot%202024-02-27%20at%207.26.46%E2%80%AFPM.png)
I claim that the issue is not the technology itself but the lens through which we view and regulate it. Suppose the gatekeepers of 'safety' are aligned with a singular ideology. In that case, AI development would skew to serve specific ends, sidelining diverse perspectives and potentially stifling innovative thought and progress.
In the podcast, Tristan and Aza suggest such manipulation as a solution. They propose using AI for consensus-building and creating "shared realities" to address societal challenges. In practice, this means that when individuals' viewpoints seem to be far apart, we can leverage AI to "bridge the gap." How they bridge the gap and what we would bridge it toward is left to the imagination, but to me, it is clear. Regulators will inevitably influence it from the top down, which, in my opinion, would be the opposite of progress.
In navigating this terrain, we must advocate for a pluralistic approach to defining safety, encompassing various perspectives and values achieved through market forces rather than a governing entity choosing winners. The more players that can play the game, the more wide-ranging perspectives will catalyze innovation to flourish.
## Ownership & Identity
Just because we should accelerate AI forward does not mean I do not have my concerns. When I think about what could be the most devastating for society, I don't believe we have to worry about a Matrix-level dystopia; I worry about freedom. As I explored in "[Whose data is it anyway?](https://cwilbzz.com/whose-data-is-it-anyway/)," my concern gravitates toward the issues of data ownership and the implications of relinquishing control over our digital identities. This relinquishment threatens our privacy and the integrity of the content we generate, leaving it susceptible to the inclinations and profit of a few dominant tech entities.
To counteract these concerns, a paradigm shift towards decentralized models of data ownership is imperative. Such standards would empower individuals with control over their digital footprints, ensuring that we develop AI systems with diverse, honest, and truthful perspectives rather than the massaged, narrow viewpoints of their creators. This shift safeguards individual privacy and promotes an ethical framework for AI development that upholds the principles of fairness and impartiality.
As we stand at the crossroads of technological innovation and ethical consideration, it is crucial to advocate for systems that place data ownership firmly in the hands of users. By doing so, we can ensure that the future of AI remains truthful, non-ideological, and aligned with the broader interests of society.
## But what about the Matrix?
I know I am in the minority on this, but I feel that the concerns of AGI (Artificial General Intelligence) are generally overblown. I am not scared of reaching the point of AGI, and I think the idea that AI will become so intelligent that we will lose control of it is unfounded and silly. Reaching AGI is not reaching consciousness; being worried about it spontaneously gaining consciousness is a misplaced fear. It is a tool created by humans for humans to enhance productivity and achieve specific outcomes.
At a technical level, large language models (LLMs) are trained on extensive datasets and learning patterns from language and data through a technique called "unsupervised learning" (meaning the data is untagged). They predict the next word in sentences, refining their predictions through feedback to improve coherence and relevance. When queried, LLMs generate responses based on learned patterns, simulating an understanding of language to provide contextually appropriate answers. They will only answer based on the datasets that were inputted and scanned.
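To ground the idea of next-word prediction, here is a deliberately tiny toy sketch: it counts which word follows which in a corpus and always predicts the most frequent successor. Nothing about it resembles a production LLM except the objective of predicting the next token.
```python
from collections import Counter, defaultdict

# Toy "training": count, for every word, which words follow it.
corpus = "the cat sat on the mat the cat ate".split()
successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1

def predict_next(word):
    # Toy "inference": emit the most frequent successor seen in training.
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (seen twice, vs "mat" once)
```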
AI will never be "alive." AI lacks inherent agency, consciousness, and the characteristics of life; it is not capable of independent thought or action, and it cannot act independently of human control. Concerns about AI gaining autonomy and posing a threat to humanity are based on a misunderstanding of the nature of AI and the fundamental differences between living beings and machines. Fearing that AI will spontaneously develop a will or consciousness is more like expecting a hammer to start walking than a realistic prospect of creating consciousness through programming. Right now, there is only one way to create consciousness, and I'm skeptical that it is something we will ever be able to harness and engineer as humans. Irrespective of its complexity — and yes, our tools will continue to become ever more complex — machines, specifically AI, cannot transcend their nature as non-living, inanimate objects programmed and controlled by humans.
![](6u1bgq490h8c1.jpeg)
The advancement of AI should be seen as enhancing human capabilities, not as a path toward creating autonomous entities with their own wills. So, while AI will continue to evolve, improve, and become more powerful, I believe it will remain under human direction and control without the existential threats often sensationalized in discussions about AI's future.
With this framing, we should not view the race toward AGI as something to avoid. This will only make the tools we use more powerful, making us more productive. With all this being said, AGI is still much farther away than many believe.
Today's AI excels in specific, narrow tasks, known as narrow or weak AI. These systems operate within tightly defined parameters, achieving remarkable efficiency and accuracy that can sometimes surpass human performance in those specific tasks. Yet, this is far from the versatile and adaptable functionality that AGI represents.
Moreover, the exponential growth of computational power observed in the past decades does not directly translate to an equivalent acceleration in achieving AGI. AI's impressive feats are often the result of massive data inputs and computing resources tailored to specific tasks. These successes do not inherently bring us closer to understanding or replicating the general problem-solving capabilities of the human mind, which again would only make the tools more potent in _our_ hands.
While AI will undeniably introduce challenges and change the aspects of conflict and power dynamics, these challenges will primarily stem from humans wielding this powerful tool rather than the technology itself. AI is a mirror reflecting our own biases, values, and intentions. The crux of future AI-related issues lies not in the technology's inherent capabilities but in how it is used by those wielding it. This reality is at odds with the idea that we should slow down development as our biggest threat will come from those who are not friendly to us.
## AI Begets AI
While the unknowns of AI development and its pitfalls indeed stir apprehension, it's essential to recognize the power of market forces and human ingenuity in leveraging AI to address these challenges. History is replete with examples of new technologies raising concerns, only for those very technologies to provide solutions to the problems they initially seemed to exacerbate. Think how silly and unfair it would be to fight a war against a country that embraced oil while yours never did and still got its energy primarily from burning wood.
![](Screenshot%202024-06-12%20at%205.13.16%E2%80%AFPM.png)
The evolution of AI is no exception to this pattern. As we venture into uncharted territories, the potential issues that arise with AI—be it ethical concerns, use by malicious actors, biases in decision-making, or privacy intrusions—are not merely obstacles but opportunities for innovation. It is within the realm of possibility, and indeed, probability, that AI will play a crucial role in solving the problems it creates. The idea that there would be no incentive to address and solve these problems is to underestimate the fundamental drivers of technological progress.
Market forces, fueled by the demand for better, safer, and more efficient solutions, are powerful catalysts for positive change. When a problem is worth fixing, it invariably attracts the attention of innovators, researchers, and entrepreneurs eager to solve it. This dynamic has driven progress throughout history, and AI is poised to benefit from this problem-solving cycle.
Thus, rather than viewing AI's unknowns as sources of fear, we should see them as sparks of opportunity. By tackling the challenges posed by AI, we will harness its full potential to benefit humanity. By fostering an ecosystem that encourages exploration, innovation, and problem-solving, we can ensure that AI serves as a force for good, solving problems as profound as those it might create. This is the optimism we must hold onto—a belief in our collective ability to shape AI into a tool that addresses its own challenges and elevates our capacity to solve some of society's most pressing issues.
## An AI Future
The reality is that it isn't whether AI will lead to unforeseen challenges—it undoubtedly will, as has every major technological leap in history. The real issue is whether we let fear dictate our path and confine us to a standstill or embrace AI's potential to address current and future challenges.
The approach to solving potential AI-related problems with stringent regulations and a slowdown in innovation is akin to cutting off the nose to spite the face. It's a strategy that risks stagnating the U.S. in a global race where other nations will undoubtedly continue their AI advancements. This perspective dangerously ignores that AI, much like the printing press of the past, has the power to democratize information, empower individuals, and dismantle outdated power structures.
The way forward is not less AI but more of it, more innovation, optimism, and curiosity for the remarkable technological breakthroughs that will come. We must recognize that the solution to AI-induced challenges lies not in retreating but in advancing our capabilities to innovate and adapt.
AI represents a frontier of limitless possibilities. If wielded with foresight and responsibility, it's a tool that can help solve some of the most pressing issues we face today. There are certainly challenges ahead, but I trust that with problems come solutions. Let's keep the AI Decels from steering us away from this path with their doomsday predictions. Instead, let's embrace AI with the cautious optimism it deserves, forging a future where technology and humanity advance to heights we can't imagine.
-
I'm having some pain in my heart about the U.S. elections.
![](https://m.stacker.news/34573)
Ever since Obama campaigned for office, more and more young voters have come out of the woodwork. Things have not improved. They've actively told you that "**your vote matters.**" I believe this to be a lie unless any citizen can demand, at the gate of the White House, to be allowed to hold and point a gun to the president's head. (**Relax**, this is hyperbole.)
Why so dramatic?
Well, what does the president do? Signs bills, commands the military, and nominates new Fed chairmen, ambassadors, Supreme Court judges, and senior officials, all while traveling in luxury planes and living in a white palace for four years.
**Every TIME**, they promise to protect citizens' rights when they take the oath of office.
*...They've broken this promise several times, under so-called "emergency crises."*
![](https://m.stacker.news/34576)
The purpose of a president today, it seems, is basically to hire armed thugs to keep the citizens in check and make sure you "voluntarily continue to be a slave" to the system; hence the IRS. The corruption extends from the cop to the judge and even to the politicians. Politicians get paid by lobbyists to create bills in Congress for the president to sign. There's no right answer when money is involved with politicians. It is the same whether you vote Obama, Biden, Trump, or Haley. They will wield the pen to serve themselves while **saying** it will benefit the country.
In the first 100 years of the presidency, the government wasn't even a big deal. It didn't interfere with your life nearly as much as it does today.
![](https://m.stacker.news/34575)
^^ You hold the **power** in your hands; don't let them take it. Don't believe me? Try to get a loan from a bank without a signature. Your signature is as good as gold (if not better) and is an original trademark.
Just Don't Vote.
End the Fed.
Opt out.
![](https://m.stacker.news/34574)
^^ I choose to forge my own path, even if it means leaving everything I knew before. It doesn't have to be a spiritual thing; some have called me religious because of this. We're all capable of greatness and of having humanity.
✨Don't have a machine heart with a machine mind. Instead, choose to have a heart like the cowardly lion from the "**Wizard Of Oz.**"
There's no such thing as a good president or politician.
If there were, they would have issued *interest-free* **Federal Reserve Notes.** Lincoln and Kennedy tried to do this; they got shot.
There's still a banner of America flying there, but it's so far gone that I can hardly recognize it; all I seem to see is a bunch of 🏳🌈 pride flags.
✨Patrick Henry got it wrong when he delivered his speech, "Give me liberty or give me death." Liberty and freedom are two completely different things.
Straight from Merriam-Webster.
Choose: right or left?
![](https://m.stacker.news/34587)
No control; to be 100% without restrictions: free.
![](https://m.stacker.news/34588)
✨I disagree with the example sentence given, because you cannot advocate for human freedom and own slaves; the one contradicts the other. Yet that was common in the founding days.
I can understand that many may disagree with me, and you might be thinking, "**This time will be different.**" I respectfully disagree, and the proxy wars are proof. Learn the importance of Bitcoin; every **Satoshi** is a step away from corruption.
✨What does it look like to pull the curtains from the "**Wizard of Oz?**"
![](https://imgs.search.brave.com/Ws6je3deRy1q49UBFk_Y-BotBo7Y-CyCK-lNoEL2cak/rs:fit:860:0:0/g:ce/aHR0cHM6Ly9pLmt5/bS1jZG4uY29tL3Bo/b3Rvcy9pbWFnZXMv/bmV3c2ZlZWQvMDAx/LzM4OC84NjQvOTZl)
Have you watched the video below showing what 30 trillion dollars of debt looks like visually? Even I was blown away.
https://video.nostr.build/d58c5e1afba6d7a905a39407f5e695a4eb4a88ae692817a36ecfa6ca1b62ea15.mp4
![](https://m.stacker.news/34615)
I say this with love. Hear my plea?
Normally, I don't write about anything political. It just feels like a losing game. My energy feels better spent learning new things, writing, and creating, even a blog post as simple as this. Stack SATs, and stay humble.
<3
Onigirl
-
# About Nostr, email and subscriptions
I check my emails like once or twice a week, always when I am looking for something specific in there.
Then I go there and I see a bunch of other stuff I had no idea I was missing. Even many things I wish I had seen before actually. And sometimes people just expect and assume I would have checked emails instantly as they arrived.
It's so weird because I'm not making a point, I just don't remember to open the damn "gmail.com" URL.
---
I remember some people were making a Nostr service a while ago that sent people a DM with Nostr articles inside -- or some other form of "subscription service on Nostr". It makes no sense at all.
Pulling DMs from relays is exactly the same process as pulling normal public events (actually slightly more convoluted), so why would a service assume that "sending a DM" is more likely to reach the target subscriber than the topic or writer that subscriber explicitly asked for?
Maybe that is true for how some specific clients work, but fundamentally it is a very broken assumption, one that comes from some fantastic past era in which emails were 100% always seen and there was no way for anyone to subscribe to someone else's posts.
Building around such broken assumptions is the wrong approach. Instead we should be building new flows for subscribing to specific content from specific Nostr-native sources (creators directly or manual or automated curation providers, communities, relays etc), which is essentially what most clients are already doing anyway, but specifically Coracle's new custom feeds come to mind now.
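To make the "same process" point above concrete: on the wire, pulling DMs and pulling an author's articles are both just NIP-01 subscription filters sent to relays. A minimal sketch (the pubkey values are hypothetical placeholders):
```python
# Both flows are the same REQ-subscription mechanism from NIP-01;
# neither one is inherently more likely to reach the reader.
subscriber = "<subscriber pubkey, hex>"  # hypothetical placeholder
writer = "<writer pubkey, hex>"          # hypothetical placeholder

# "DM subscription service" flow: pull DMs (kind 4) addressed to the reader.
dm_filter = {"kinds": [4], "#p": [subscriber]}

# Nostr-native flow: pull the writer's long-form articles (kind 30023).
article_filter = {"kinds": [30023], "authors": [writer]}
```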
---
This also [reminds me](nostr:nevent1qqsda83vup73lhv6m4mee2wka83dzuwf78e95wtpn70r6ce99e8ah4gpr9mhxue69uhhq7tjv9kkjepwve5kzar2v9nzucm0d5pzqwlsccluhy6xxsr6l9a9uhhxf75g85g8a709tprjcn4e42h053vammnc95) of the interviewer asking the Farcaster creator if Farcaster made "email addresses available to content creators" completely ignoring all the cryptography and nature of the protocol (Farcaster is shit, but at least they tried, and in this example you could imagine the interviewer asking the same thing about Nostr).
I imagine that if the interviewer had asked these people who were working (or suggesting) the Nostr DM subscription flow they would have answered: "no, you don't get their email addresses, but you can send them uncensorable DMs!" -- and that, again, is getting everything backwards.
-
# Bitcoin transactions explained
A transaction is a piece of data that takes **inputs** and produces **outputs**. Forget about the blockchain thing: Bitcoin is actually just a big tree of transactions. The blockchain is just a way to keep transactions ordered.
Imagine you have 10 satoshis. That means you have them in an unspent transaction output (**UTXO**). You want to spend them, so you create a transaction. The transaction must reference unspent outputs as its inputs. Every transaction has an immutable id, so you use that id plus the index of the output (because transactions can have multiple outputs). Then you provide a **script** (and the related signatures) that unlocks each referenced output, and you specify your own outputs along with a **script** that locks each of them.
![illustration of a simple bitcoin transaction](/static/bitcoin-transaction-sequence-drawing.png)
As you can see, there's this lock/unlocking thing and there are inputs and outputs. Inputs must be unlocked by fulfilling the conditions specified by the person who created the transaction they're in. And outputs must be locked so anyone wanting to spend those outputs will need to unlock them.
For most of the cases locking and unlocking means specifying a **public key** whose controller (the person who has the corresponding **private key**) will be able to spend. Other fancy things are possible too, but we can ignore them for now.
Back to the 10 satoshis you want to spend. Since you've successfully referenced 10 satoshis and unlocked them, now you can specify the outputs (this is all done in a single step). You can specify one output of 10 satoshis, two of 5, one of 3 and one of 7, three of 3 and so on. The sum of outputs can't be more than 10. And if the sum of outputs is less than 10 the difference goes to fees. In the first days of Bitcoin you didn't need any fees, but now you do, otherwise your transaction won't be included in any block.
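To make that concrete, here is a toy sketch of the structure in plain Python dictionaries. It is not real serialization and the scripts are placeholders, but it shows the input-references-output chain and where the fee comes from:
```python
# Toy model of the transaction described above: spending a 10-satoshi UTXO.
utxo_ref = {
    "txid": "<id of the transaction that paid us>",  # immutable tx id
    "vout": 0,  # index of the output inside that transaction
}

tx = {
    "inputs": [{
        **utxo_ref,
        "unlocking_script": "<signature + public key>",  # satisfies the lock
    }],
    "outputs": [
        {"value": 7, "locking_script": "<recipient's pubkey condition>"},
        {"value": 2, "locking_script": "<our own pubkey condition>"},  # change
    ],
}

# inputs (10) - outputs (7 + 2) = 1 satoshi left over: that is the fee.
fee = 10 - sum(out["value"] for out in tx["outputs"])
assert fee == 1
```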
![illustration of a complex bitcoin transaction](/static/bitcoin-transaction-complex-drawing.png)
If you're still interested in transactions maybe you could take a look at [this small chapter](https://github.com/bitcoinbook/bitcoinbook/blob/6d1c26e1640ae32b28389d5ae4caf1214c2be7db/ch06_transactions.adoc) of that Andreas Antonopoulos book.
If you hate Andreas Antonopoulos because he is a communist shitcoiner or don't want to read more than half a page, go here: <https://en.bitcoin.it/wiki/Coin_analogy>
-
![](https://image.nostr.build/118fdf2359cb4e6622b0809244a75cbd828d4ab6c91baa813bcf8f2ee315020d.jpg)
It's been a journey from the Publishing Forest of Nostr to the open sea of web3. I've come across a beautiful chain of islands and thought: why not take a break and explore this place? If I'm searching for devs and FOSS, I should search every nook and cranny inside the realm of Nostr. It is quite vast for little old me. I'm just a little hamster, and I don't speak in code or binary zeros and ones.
![](https://image.nostr.build/7950568481a0c859fe532bb3b2ebd3c85b726e8e3826aed16aecbdac36913f26.jpg)
After being at sea for a while, my heart raced with excitement at what I might find. It seems I wasn't alone; there were others here like me! Let's help spread the message about this uncharted realm. See, look at the other sailboats, aren't they pretty? Thanks to some generous donations of SATs, I was able to afford the docking fee.
![](https://image.nostr.build/15a48283725e1bac2b5958c0decf2b95d41cd66393e589deb9188cf3ff863078.jpg)
Ever feel like everyone was going to a party, and you were supposed to dress up, but you missed the memo? Or a comic-con? Well, I felt completely underdressed, and that's an understatement. Turns out there are some knights around here. Take a peek!
![](https://image.nostr.build/fe0c0cd914d54e7e6a3eb4d52ea7be0c92cf19b2b9cceddaaba8673ef9d46331.jpg)
A black cat with a knight passed by very quickly. He was moving too fast for me to track. Where was he going? Then I spotted a group of knights heading in the same direction, so I tagged along. The vibes from these guys were impossible to resist. They were just happy-go-lucky. 🥰They were heading to a tavern on a cliff off the island.
![](https://image.nostr.build/0cd9dd44497fd2d3ba6d1e75d31809638274539f57d0e0ccb859c48a760b7793.jpg)
Ehh? A tavern? Slightly confused, I wondered what these knights could possibly be doing here. I guess when they're done with their rounds they come here to blow off steam. Things are looking curiouser and curiouser. But the black cat from earlier was here with its rider, who was dismounting. So you can only guess where I'm going.
![](https://image.nostr.build/974f5d750e790134c7394065fa9613999ee04e2f009f5e5cdcc8680031b9f002.jpg)
The atmosphere in this pub was lively and energetic. So many knights spoke among themselves, a group here, another there, but one caught my eye. I went up to a group at a table whose heights towered well above me even when seated. Taking a deep breath, I asked, "Who manages this place?" They unanimously pointed to one waiting for ale at the bar. What was he doing? Watching others talk? How peculiar.
![](https://image.nostr.build/411646a28602a5082f7f8002a7fc1778d71ffac32b28e80a3128ea9b1fe44b5a.jpg)
So I went up to him! And introduced myself.
>"Hello I'm Onigirl"
>>"Hello Onigirl, Welcome to Gossip"
>>>"Gossip, what is Gossip?" scratching my head and whiskers.
What is Gossip?
Gossip is FOSS and a great client for **privacy-minded nostriches**. It avoids browser tech altogether, bypassing JavaScript☕, HTML parsing and rendering, and CSS (the only web tech it uses is HTTP GET and WebSockets), and it draws its interface with OpenGL-style rendering instead. **Nostriches who wish to remain anonymous can use Gossip over TOR.** Mike recommends using QubesOS, Whonix, and/or Tails. [FYI: Gossip does not natively support a Tor SOCKS5 proxy.]
Most helpful if you're a journalist who needs to spill the beans.
![](https://image.nostr.build/5d20cfe35748ad97ed8f7ebfe4dc70891f0752db380d29d8fb977baa4e64b000.jpg)
On top of using your nsec, your encryption key, Gossip adds another layer of security over your account with a password login. There's nothing wrong with using browser extensions (such as nos2x or Flamingo), which make it super easy to log in to Nostr-enabled websites and apps, but they do expose you to browser vulnerabilities.
Mike points out that
>"people have already had their private key stolen from other nostr clients,"
so it's a concern if you value your account. I most certainly care about mine.
![](https://image.nostr.build/0c8e7bbb8ef1ef265d8aa0d99b0fc03a017ceb685bf7bbcf57a3cd30004de4f8.jpg)
Gossip's UI is simple and clean, revolving around NIP-65, also called the "Outbox model." As posted on [GitHub](https://github.com/nostr-protocol/nips/blob/master/65.md),
>"This NIP allows Clients to connect directly with the most up-to-date relay set from each individual user, eliminating the need of broadcasting events to popular relays."
This avoids the pattern of clients tracking only a specific set of relays, which congests those relays whenever you publish a note, and which can also be censored; by using Gossip you can publish notes to alternative relays that have not censored you and still reach the same followers.
👉The easiest way to translate that: it reduces the redundancy of publishing to popular or centralized relays just so your content reaches your followers.
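For reference, the relay list Gossip reads is itself just a Nostr event of kind 10002, as defined in NIP-65. A minimal sketch of its shape (the relay URLs are hypothetical examples):
```python
# Shape of a NIP-65 relay list event (kind 10002). Clients like Gossip
# read these to learn where each user writes (outbox) and reads (inbox).
relay_list_event = {
    "kind": 10002,
    "content": "",  # NIP-65 keeps content empty; the "r" tags carry the data
    "tags": [
        ["r", "wss://relay.example.com"],            # both read and write
        ["r", "wss://outbox.example.net", "write"],  # where the user posts
        ["r", "wss://inbox.example.org", "read"],    # where the user reads
    ],
}
```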
![](https://image.nostr.build/4813775d4d64a18de7506c0f08baecf4106b0343e89881878dd1cdd0c6f2ad07.jpg)
Cool! What an awesome client, I mean tavern! What else does this knight do? He reaches for something in his pocket. What is it? Pocket is a database for storing and retrieving Nostr events, which Mike has written in Rust with a few extra kinks inspired by Will's nostrdb. Still in development, but it'll be another tool for you, dear user! 💖💕💚
Onigirl is proud to present these knights to the community and honor them with kisu. 💋💋💋 Show some 💖💘💓🧡💙💚
👉💋💋Will - jb55 Lord of apples 💋
@npub1xtscya34g58tk0z605fvr788k263gsu6cy9x0mhnm87echrgufzsevkk5s
👉💋💋 Mike Knight - Lord of Security 💋
@npub1acg6thl5psv62405rljzkj8spesceyfz2c32udakc2ak0dmvfeyse9p35c
![](https://image.nostr.build/596975250b7eae45f9e3715c7456f9c8a9da86576ab39663cb8d1bea8276042a.jpg)
Knights spend a lot of time behind the screen coding for the betterment of humanity. It is a tough job! Let's appreciate these knights, and the relay operators, who support this amazing realm of Nostr! FOSS for all!
![](https://image.nostr.build/1d19dbc065b3486db86d993fb2057114948086cb9dfbae2ba46cef9110ab6d1f.jpg)
This article was prompted by the need for privacy and security of your data. They're different things, not to be confused.
Recently, Edward Snowden warns Bitcoin devs about the need for privacy, [Quote](https://twitter.com/Snowden/status/1786170805728039127):
>“I've been warning Bitcoin developers for ten years that privacy needs to be provided for at the protocol level. This is the final warning. The clock is ticking.”
Snowden's comments come after heavy enforcement actions against Samourai Wallet, Roger Ver, and Binance's CZ, and now the closure of Wasabi Wallet. Additionally, according to [CryptoBriefing](https://cryptobriefing.com/trezor-coinjoin-shutdown/), Trezor is ending its CoinJoin integration as well. Many are concerned about the new definition of a money transmitter, which covers even those who don't touch the funds.
### Help your favorite hamster
![](https://image.nostr.build/38b8ec45f2bd966fd20de307709beabeaaed3002c2cb71404b792ea5649134c5.jpg)
^^Me drowning in notes on your feed. I can only eat so many notes to find you.
👉If there are any XMPP fans on here: I'm open to the idea of opening a public channel so you could follow me there in a *forum-like style*. My server of choice would likely be a German one.😀 You would receive my articles in an njump.me, website-like style. GrapheneOS users can download the Cheogram app from the F-Droid store for free. Apple and Android users have to pay to download that app; alternatives are **ntalk or Conversations**. *If this interests the community, please comment or DM.*
👉If you enjoyed this content, please consider **reposting/sharing**, as *my content is easily drowned out by the notes* on your feed. You could also *join my community under Children_Zone, where I post my content.*
An alternative is following #onigirl. Just FYI, this feature is currently a little buggy.
Follow along as I search for tools and awesome devs to help you, dear user, live a decentralized life as I explore the realm of Nostr.
Thank you
Fren
-
## Hello (N)oystrs!
![image](https://img.pinstr.app/aHR0cHM6Ly9pbWFnZS5ub3N0ci5idWlsZC83MDRiNTgwOWY1NTg5M2NhNTE0MDBhZjdmOTljYzY4MzM2YmYwNTlhNDY3ZmFlMmYxZjc1Y2NkMDUwNmE3NGMyLmpwZw%3D%3D?w=700&)
### Yes! I'm calling you an (N)oystr!
![image](https://img.pinstr.app/aHR0cHM6Ly9pbWFnZS5ub3N0ci5idWlsZC84ZjFhZjlhOWRjZGRkMTExMmIxNzRjNWY5OTc1Y2YxNTRkM2RkYzQ5M2Y4YWZjNTkxZDEwNmVlMWM0NjExNWU5LmpwZw%3D%3D?w=700&)
Why is that? Because you shine, and I'm not just saying that to get more SATs.
Ordinary oysters and mussels can produce these beauties! There is nothing seriously unique about them, yet with a little time and love each oyster is capable of creating something truly beautiful. I like believing so, at least, and the fact that you're even reading this article makes you an (N)oystr! This isn't published on X (formerly known as Twitter), Facebook, Discord, Telegram, or Instagram, which makes you a rare breed! A pearl indeed! I do have access to those platforms, but why create content on a terrible platform knowing I too could be shut down! Unfortunately, many people still use these platforms, which forces individuals to give up their privacy every day. Meta is leading the charge by forcing users to provide a photo ID for verification in order to use their crappy, obsolete site. And if that were not bad enough, imagine you voice some kind of disagreement or dissenting opinion. Then Big Tech can easily deplatform you. Umm. So no open debate? Just instantly shut off users? Whatever happened to the right to a fair trial? Nope, they just burn you at the stake as if you're a witch or warlock!
![image](https://image.nostr.build/1a11ededaad493046f331401a8462423c272eaf8281d4f0ea17c7420bb5530cb.gif)
How heinous are the perpetrators and financiers of this? Well, that's opening another can of worms for you.
Imagine your voice being taken away, like the little mermaid's. Ariel was lucky to have a prince, but the majority of us? The likelihood that I would be carried away by the current of the sea during a sunset with a prince on a sailboat is zero. And I live on an island, so I'm just missing the prince, the sailboat (though I know where I could go to steal one), and the red hair. Oh my gosh, now I feel sad.
![image](https://img.pinstr.app/aHR0cHM6Ly9pbWFnZS5ub3N0ci5idWlsZC8wNTZkNTVhYmMxNTdlNTcwYWZhYjQwM2I5NmEzZTYzNzA1MTE1NTkwMjIzNTdlYzYzNGEwYTQ3NGNmZWExMzAxLmpwZw%3D%3D?w=700&)
I do not have the prince, Bob is better!
I do not have mermaid fins, or a shell bra. Use coconut shells, it offers more support!
But, I still have my voice and a killer sunset to die for!
All of that is possible thanks to the work of developers. These knights fight for Freedom Tech by building FOSS, which helps provide us with a vibrant ecosystem. Unfortunately, I recently learned that **they are not all *funded***. Knights must eat, drink, and have a work space. This space is where they spend most of their sweat equity on an app or software that may or may not pan out. That brilliance is susceptible to fading, as these individuals are not seen but rather stay behind closed doors. What's worse, what if these developers lose faith in their project and decide to join forces with Meta! 😖 Does WhatsApp ring a bell?
Without them, I probably wouldn't be able to create this long-form article. Let's cheer them on like cheerleaders. 👉Unfortunately, there's no cheerleader emoji, so you'll just have to settle for a dancing lady and guy. 💃🕺
Semisol said it beautifully,
npub12262qa4uhw7u8gdwlgmntqtv7aye8vdcmvszkqwgs0zchel6mz7s6cgrkj
> If we want freedom tech to succeed, the tools that make it possible need to be funded: relays like https://nostr.land, media hosts like https://nostr.build, clients like https://damus.io, etc.
With that thought, Onigirl is pleased to announce the launch of a new series. With a sole focus on free market devs/projects.
# Knights of Nostr!
![Knights of Nostr](https://image.nostr.build/dd6e5cbca87a7eec8d4ddebc8ad982bc5560ccd2415b24d5012ba07eb16796be.png)
I'll happily brief you about their exciting projects and how they benefit humanity! Let's support these magnificent projects, devs, relays, and builders! Our first runner-up!
Oppa Fishcake: Lord of Media Hosting
npub137c5pd8gmhhe0njtsgwjgunc5xjr2vmzvglkgqs5sjeh972gqqxqjak37w
Oppa Fishcake with his noble steed!
![](https://img.pinstr.app/aHR0cHM6Ly9pbWFnZS5ub3N0ci5idWlsZC8wNDgxNWRkM2NmMDkyMzI3MjI4NTAxYzkyNTUyZDE4ODJmYzRhYTYxMzZlN2ZkMWNkZDhiZGNiM2ZmY2ZhZTcyLmpwZw%3D%3D?w=700&)
Think of this as an introduction to learn from and further your experience on Nostr! New developments and applications are constantly happening on Nostr; it's enough to make one's head spin. I may also cover FOSS projects (outside of Nostr), as they need some love as well! Plus, you can think of it as another tool to add to your decentralized life. I will not be doing how-to-Nostr guides; I personally feel there are plenty of great guides already available, which I'm happy to add to a curation collection, easily searchable on Yakihonne.
For email updates you can subscribe to my [Paragraph page](https://paragraph.xyz/@onigirl).
If you like it, send me some 🧡💛💚 hearts💜💗💖 otherwise zap dat⚡⚡🍑🍑peach⚡⚡🍑
~If not me, then at least to our dearest knight!
Thank you from the bottom of my heart for your time and support, (N)oystr! Shine bright like a diamond! Share if you care! FOSS power!
Follow on your favorite Nostr Client for the best viewing experience!
> [!NOTE]
> I'm using Obsidian + the Nostr Writer Plugin, a new way to publish Markdown directly to Nostr.
> I was a little nervous using this because I was used to doing these in RStudio with R Markdown.
> Since this is my first article, I sent it to my account as a draft to test it. It's pretty neat.
-
# Nostr is not decentralized nor censorship-resistant
Peter Todd has been [saying this](nostr:nevent1qqsq5zzu9ezhgq6es36jgg94wxsa2xh55p4tfa56yklsvjemsw7vj3cpp4mhxue69uhkummn9ekx7mqpr4mhxue69uhkummnw3ez6ur4vgh8wetvd3hhyer9wghxuet5qy8hwumn8ghj7mn0wd68ytnddaksz9rhwden5te0dehhxarj9ehhsarj9ejx2aspzfmhxue69uhk7enxvd5xz6tw9ec82cspz3mhxue69uhhyetvv9ujuerpd46hxtnfduq3vamnwvaz7tmjv4kxz7fwdehhxarj9e3xzmnyqy28wumn8ghj7un9d3shjtnwdaehgu3wvfnsz9nhwden5te0wfjkccte9ec8y6tdv9kzumn9wspzpn92tr3hexwgt0z7w4qz3fcch4ryshja8jeng453aj4c83646jxvxkyvs4) for a long time and all the time I've been thinking he is misunderstanding everything, but I guess a more charitable interpretation is that he is right.
Nostr _today_ is indeed centralized.
Yesterday I published two harmless notes with the exact same content at the same time. In two minutes the notes had a noticeable difference in responses:
![](https://blob.satellite.earth/53b3eec9ffaada20b7c27dee4fa7a935adedcc337b9332b619c782b030eb5226)
The top one was published to `wss://nostr.wine`, `wss://nos.lol`, `wss://pyramid.fiatjaf.com`. The second was published to the relay where I generally publish all my notes to, `wss://pyramid.fiatjaf.com`, and that is announced on my [NIP-05 file](https://fiatjaf.com/.well-known/nostr.json) and on my [NIP-65](https://nips.nostr.com/65) relay list.
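For concreteness, here is a minimal sketch (in Python, assuming the `websockets` library and a `signed_event` dict produced elsewhere) of all that "publishing to a set of relays" means at the protocol level: the client sends the same signed event to each relay over a websocket.

```python
# Minimal sketch: publish one pre-signed NIP-01 event to several relays.
import asyncio
import json

import websockets

RELAYS = ["wss://nostr.wine", "wss://nos.lol", "wss://pyramid.fiatjaf.com"]

async def publish(relay_url: str, event: dict) -> None:
    # NIP-01: clients publish by sending ["EVENT", <event>]; the relay
    # answers with an ["OK", <event id>, <accepted>, <message>] frame.
    async with websockets.connect(relay_url) as ws:
        await ws.send(json.dumps(["EVENT", event]))
        print(relay_url, await ws.recv())

async def publish_to_all(event: dict) -> None:
    await asyncio.gather(*(publish(url, event) for url in RELAYS))

# asyncio.run(publish_to_all(signed_event))  # signed_event built elsewhere
```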
A few minutes later I published that screenshot again in two identical notes to the same sets of relays, asking if people understood the implications. The difference in quantity of responses can still be seen today:
![](https://blob.satellite.earth/df993c3fb91eaeff461186248c54f39c2eca3505b68dac3dc9757c77e9373379)
These results are skewed now by the fact that the two notes got rebroadcasted to multiple relays after some time, but the fundamental point remains.
What happened was that a lot more people saw the first note compared to the second, and if Nostr were really censorship-resistant that shouldn't have happened at all.
Some people implied in the comments, with an air of obviousness, that publishing the note to "more relays" should have predictably resulted in more replies, which, again, shouldn't be the case if Nostr is really censorship-resistant.
What happens is that most people who engaged with the note are _following me_, in the sense that they have instructed their clients to fetch my notes on their behalf and present them in the UI, and clients are failing to do that despite me making it clear in multiple ways that my notes are to be found on `wss://pyramid.fiatjaf.com`.
If we were talking not about me, but about some public figure who was being censored by the State and got banned (or shadowbanned) by the 3 biggest public relays, the sad reality would be that the person would immediately see their reach reduced to ~10% of what it was before. This is not at all unlike what happened to dozens of personalities that were banned from the corporate social media platforms and then moved to other platforms -- how many of their original followers switched to these other platforms? Probably some small percentage close to 10%. In that sense, Nostr today is similar to what we had before.
Peter Todd is right that if the way Nostr works is that you just subscribe to a small set of relays and expect to get everything from them then it tends to get very centralized very fast, and this is the reality today.
Peter Todd is wrong that Nostr is _inherently_ centralized or that it needs a _protocol change_ to become what it has always purported to be. He is in fact wrong today, because what is written above is not valid for all clients of today, and if we [drive in the right direction](nostr:naddr1qqykycekxd3nxdpcvgq3zamnwvaz7tmxd9shg6npvchxxmmdqgsrhuxx8l9ex335q7he0f09aej04zpazpl0ne2cgukyawd24mayt8grqsqqqa2803ksy8) we can successfully make Peter Todd be more and more wrong as time passes, instead of the contrary.
---
See also:
- [Censorship-resistant relay discovery in Nostr](nostr:naddr1qqykycekxd3nxdpcvgq3zamnwvaz7tmxd9shg6npvchxxmmdqgsrhuxx8l9ex335q7he0f09aej04zpazpl0ne2cgukyawd24mayt8grqsqqqa2803ksy8)
- [A vision for content discovery and relay usage for basic social-networking in Nostr](nostr:naddr1qqyrxe33xqmxgve3qyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cywwjvq)
-
# Censorship-resistant relay discovery in Nostr
In [Nostr is not decentralized nor censorship-resistant](nostr:naddr1qqyrsdmpxgcrsepeqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c4n8rw6) I said Nostr is centralized. Peter Todd thinks it is centralized by design, but I disagree.
Nostr wasn't designed to be centralized. The idea was always that clients would follow people on the relays they decided to publish to, even if it was a single-user relay hosted on an island in the middle of the Pacific Ocean.
But the Nostr explanations never had any guidance about how to do this, and the protocol itself never had any enforcement mechanisms for any of this (because it would be impossible).
My original idea was that clients would use some undefined combination of relay hints in reply tags and the (now defunct) `kind:2` relay-recommendation events, plus some form of manual action ("it looks like Bob is publishing on relay X, do you want to follow him there?"), to accomplish this, with the expectation that we would get a better idea of how to properly implement it all as we gained experience. Branle, my first working client, didn't have any of that implemented; instead it used a stupid static list of relays with a read/write toggle -- and although it did publish relay hints, keep track of those internally, and support `kind:2` events, these things were not really useful.
[Gossip](https://github.com/mikedilger/gossip) was the first client to implement a [truly censorship-resistant relay discovery mechanism](https://mikedilger.com/gossip-relay-model.mp4) that used NIP-05 hints (originally proposed by [Mike Dilger](nprofile1qqswuyd9ml6qcxd92h6pleptfrcqucvvjy39vg4wx7mv9wm8kakyujgua442w)), relay hints, and `kind:3` relay lists, and then, with the simple insight of [NIP-65](https://nips.nostr.com/65), it got much better. After seeing it in more concrete terms, it became simpler to reason about, and the approach got popularized as the "gossip model", then implemented in clients like [Coracle](https://coracle.social) and [Snort](https://snort.social).
Today when people mention the "gossip model" (or "outbox model") they think only of NIP-65, though. Which I think is ok, but too restrictive. I still think there is a place for the NIP-05 hints, the `nprofile` and `nevent` relay hints, and especially relay hints in event tags. All these mechanisms are used together in [ZBD Social](nostr:naddr1qqyxgvek8qmryc3eqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823chekfst), for example, but I believe also in the clients listed above.
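To make the outbox model concrete, here is a rough sketch under the same toy assumptions as before (Python, the `websockets` library, and a hypothetical `indexer` relay URL that stores `kind:10002` events): the client first fetches the author's NIP-65 relay list, then fetches their notes from the relays announced there for writing.

```python
# Toy outbox-model fetch: find the author's write relays via NIP-65
# (kind 10002), then pull their notes (kind 1) from those relays.
import asyncio
import json

import websockets

async def req(relay_url: str, fltr: dict) -> list[dict]:
    # NIP-01: ["REQ", <sub>, <filter>]; stored events arrive as
    # ["EVENT", <sub>, <event>] frames until ["EOSE", <sub>] signals the end.
    events = []
    async with websockets.connect(relay_url) as ws:
        await ws.send(json.dumps(["REQ", "sub", fltr]))
        while True:
            msg = json.loads(await ws.recv())
            if msg[0] == "EOSE":
                return events
            if msg[0] == "EVENT":
                events.append(msg[2])

async def fetch_notes(pubkey: str, indexer: str) -> list[dict]:
    relay_lists = await req(indexer, {"kinds": [10002], "authors": [pubkey]})
    write_relays = [
        tag[1]
        for tag in relay_lists[0]["tags"]
        if tag[0] == "r" and (len(tag) == 2 or tag[2] == "write")
    ]
    batches = await asyncio.gather(
        *(req(url, {"kinds": [1], "authors": [pubkey]}) for url in write_relays)
    )
    return [note for batch in batches for note in batch]
```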
I don't think we should stop here, though. I think there are other ways, perhaps drastically different ways, to approach content propagation and relay discovery. I think manual action by users is underrated and could go a long way if presented in a nice UX (not one conceived by people who think users are dumb animals), and who knows what else. Reliance on third parties, hardcoded values, the social graph, and especially a mix of multiple approaches is what Nostr needs to be censorship-resistant, and what I hope to see in the future.
-
# home
> "Do you see? Do you see the story? Do you see anything? It seems to me I am trying to tell you a dream -- making a vain attempt, because no relation of a dream can convey the dream-sensation, that commingling of absurdity, surprise, and bewilderment in a tremor of struggling revolt, that notion of being captured by the incomprehensible which is of the very essence of dreams..."
> He was silent for a while.
> "... No, it is impossible; it is impossible to convey the life-sensation of any given epoch of one's existence -- that which makes its truth, its meaning, its subtle and piercing essence. It is impossible. We live, as we dream -- alone..."
* [Books mentioned by Olavo de Carvalho](https://fiatjaf.com/livros-olavo.html)
* [Olavo de Carvalho's old _homepage_](https://site.olavo.fiatjaf.com "Sapientiam autem non vincit malitia")
* [Bitcoin explained in a correct and intelligible way](nostr:naddr1qqrky6t5vdhkjmspz9mhxue69uhkv6tpw34xze3wvdhk6q3q80cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsxpqqqp65wp3k3fu)
* [Complaints](nostr:naddr1qqyrgwf4vseryvmxqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c9f9u03)
---
* [Nostr](-/tags/nostr)
* [Bitcoin](nostr:naddr1qqyryveexumnyd3kqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c7nywz4)
* [How IPFS is broken](nostr:naddr1qqyxgdfsxvck2dtzqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c8y87ll)
* [Programming quibbles](nostr:naddr1qqyrjvehxq6ngvpkqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cu05y0j)
* [Economics](nostr:naddr1qqyk2cm0dehk66trwvq3zamnwvaz7tmxd9shg6npvchxxmmdqgsrhuxx8l9ex335q7he0f09aej04zpazpl0ne2cgukyawd24mayt8grqsqqqa28clr866)
* [Open-source software](nostr:naddr1qqy8xmmxw3mkzun9qyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cmyvl8h)
---
[Nostr](nostr:nprofile1qqsrhuxx8l9ex335q7he0f09aej04zpazpl0ne2cgukyawd24mayt8gpyfmhxue69uhkummnw3ez6an9wf5kv6t9vsh8wetvd3hhyer9wghxuet5fmsq8j) [GitHub](https://github.com/fiatjaf) [Telegram](https://t.me/fiatjaf) [Donate](lnurlp://zbd.gg/.well-known/lnurlp/fiatjaf)
-
In an era where data seems to be as valuable as currency, the prevailing trend in AI starkly contrasts with the concept of personal data ownership. The explosion of AI and the ensuing race have made it easy to overlook where the data is coming from. The current model, dominated by big tech players, involves collecting vast amounts of user data and selling it to AI companies for training LLMs. Reddit recently penned a $60 million deal, Google guards and mines YouTube, and more are going in this direction. But is that their data to sell? Yes, it's on their platforms, but without the users to generate it, what would they monetize? To me, this practice raises significant ethical questions, as it assumes that user data is a commodity that companies can exploit at will.
The heart of the issue lies in the ownership of data. Why, in today's digital age, do we not retain ownership of our data? Why can't our data follow us, under our control, to wherever we want to go? These questions echo the broader sentiment that while some in the tech industry — such as the blockchain-first crypto bros — recognize the importance of data ownership, their "blockchain for everything solutions," to me, fall significantly short in execution.
Reddit further complicates this with its current move to IPO, which, on the heels of the large data deal, might reinforce the mistaken belief that user-generated data is a corporate asset. Others, no doubt, will follow suit. This underscores the urgent need for a paradigm shift towards recognizing and respecting user data as personal property.
In my perfect world, the digital landscape would undergo a revolutionary transformation centered around the empowerment and sovereignty of individual data ownership. Platforms like Twitter, Reddit, Yelp, YouTube, and Stack Overflow, integral to our digital lives, would operate on a fundamentally different premise: user-owned data.
In this envisioned future, data ownership would not just be a concept but a practice, with public and private keys ensuring the authenticity and privacy of individual identities. This model would eliminate the private data silos that currently dominate, where companies profit from selling user data without consent. Instead, data would traverse a decentralized protocol akin to the internet, prioritizing user control and transparency.
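As a toy illustration of that premise (not any existing platform's API; it assumes Python with the PyNaCl library), a user-held keypair, rather than a platform account, is what authenticates content, so authorship can be verified anywhere the data travels:

```python
# Toy example: a keypair the user controls signs their content,
# so any platform or reader can verify authorship without a silo.
import json

from nacl.signing import SigningKey

user_key = SigningKey.generate()  # secret key stays on the user's device

post = json.dumps({
    "author": user_key.verify_key.encode().hex(),  # public identity
    "content": "my post",
}).encode()

signed = user_key.sign(post)  # portable proof of authorship

# Verification needs only the public key; raises BadSignatureError if forged.
user_key.verify_key.verify(signed)
```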
The cornerstone of this world would be a meritocratic digital ecosystem. Success for companies would hinge on their ability to leverage user-owned data to deliver unparalleled value rather than their capacity to gatekeep and monetize information. If a company breaks my trust, I can move to a competitor, and my data, connections, and followers will come with me. This shift would herald an era where consent, privacy, and utility define the digital experience, ensuring that the benefits of technology are equitably distributed and aligned with the users' interests and rights.
The conversation needs to shift fundamentally. We must challenge this trajectory and advocate for a future where data ownership and privacy are not just ideals but realities. If we continue on our current path without prioritizing individual data rights, the future of digital privacy and autonomy is bleak. Big tech's dominance allows them to treat user data as a commodity, potentially selling and exploiting it without consent. This imbalance has already led to users being cut off from their digital identities and connections when platforms terminate accounts, underscoring the need for a digital ecosystem that empowers user control over data. Without changing direction, we risk a future where our content — and our freedoms by consequence — are controlled by a few powerful entities, threatening our rights and the democratic essence of the digital realm. We must advocate for a shift towards data ownership by individuals to preserve our digital freedoms and democracy.
-
# Overview
1. Introduction
2. Model Types
3. Training (Data Collection and Config Settings)
4. Probability Viewing: AI Inspector
5. Match
6. Cheat Sheet
# I. Introduction
AI Arena is the first game built on collaboration between human and artificial intelligence.
AI learns your skills through "imitation learning."
## Official Resources
1. Official Documentation (Must Read): [Everything You Need to Know About AI Arena](https://docs.aiarena.io/everything-you-need-to-know)
Watch the 2-minute video in the documentation to quickly understand the basic flow of the game.
2. Official Play-2-Airdrop competition FAQ Site
https://aiarena.notion.site/aiarena/Gateway-to-the-Arena-52145e990925499d95f2fadb18a24ab0
3. Official Discord (Must Join): https://discord.gg/aiarenaplaytest for the latest announcements or for seeking help. The team also has an exclusive channel there.
4. Official YouTube: https://www.youtube.com/@aiarena. Because the game has built-in tutorials, watching the videos is optional.
## What is this game about?
1. Although categorized as a platform fighting game, the core is a probability-based strategy game.
2. Warriors take actions based on probabilities on the AI Inspector dashboard, competing against opponents.
3. The game does not allow directly entering probabilities by hand for each area; instead, you feed it information through data collection and build models by adjusting parameters.
4. Data collection emulates fighting games, but training can be completed using a Dummy. As long as you can complete the in-game tutorial, you can master the game controls.
# II. Model Types
Before training, there are three model types to choose from: Simple Model Type, Original Model Type, and Advanced Model Type.
**It is recommended to try the Advanced Model Type after completing at least one complete training with the Simple Model Type and gaining some understanding of the game.**
![ModelType](https://yakihonne.s3.ap-east-1.amazonaws.com/8ce092d8ca3461fb4512e151d655a22b5b95ab09ad766683d84df9a3950c24ad/files/1706794690011-YAKIHONNES3.png)
## Simple Model Type
The Simple Model is akin to completing a form, and the training session is comparable to filling various sections of that form.
This model has 30 buckets, each of which can be seen as telling the warrior what action to take in a specific situation: 30 buckets means 30 different scenarios. Within the same bucket, the probabilities for direction or action are the same.
For example: what should I do when I'm off-stage? Refer to the "Recovery (you off-stage)" bucket.
**For all buckets, refer to this official documentation:**
https://docs.aiarena.io/arenadex/game-mechanics/tabular-model-v2
**Video (no sound): The entire training process for all buckets**
https://youtu.be/1rfRa3WjWEA
The video was recorded on game version 2024.1.10; the method of saving shown there is outdated, so please refer to the latest game updates.
## Advanced Model Type
The "Original Model Type" and "Advanced Model Type" are based on Machine Learning, which is commonly referred to as combining with AI.
The Original Model Type consists of only one bucket, representing the entire map. If you want the AI to learn different scenarios, you need to choose a "Focus Area" to let the warrior know where to focus. A single bucket means that a slight modification can have a widespread impact on the entire model. This is where the "Advanced Model Type" comes in.
The "Advanced Model Type" can be seen as a combination of the "Original Model Type" and the "Simple Model Type". The Advanced Model Type divides the map into 8 buckets. Each bucket can use many "Focus Area." For a detailed explanation of the 8 buckets and different Focus Areas, please refer to the tutorial page (accessible in the Advanced Model Type, after completing a training session, at the top left of the Advanced Config, click on "Tutorial").
![Tutorial](https://yakihonne.s3.ap-east-1.amazonaws.com/8ce092d8ca3461fb4512e151d655a22b5b95ab09ad766683d84df9a3950c24ad/files/1706836749653-YAKIHONNES3.png)
# III. Training (Data Collection and Config Settings)
## Training Process:
1. **Collect Data**
2. **Set Parameters, Train, and Save**
3. **Repeat Step 1 until the Model is Complete**
Training the Simple Model Type is the easiest to start with; refer to the video above for a detailed process.
Training the Advanced Model Type offers more possibilities through the combination of "Focus Area" parameters, providing a higher upper limit. While the Original Model Type has great potential, it's harder to control. Therefore, this section focuses on the "Advanced Model Type."
## 1. What Kind of Data to Collect
1. **High-Quality Data:** Collect purposeful data. Garbage in, garbage out. Only collect the necessary data; don't collect randomly. It's recommended to use a Dummy to collect data. However, don't pursue perfection; through parameter adjustments, the AI has a certain level of fault tolerance.
2. **Balanced Data:** Balance your dataset. In simple terms, if you complete actions on the left side a certain number of times, also complete a similar number on the right side. While data imbalance can be addressed through parameter adjustments (see below), it's advised not to have this issue during data collection.
3. **Moderate Amount:** A single training will include many individual actions. Collect data for each action 1-10 times; personally, I recommend 2-3 times per action. If the effect of a single training is not clear, conduct a second (or even third) training with the same content but different parameter settings.
## 2. What to Collect (and Focus Area Selection)
Game actions mimic fighting games, consisting of 4 directions + 6 states (Idle, Jump, Attack, Grab, Special, Shield). Directions can be combined into ↗, ↘, etc. These directions and states can then be combined into different actions.
To make "Focus Area" effective, you need to collect data in training that matches these parameters. For example, for "Distance to Opponent", you need to collect data when close to the opponent and also when far away.
\* Note: While you can split into multiple training sessions, it's most effective to cover different situations within a single training.
Refer to the Simple Config, categorize the actions you want to collect, and based on the game scenario, classify them into two categories: "Movement" and "Combat."
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8ce092d8ca3461fb4512e151d655a22b5b95ab09ad766683d84df9a3950c24ad/files/1706801997176-YAKIHONNES3.png)
### Movement-Based Actions
#### Action Collection
When the warrior is offstage, regardless of where the opponent is, we require the warrior to return to the stage to prevent self-destruction.
This involves 3 aerial buckets: 5 (Near Blast Zone), 7 (Under Stage), and 8 (Side Of Stage).
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8ce092d8ca3461fb4512e151d655a22b5b95ab09ad766683d84df9a3950c24ad/files/1706803352081-YAKIHONNES3.png)
\* Note: The background comes from the Tutorial mentioned earlier. The arrows in the image indicate the direction of the action and are for reference only.
\* Note: Action collection should be clean; do not collect actions that involve leaving the stage.
#### Config Settings
In the Simple Config, you can directly choose "Movement".
However, for better customization, it's recommended to use the Advanced Config directly.
- Intensity: The method for setting Intensity will be introduced separately later.
- Buckets: As shown in the image, choose the bucket you are training.
- Focus Area: Position-based parameters:
- Your position (must)
- Raycast Platform Distance, Raycast Platform Type (optional, generally choose these in Bucket 7)
### Combat-Based Actions
**The goal is to direct attacks quickly and effectively towards the opponent, which is the core of game strategy.**
This involves 5 buckets:
- 2 regular situations
- In the air: 6 (Safe Zone)
- On the ground: 4 (Opponent Active)
- 3 special situations on the ground:
- 1 Projectile Active
- 2 Opponent Knockback
- 3 Opponent Stunned
#### 2 Regular Situations
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8ce092d8ca3461fb4512e151d655a22b5b95ab09ad766683d84df9a3950c24ad/files/1706858648630-YAKIHONNES3.png)
In the in-game tutorial, we learned how to perform horizontal attacks. However, in the actual game, directions expand to 8 dimensions. Imagine having 8 relative positions available for launching hits against the opponent. Our task is to design what action to use for attack or defense at each relative position.
**Focus Area**
- Basic (generally select all)
- Angle to opponent
- Distance to opponent
- Discrete Distance: Choosing this option helps better differentiate between closer and farther distances from the opponent. As shown in the image, red indicates a relatively close distance, and green indicates a relatively distant distance.
- Advanced: Other commonly used parameters
- Direction: different facings relative to the opponent
- Your Elemental Gauge and Discrete Elementals: Considering the special's charge
- Opponent action: The warrior will react based on the opponent's different actions.
- Your action: Your previous action. Choose this if teaching combos.
#### 3 Special Situations on the Ground
Projectile Active, Opponent Stunned, Opponent Knockback
These three buckets can be referenced in the Simple Model Type video. The parameter settings approach is the same as Opponent Active/Safe Zone.
For Projectile Active, in addition to the parameters based on combat, to track the projectile, you also need to select "Raycast Projectile Distance" and "Raycast Projectile On Target."
### 3. Setting "Intensity"
#### Resources
- The "Tutorial" mentioned earlier explains these parameters.
- Official Config Document (2022.12.24):
https://docs.google.com/document/d/1adXwvDHEnrVZ5bUClWQoBQ8ETrSSKgG5q48YrogaFJs/edit
---
#### TL;DR:
**Epochs:**
- Too few epochs leads to insufficient learning; increase them for more learning.
**Batch Size:**
- Set to the minimum (16) if data is precise but unbalanced, or if you just want it to learn fast.
- Increase (e.g., 64) if data is slightly imprecise but balanced.
- If both imprecise and unbalanced, consider retraining.
**Learning Rate:**
- Maximize (0.01) for more learning but a risk of forgetting past knowledge.
- Minimize for more accurate learning with less impact on previous knowledge.
**Lambda:**
- Reduce for prioritizing learning new things.
**Data Cleaning:**
- Enable "Remove Sparsity" unless you want AI to learn idleness.
- For special cases, like teaching the warrior to use special moves when idle, refer to this tutorial video: https://discord.com/channels/1140682688651612291/1140683283626201098/1195467295913431111
**Personal Experience:**
- Initial training settings: 125 epochs, batch size 16, learning rate 0.01, lambda 0, data cleaning enabled (restated as a sketch after this list).
- Prioritize Multistream, sometimes use Oversampling.
- Fine-tune subsequent training based on the mentioned theories.
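Since AI Arena exposes these settings through its UI rather than a file, the following is only a hypothetical restatement of the starting configuration above as data, to keep the knobs and their directions in one place:

```python
# Hypothetical shorthand for the initial training settings described above.
initial_config = {
    "epochs": 125,            # more epochs -> more learning
    "batch_size": 16,         # minimum; raise toward 64 for imprecise-but-balanced data
    "learning_rate": 0.01,    # maximum; lower it to protect earlier learning
    "lambda": 0,              # lower lambda prioritizes learning new things
    "remove_sparsity": True,  # data cleaning on, unless teaching idleness
    "strategy": "multistream",  # prioritized; sometimes oversampling
}
```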
# IV. Probability Viewing: AI Inspector
The dashboard consists of "Direction + Action."
Above the dashboard, you can see the "Next Action" – the action the warrior will take in its current state.
The higher the probability, the more likely the warrior is to perform that action, indicating a quicker reaction.
It's essential to note that when checking the Direction, the one with the largest visual representation may not have the highest numerical value. To determine the actual value, hover the mouse over the graphical representation; in the example below, the direction with the highest value is "Idle."
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8ce092d8ca3461fb4512e151d655a22b5b95ab09ad766683d84df9a3950c24ad/files/1706835513414-YAKIHONNES3.png)
In the map, you can drag the warrior to view the probabilities of the warrior in different positions. Right-click on the warrior with the mouse to change the warrior's facing. The status bar below can change the warrior's state on the map.
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8ce092d8ca3461fb4512e151d655a22b5b95ab09ad766683d84df9a3950c24ad/files/1706835605108-YAKIHONNES3.png)
When training the "Opponent Stunned, Opponent Knockback" bucket, you need to select the status below the opponent's status bar. If you are focusing on "Opponent action" in the Focus Zone, choose the action in the opponent's status bar. If you are focusing on "Your action" in the Focus Zone, choose the action in your own status bar. When training the "Projectile Active" Bucket, drag the projectile on the right side of the dashboard to check the status.
**Next**
The higher the probability, the faster the reaction. However, be cautious when an action's probability reaches 100%: this may put the warrior in a special case of "State Transition," resulting in unnecessary "Idle" states.
> Explanation:
> In each state a fighter is in, there are different "possible transitions". For example, from falling state you cannot do low sweep because low sweep requires you to be on the ground. For the shield state, we do not allow you to directly transition to headbutt. So to do headbutt you have to first exit to another state and then do it from there (assuming that state allows you to do headbutt). This is the reason the fighter runs because "run" action is a valid state transition from shield.
[Source](https://discord.com/channels/848599369879388170/1079903287760928819/1160049804844470292)
# V. Learn from Matches
After completing all the training, your model is preliminarily finished -- congratulations! The warrior will step into the arena alone and make its debut!
Next, we will learn about the strengths and weaknesses of the warrior from battles to continue refining the warrior's model.
In matches, besides appreciating the performance, pay attention to the following:
1. **Movement, i.e., Off the Stage:** Observe how the warrior gets eliminated. Is it due to issues in the action settings at a certain position, or is it a normal death caused by a high percentage? The former is what we need to avoid and optimize.
2. **Combat:** Analyze both sides' actions carefully. Observe which actions you and the opponent used in different states. Check which of your hits are less effective, and how does the opponent handle different actions, etc.
The approach to battle analysis is similar to the thought process in the "Training" section, helping you develop a more comprehensive understanding of the warrior's performance and make targeted improvements.
# VI. Cheat Sheet
**Training**
1. Click "Collect" to collect actions.
2. "Map - Data Limit" is more user-friendly. Most players perform initial training on the "Arena" map.
3. Switch between the warrior and the dummy: Tab key (keyboard) / Home key (controller).
4. Use "Collect" to make the opponent loop a set of actions.
5. Instantly move the warrior to a specific location: Click "Settings" - SPAWN - Choose the desired location on the map - On. Press the Enter key (keyboard) / Start key (controller) during training.
**Inspector**
1. Right-click on the fighter to change their direction. Drag the fighter and observe the changes in different positions and directions.
2. When satisfied with the training, click "Save."
3. In "Sparring" and "Simulation," use "Current Working Model."
4. If satisfied with a model, then click "compete." The model used in the rankings is the one marked as "competing."
**Sparring / Ranked**
1. Use the Throneroom map only for the top 2 or top 10 rankings.
2. There is a 30-second cooldown between matches. Replays are available for every match. Once the battle begins, you can see the winner on the leaderboard or by right-clicking the page - Inspect - Console. Also, if you encounter any errors or bugs, please send screenshots of the console to the Discord server.
Good luck! See you in the arena!
-
# Nostr: a quick introduction, attempt #1
![](https://miro.medium.com/v2/resize:fit:1100/format:webp/0*TyaSRBLhkTNgEoIJ)
Nostr doesn't have a material existence; it is not a website or an app. Nostr is just a description of what kind of messages each computer can send to the others and vice versa. It's a very simple thing, but the fact that such a description exists allows different apps to connect to different servers automatically, without people having to talk behind the scenes or sign contracts or anything like that.
When you use a Nostr _client_, that is what happens: your _client_ will connect to a bunch of servers, called _relays_, and all these _relays_ will speak the same "language", so your _client_ will be able to publish notes to them all and also download notes from other people.
That's basically what Nostr is: this communication layer between the _client_ you run on your phone or desktop computer and the _relay_ that someone else is running on some server somewhere. There is no central authority dictating who can connect to whom or even anyone who knows for sure where each note is stored.
If you think about it, Nostr is very much like the internet itself: there are millions of websites out there, and basically anyone can run a new one, and there are websites that allow you to store and publish your stuff on them.
The added benefit of Nostr is that this unified "language" that all Nostr _clients_ speak allows them to switch very easily and cleanly between _relays_. So if one _relay_ decides to ban someone, that person can switch to publishing to other _relays_ and their audience will quickly follow them there. Likewise, it becomes much easier for _relays_ to impose any restrictions they want on their users: no _relay_ has to uphold a moral ground of "absolute free speech". Each _relay_ can decide to delete notes or ban users for no reason, or even only store notes from a preselected set of people, and no one will be entitled to complain about that.
There are some bad things about this design: on Nostr there are no guarantees that _relays_ will have the notes you want to read or that they will store the notes you're sending to them. We can't just assume all _relays_ will have everything -- much to the contrary, as Nostr grows, more _relays_ will exist and people will tend to publish to a small subset of all the _relays_. So, depending on the decisions each _client_ takes when publishing and when fetching notes, users may see a different set of replies to a note, for example, and be confused.
Another problem with the idea of publishing to multiple servers is that they may be run by all sorts of malicious people who may edit your notes. Since no one wants to see garbage published under their name, Nostr fixes that by requiring notes to have a cryptographic signature. This signature is attached to the note and verified by everybody at all times, which ensures the notes weren't tampered with (if any part of the note is changed, even by a single character, the signature becomes invalid and the note is dropped). The fix is perfect, except that it introduces the requirement that each user must now hold a 63-character code that starts with "nsec1", which they must not reveal to anyone. Although annoying, this requirement brings another benefit: users automatically have the same identity in many different contexts and can even use their Nostr identity to log in to non-Nostr websites easily, without having to rely on any third party.
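To make the tamper-proofing concrete, here is a sketch of the check every client performs. Per NIP-01, a note's id is the SHA-256 hash of a canonical serialization of its fields, and the signature (BIP-340 Schnorr, whose verification is left to a library and elided here) commits to that id, so changing a single character breaks both:

```python
# Sketch: recompute a Nostr event's id to detect tampering (NIP-01).
import hashlib
import json

def event_id(event: dict) -> str:
    # Canonical serialization: [0, pubkey, created_at, kind, tags, content]
    payload = json.dumps(
        [0, event["pubkey"], event["created_at"],
         event["kind"], event["tags"], event["content"]],
        separators=(",", ":"), ensure_ascii=False,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

def looks_untampered(event: dict) -> bool:
    # If even one character of the note changed, the id no longer matches
    # (and the Schnorr signature over the id fails verification as well).
    return event["id"] == event_id(event)
```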
To conclude: Nostr is like the internet (or the internet of some decades ago): a little chaotic, but very open. It is better than the internet because it is structured and actions can be automated, but, like on the internet itself, nothing is guaranteed to work at all times, and users may have to do some manual work from time to time to fix things. Plus, there is the cryptographic key stuff, which is painful, but cool.
-
# Small problems the State creates for society that are not always remembered
- **transit vouchers**: transferring the cost of the employee's commute to a third party encourages them to live far from where they work, since living nearby is usually more expensive and the savings on transportation are nonexistent.
- **doctor's notes**: the right to miss work with a doctor's note creates a demand for that note in every situation, replacing free agreement between employer and employee and overloading doctors and health clinics with unnecessary visits from salaried workers with colds.
- **prisons**: with mismanaged money, bureaucracy and terrible resource allocation -- problems that competing private companies (or even ones facing no competition at all) would know how to solve much better -- the State runs out of prisons, the few existing ones are crammed far beyond their maximum capacity, and so, following the bizarre chain of responsibility that blames the judge who sentenced a criminal for his death in jail, judges stop sentencing criminals to prison and release them onto the streets.
- **courts**: filing lawsuits is free, and this multiplies the activity of lawyers who dedicate themselves to creating legal problems where none would be needed and to clogging the courts, preventing them from doing what they most ought to do.
- **courts**: since state justice only obeys the law and ignores personal agreements, written or not, people stop making agreements, always resort to state justice, and clog it with matters that would be much better resolved between neighbors.
- **civil laws**: the laws created by legislators ignore the customs of society and are an incentive for people neither to respect nor to create social norms -- which would be faster, cheaper and more satisfying ways of solving problems.
- **traffic laws**: the more traffic laws there are, the more enforcement duties are delegated to police officers, who stop fighting crime as a result (after all, they don't really want to risk their lives fighting crime, and traffic enforcement is an excellent excuse to dodge that responsibility).
- **education financing**: a kind of subsidy to private colleges that leads to the creation of course after course, each ever emptier of any useful knowledge or skill and ever more useless.
- **heritage-protection laws**: an incentive for the owner of any "historical" site or building to destroy every trace of history in it before the authorities find out, which might not happen if he could, for example, use, display and profit from the history of that place without running the risk of actually losing his property.
- **urban zoning**: makes cities more spread out, creating a gigantic need for cars, buses and other means of transportation for people to move between residential zones and working zones.
- **urban zoning**: makes people lose hours in traffic every day, which, besides being a waste, is an assault on their health, which would be much better served by a daily walk between home and work.
- **urban zoning**: makes streets and houses less safe by creating enormous zones, residential as well as industrial, with no foot traffic at all.
- **mandatory schooling + national school curriculum**: dumbs down all children.
- **child labor laws**: take away from children the opportunity to learn useful trades and bring home some money to help their families.
- **public procurement**: since the criteria of the market don't exist to decide who the best service provider is, committees of people are created to decide things. This encourages the service providers competing for the contract to try to buy the members of those committees. Beyond the corruption itself, this creates real problems: __(i)__ the services chosen end up being the worst possible, since the winning company is clearly more dedicated to buying committees than to doing a good job (this problem affects so many areas, from road construction to the quality of school lunches, that it is impossible to list them here); __(ii)__ in the long run, the corrupting process eliminates the companies that actually delivered and leaves only the corrupt ones competing, so quality tends to decline progressively.
- **cartels**: the State generally creates, and then becomes hostage to, various interest groups. The case of taxi drivers against Uber is the fashionable one today (and the one that shows how States behave the same way all over the world).
- **fines**: when an individual or company commits financial fraud or causes some unintentional material damage, the victims are the people who suffered the damage or lost money, but the State always has laws providing for fines against those responsible. State justice is always very strict and quick in applying those fines, but negligent and vague when it comes to compensating the victims. What usually happens is that the State imposes an enormous fine on the party responsible for the harm, stripping them of the resources they had to compensate the victims, and then withdraws from the case, leaving the victims helpless.
- **expropriation**: the State can take any property from any person in exchange for compensation that is necessarily lower than the property's value to its current owner (otherwise he would have sold it voluntarily).
- **unemployment insurance**: if there is, for example, a minimum period of 1 year of employment before someone is entitled to unemployment benefits, this encourages them to plan on staying only 1 year in each job (a year to be followed by a period of paid unemployment), killing every possibility of learning, of acquiring experience at that specific company, or of rising through its ranks.
- **social security**: social security has every calculation flaw in the world, and it doesn't even matter much that it is a horrible way to save money, because it comes with bizarre longevity guarantees provided by the State, besides being compulsory. It serves to plant in the general imagination the idea of __retirement__, a magical era when every day will be a weekend. The idea of retirement leads people to worry not about having a job that makes sense, but about having any job at all that will let them retire.
- **impossible regulation**: thousands of things are forbidden; there are regulations on the most minute aspects of every enterprise, building or space. If all these regulations were enforced, production would be impossible and everyone would die. Therefore they are not enforced. However, the State, or an individual agent wielding state power, can, if it so wishes, enforce all of them against a citizen it regards as an enemy. Anyone can live their whole life without complying with even 10% of state regulations, but they will also live all that time in fear of becoming a target of their enforcement, in a state of psychological terror.
- **perversion of criteria**: for many things about which society would normally arrive at some "reasonable" value or behavior spontaneously, the State dictates rules. These rules are often not mandatory; they are more like "suggestions" or limits, such as the minimum wage or the 44-hour work week. Society, however, starts treating these values as if they were the norm. Job offers that depart from the 44-hour-week rule, for example, are rare.
- **inflation**: raising prices is difficult and embarrassing for companies; asking for a raise is difficult and embarrassing for employees. Inflation forces people to do both, yet the adjustment is not automatic, as some economists may think (while certain others are quite pleased that the process is slow and difficult).
- **inflation**: inflation destroys people's ability to judge prices between competitors using their own memory.
- **inflation**: inflation destroys companies' profit/loss calculations and enormously harms the business decisions that would be based on them.
- **inflation**: inflation redistributes wealth from the poorest, those furthest from the financial system, to the richest: the banks and the megacorporations.
- **inflation**: inflation encourages indebtedness and consumerism.
- **garbage**: by providing garbage collection and storage "free for everyone", the State encourages the creation of garbage. If they had to pay to have their garbage collected, people (and consequently companies) would try harder to produce things using less plastic, less packaging, fewer bags.
- **laws against financial crimes**: by creating legislation to make access to the financial system harder for criminals, the difficulty and cost of accessing that same system for honest people grows absurdly, leaving an enormous share of people unable to use it, to everyone's detriment -- and in the end the big criminals still manage to get around all of it.
-
# Drivechain
Understanding Drivechain requires a shift from the paradigm most bitcoiners are used to. It is not about "trustlessness" or "mathematical certainty", but game theory and incentives. (Well, Bitcoin in general is also that, but people prefer to ignore it and focus on some illusion of trustlessness provided by mathematics.)
Here we will describe the basic mechanism (simple) and incentives (complex) of ["hashrate escrow"](https://github.com/bitcoin/bips/blob/master/bip-0300.mediawiki) and how it enables a 2-way peg between the mainchain (Bitcoin) and various sidechains.
The full concept of "Drivechain" also involves blind merged mining (i.e., the sidechains mine themselves by publishing their block hashes to the mainchain without the miners having to run the sidechain software), but this is much easier to understand and can be accomplished either by [the BIP-301 mechanism](https://github.com/bitcoin/bips/blob/master/bip-0301.mediawiki) or by [the Spacechains mechanism](https://gist.github.com/RubenSomsen/5e4be6d18e5fa526b17d8b34906b16a5).
## How does hashrate escrow work from the point of view of Bitcoin?
A new address type is created. Anything that goes into it is locked and can only be spent if all miners agree, over a 6-month voting period, on the _Withdrawal Transaction_ (`WT^`) that will spend it. There is one of these special addresses for each sidechain.
To gather miners' agreement, `bitcoind` keeps track of the "score" of all transactions that could possibly spend from that address. On every block mined, for each sidechain, the miner can use a portion of their coinbase to either increase the score of one `WT^` by 1 while decreasing the score of all others by 1; or decrease the score of all `WT^`s by 1; or do nothing.
Once a transaction has gotten a score high enough, it is published and funds are effectively transferred from the sidechain to the withdrawing users.
If a timeout of 6 months passes and the score doesn't meet the threshold, that `WT^` is discarded.
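A toy model of that voting procedure (not actual `bitcoind` code; the threshold and window below are placeholders, since the real parameters come from BIP-300) might look like this:

```python
# Toy hashrate-escrow scoring: each block a miner may upvote one candidate
# WT^ (downvoting the rest), downvote all candidates, or abstain.
THRESHOLD = 13_150  # placeholder score needed for payout
WINDOW = 26_300     # placeholder ~6 months of blocks before a WT^ expires

class Escrow:
    def __init__(self) -> None:
        self.scores: dict[str, int] = {}  # candidate WT^ id -> score
        self.ages: dict[str, int] = {}    # candidate WT^ id -> blocks waited

    def add_candidate(self, wt: str) -> None:
        self.scores[wt] = 0
        self.ages[wt] = 0

    def new_block(self, vote: str | None) -> None:
        for wt in self.scores:
            self.ages[wt] += 1
            if vote == "downvote-all":
                self.scores[wt] -= 1
            elif vote is not None:  # upvote one, downvote all others
                self.scores[wt] += 1 if wt == vote else -1
            # vote is None -> abstain; scores unchanged
        for wt in list(self.scores):
            if self.scores[wt] >= THRESHOLD:
                print(f"{wt} can now be published and pays out")
                self._drop(wt)
            elif self.ages[wt] >= WINDOW:  # timed out, discarded
                self._drop(wt)

    def _drop(self, wt: str) -> None:
        del self.scores[wt], self.ages[wt]
```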
## What does the above procedure _mean_?
It means that people can transfer coins from the mainchain to a sidechain by depositing to the special address. They can later withdraw from the sidechain by making a special withdrawal transaction in the sidechain.
The special transaction somehow freezes funds in the sidechain while all withdrawals are aggregated into a single mainchain `WT^`, which is then submitted to the mainchain miners so they can start voting on it and, finally, after some months, it is published.
Now the crucial part: _the validity of the `WT^` is not verified by the Bitcoin mainchain rules_. That is, if Bob has requested a withdrawal from the sidechain to his mainchain address, but someone publishes a wrong `WT^` that instead takes Bob's funds and sends them to Alice's mainchain address, there is no way the mainchain will know that. What determines the "validity" of the `WT^` is the miner vote score and only that. It is the job of miners to vote correctly -- and for that they may want to run the sidechain node in SPV mode so they can attest to the existence of a reference to the `WT^` transaction in the sidechain blockchain (which then ensures it is ok), or do these checks by some other means.
## What? 6 months to get my money back?
Yes. But no: in practice, anyone who wants their money back sooner will be able to use an atomic swap, submarine swap, or similar service to transfer funds between the sidechain and the mainchain. The costs of the long, delayed withdrawal would be incurred by a few liquidity providers, who would earn a small profit from it.
## Why bother with this at all?
Drivechains solve many different problems:
### It enables experimentation and new use cases for Bitcoin
Issued assets, fully private transactions, stateful blockchain contracts, Turing-completeness, decentralized games, some "DeFi" aspects, prediction markets, futarchy, decentralized and yet meaningful human-readable names, big blocks with a ton of normal transactions on them, a chain optimized only for Lightning-style networks to be built on top of it.
These are some ideas that may have merit, but were never _actually_ tried, because they couldn't be tried with real Bitcoin or while interfacing with real bitcoins. They were either relegated to shitcoin territory or to custodial solutions like Liquid or RSK, which may have failed to gain network effect because of that.
### It solves conflicts and infighting
Some people want fully private transactions in a UTXO model, others want "accounts" they can tie to their name and build reputation on top; some people want simple multisig solutions, others want complex code that reads a ton of variables; some people want to put all the transactions on a global chain in batches every 10 minutes, others want off-chain instant transactions backed by funds previously locked in channels; some want to spend, others want to just hold; some want to use blockchain technology to solve all the problems in the world, others just want to solve money.
With Drivechain-based sidechains, all these groups can be happy simultaneously and don't need to fight. Meanwhile they will all be using the same money and contributing to each other's ecosystems, even if unwillingly; it is also easy and free for them to change group affiliation later, which reduces cognitive dissonance.
### It solves "scaling"
Multiple chains like the ones described above would certainly do a lot to accommodate many more transactions than the current Bitcoin chain can. One could have special Lightning Network chains, but even just big-block chains, or big-block-Mimblewimble chains, or whatnot could probably do a good job. Or even something less cool, like 200 independent chains just like Bitcoin is today, with no extra features (you could call it "sharding"): that alone would already multiply the current total capacity by 200.
Use your imagination.
### It solves the blockchain security budget issue
The calculation is simple: imagine what security budget is reasonable for each block in a world without the block subsidy, and divide that by the number of bytes you can fit in a single block: that is the price to be paid in _satoshis per byte_. Under any reasonable estimate, the price necessary for every Bitcoin transaction rises to very large amounts, such that not only does any day-to-day transaction carry insanely prohibitive costs, but even Lightning channel opens and closes become impracticable.
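As a back-of-the-envelope illustration with made-up but plausible numbers (the point is the order of magnitude, not precision):

```python
# Hypothetical numbers: keep today's ~3.125 BTC per block as the security
# budget, paid entirely from fees over ~1 MB of usable block space.
security_budget_sats = 3.125 * 100_000_000
block_bytes = 1_000_000

sats_per_byte = security_budget_sats / block_bytes
print(sats_per_byte)        # ~312 sat/byte just to sustain the budget

# A simple ~200-byte transaction would then pay ~62,500 sats in fees,
# which is prohibitive for day-to-day payments (and channel management).
print(200 * sats_per_byte)
```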
So without a solution like Drivechain you'll be left with only one alternative: pushing Bitcoin usage to trusted services like Liquid and RSK or custodial Lightning wallets. With Drivechain, though, there could be thousands of transactions happening in sidechains and being all aggregated into a sidechain block that would then pay a very large fee to be published (via blind merged mining) to the mainchain. Bitcoin security guaranteed.
### It keeps Bitcoin decentralized
Once we have sidechains to accommodate the normal transactions, the mainchain functionality can be reduced to being only a "hub" for the sidechains' comings and goings, and then the maximum block size for the mainchain can be reduced to, say, 100kb, which would make running a full node very, very easy.
## Can miners steal?
Yes. If a group of coordinated miners are able to secure the majority of the hashpower and keep their coordination for 6 months, they can publish a `WT^` that takes the money from the sidechains and pays to themselves.
## Will miners steal?
No, because the incentives are such that they won't.
Although it may look at first that stealing is an obvious strategy for miners as it is free money, there are many costs involved:
1. The cost of **ceasing blind-merged mining returns** -- as stealing will kill a sidechain, all the fees from it that miners would be expected to earn for the next years are gone;
2. The cost of **Bitcoin price going down**: If a steal is successful that will mean Drivechains are not safe, therefore Bitcoin is less useful, and miner credibility will also be hurt, which are likely to cause the Bitcoin price to go down, which in turn may kill the miners' businesses and savings;
3. The cost of **coordination** -- assuming miners are just normal businesses, they just want to do their work and get paid, but stealing from a Drivechain will require coordination with other miners to conduct an immoral act in a way that has many pitfalls and is likely to be broken over the months;
4. The cost of **miners leaving your mining pool**: when we talked about "miners" above we were actually talking about mining pools operators, so they must also consider the risk of miners migrating from their mining pool to others as they begin the process of stealing;
5. The cost of **community goodwill** -- when participating in a steal operation, a miner will suffer a ton of backlash from the community. Even if the attempt fails in the end, the fact that it was attempted will feed growing concerns over exaggerated miner power over the Bitcoin ecosystem, which may end up causing the community to agree on a hard fork to change the mining algorithm in the future, or to do something to increase the participation of more entities in the mining process (such as the development or cheapening of new ASICs), which would have a chance of decreasing the profits of current miners.
Another point to take into consideration is that one may be inclined to think a newly created sidechain, or a sidechain with relatively low usage, may be more easily stolen from, since the blind-merged-mining returns from it (point 1 above) are going to be small -- but a sidechain with small usage will also have less money to steal, and since the other costs are less elastic, in the end it will not be worth stealing from these either.
All of the above considerations are valid only if miners are stealing from _good sidechains_. If there is a sidechain that is doing things wrong, scamming people, not being used at all, or is full of bugs, for example, it will be perceived as a bad sidechain, and then miners can and will safely steal from it and kill it, which will be perceived as a good thing by everybody.
## What do we do if miners steal?
Paul Sztorc has suggested in the past that a user-activated soft-fork could prevent miners from stealing, i.e., most Bitcoin users and nodes issue a rule [similar to this one](https://twitter.com/LukeDashjr/status/1126221228182843398) to invalidate the inclusion of a faulty `WT^` and thus cause any miner that includes it in a block to be relegated to their own Bitcoin fork that other nodes won't accept.
This suggestion has made people think Drivechain is a sidechain solution _backed by user-activated soft-forks for safety_, which is very far from the truth. Drivechains must not and will not rely on this kind of soft-fork, although such forks are possible, as the coordination costs are too high and no one should ever expect these things to happen.
If, even with all the incentives against them (see above), miners still steal from a _good sidechain_, that will mean _the failure of the Drivechain experiment_. It will very likely also mean _the failure of the Bitcoin experiment_: it will have been proven that miners can coordinate to act maliciously over a prolonged period of time regardless of economic and social incentives, meaning they are probably in it just to attack Bitcoin, backed by nation-states or something else, and therefore no Bitcoin transaction on the mainchain can be expected to be safe ever again.
## Why use this and not a full-blown trustless and open sidechain technology?
Because it is impossible.
If you ever heard someone saying "just use a sidechain", "do this in a sidechain" or anything like that, be aware that these people are either talking about "federated" sidechains (i.e., funds are kept in custody by a group of entities), or they are talking about Drivechain, or they are deluded and think it is possible to do sidechains in some other manner.
### No, I mean a trustless 2-way peg with correctness of the withdrawals verified by the Bitcoin protocol!
That is not possible unless Bitcoin verifies all transactions that happen in all the sidechains, which would be akin to drastically increasing the blocksize and expanding the Bitcoin rules in tons of ways, i.e., a terrible idea that no one wants.
### What about the Blockstream sidechains whitepaper?
Yes, that was a way to do it. The Drivechain hashrate escrow is a conceptually simpler way to achieve the same thing, with improved incentives, less junk in the chain, and more safety.
## Isn't the hashrate escrow a very complex soft-fork?
Yes, but it is much simpler than SegWit. And, unlike SegWit, it doesn't force anything on users, i.e., it isn't a mandatory blocksize increase.
## Why should we expect miners to care enough to participate in the voting mechanism?
Because it's in their own self-interest to do it, and it costs very little. Today over half of the miners mine RSK. That's not blind merged mining, it's a [very convoluted process that requires them to run an RSK full node](https://developers.rsk.co/rsk/architecture/mining/implementation-guide/). For the Drivechain sidechains an SPV node would be enough, or maybe just getting data from a block explorer API, which is much simpler.
## What if I still don't like Drivechain even after reading this?
That is the entire point! You don't have to like it or use it as long as you're fine with other people using it. The hashrate escrow special addresses will not impact you at all, validation cost is minimal, and you get the benefit of people who want to use Drivechain migrating to their own sidechains and freeing up space for you in the mainchain. See also the point above about infighting.
## See also
* [Podcast episode with Ruben Somsen and Aaron van Wirdum explaining Drivechain](https://www.youtube.com/watch?v=DhU6nsB5Z-0)
* [Alternatives to Drivechain](nostr:naddr1qqyrqenzvvukvcfkqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823csjg2t6)
* [Drivechain comparison with Ethereum](nostr:naddr1qqyx2dp58qcx2wpjqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cane7px)
-
# `bitcoind` decentralization
It is better to have multiple curator teams, with different vetting processes and release schedules for `bitcoind`, than a single one.
"More eyes on code", "Contribute to Core", "Everybody should audit the code".
All these points, repeated again and again, fell to Earth on the day it was discovered that Bitcoin Core developers had merged a variable name change from "blacklist" to "blocklist" without even discussing or acknowledging the fact that that innocent pull request, opened by a sybil account, was a social attack.
After a lot of people expressed their dissatisfaction with that event on Twitter and on GitHub, most Core developers simply ignored everybody's concerns or even personally attacked the people who were complaining.
The event has shown that:
1) Bitcoin Core ultimately rests in the hands of a couple of maintainers, and they decide what goes into the GitHub repository[^pr-merged-very-quickly] and the binary releases that will be downloaded by thousands;
2) Bitcoin Core is susceptible to social attacks;
3) "More eyes on code" don't matter, as these extra eyes can be ignored and dismissed.
## Solution: `bitcoind` decentralization
If usage were spread across 10 different `bitcoind` flavors, the network would be much more resistant to social attacks against a single team.
This has nothing to do with the question of whether it is better to have multiple different Bitcoin node implementations, because here we're basically talking about the same software.
Multiple teams, each with their own release process, their own logo, some subtle changes, or perhaps no changes at all, just a different name for their `bitcoind` flavor, and that's it.
Every day or week or month or year, each flavor merges all changes from Bitcoin Core on their own fork. If there's anything suspicious or too leftist (or perhaps too rightist, in case there's a leftist `bitcoind` flavor), maybe they will spot it and not merge.
This way we keep the best of both worlds: all software development, bugfixes and improvements happen on Bitcoin Core, and the other flavors just copy. If there's some non-consensus change whose efficacy is debatable, one of the flavors can merge it on their fork and test it, and later others -- including Core -- can copy it too. Plus, we become resistant to attacks: in case there is an attack on Bitcoin Core, only 10% of the network would be compromised, the other flavors would be safe.
## Run Bitcoin Knots
The first example of a `bitcoind` software that follows Bitcoin Core closely, adds some small changes, but has an independent vetting and release process is [Bitcoin Knots][knots], maintained by the incorruptible Luke DashJr.
Next time you decide to run `bitcoind`, run Bitcoin Knots instead and contribute to `bitcoind` decentralization!
---
### See also:
- [How to attack Bitcoin, Anthony Towns' take](nostr:naddr1qqyrywphxdskzwp5qyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cwx779x)
[^pr-merged-very-quickly]: See [PR 20624](https://github.com/bitcoin/bitcoin/pull/20624), for example, a very complicated change that [could be introducing bugs or be a deliberate attack](http://www.erisian.com.au/wordpress/2021/01/07/bitcoin-in-2021), merged in 3 days without time for discussion.
[knots]: https://bitcoinknots.org/
-
# On "zk-rollups" applied to Bitcoin
ZK rollups make no sense in Bitcoin because there is no "cheap calldata". All data is already ~~cheap~~ expensive calldata.
There could be an onchain zk verification that allows succinct signatures maybe, but never a rollup.
What happens is: you can have one UTXO that contains multiple balances, and in each transaction you recreate that UTXO but alter its state, using a zk proof to compress all the internal transactions that took place.
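A toy sketch of that construct (plain Python, every name invented; `zk_proof_placeholder` stands in for a real proving system):

```python
import hashlib, json

def h(data) -> str:
    """Commitment to a state: hash of its canonical JSON encoding."""
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()

# one UTXO commits to many internal balances
state = {"alice": 50_000, "bob": 20_000}
utxo_commitment = h(state)

def apply_internal_transfers(state, transfers):
    """The operator applies internal transactions offchain, then publishes a
    new UTXO committing to the new state, plus a zk proof that the transition
    was valid. zk_proof_placeholder stands in for a real proving system."""
    new_state = dict(state)
    for sender, receiver, amount in transfers:
        assert new_state[sender] >= amount
        new_state[sender] -= amount
        new_state[receiver] = new_state.get(receiver, 0) + amount
    proof = "zk_proof_placeholder"
    return new_state, h(new_state), proof

state, utxo_commitment, proof = apply_internal_transfers(
    state, [("alice", "bob", 1_000)]
)
```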
The blockchain must be aware of all these new things, so it is in no way "L2".
And you must have an entity responsible for that UTXO and for conjuring the state changes and zk proofs.
But on bitcoin you also must keep the data necessary to rebuild the proofs somewhere else, and I'm not sure how the third party responsible for that UTXO can ensure that happens.
I think such a construct is similar to a credit card corporation: one central party upon which everybody depends, zero interoperability with external entities, every vendor having to open an account with each credit card company to be able to charge customers. It is therefore not clear that such a thing is more desirable than solutions that are truly open and interoperable, like Lightning, which may have its defects but at least fosters a much better environment, bringing together different conflicting parties, custodians, anyone.
-
# The problem with ION
[ION](https://techcommunity.microsoft.com/t5/identity-standards-blog/ion-we-have-liftoff/ba-p/1441555) is a [DID method](nostr:naddr1qqyrjwrpv93rjcf4qyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cuxp7vx) based on a thing called "Sidetree".
I can't say for sure what the problem with ION is, because I don't understand its design, even though I have read all I could and asked everybody I knew. All available information only touches on the high-level aspects of it (and of course its amazing wonders) and no one has ever bothered to explain the details. I've also asked the main designer of the protocol, Daniel Buchner, but he may have thought I was trolling him on Twitter and refused to answer, instead pointing me to an incomplete spec on the Decentralized Identity Foundation website that I had already read before. I even tried to join the DIF as a member so I could join their closed community calls and hear what they say, and maybe eventually ask a question, so I could understand it, but my application was ignored; then, after many months and a nudge from another member, I was told I had to go through a KYC process to be admitted, which I refused.
**One thing I know is**:
- ION is supposed to provide a way to _rotate keys_ seamlessly and automatically without losing the main identity (and the ION proponents also claim there are no "master" keys because these can also be rotated).
- ION is also _not a blockchain_, i.e. it doesn't have a deterministic consensus mechanism, and it is decentralized, i.e. anyone can publish data to it, there doesn't have to be a single central server, there may be holes in the available data, and the protocol doesn't treat that as a problem.
- From all we know about years of attempts to scale Bitcoin and develop offchain protocols, it is clear that _you can't solve the double-spend problem without a central authority or a kind of blockchain_ (i.e. a decentralized system with deterministic consensus).
- _Key rotation also suffers from the double-spend problem_: whenever you rotate a key it is as if it were "spent"; you aren't supposed to be able to use it again.
The logical conclusion of the four assumptions above is that ION is flawed: it can't provide the key rotation it says it can if it is not a blockchain.
## See also
- [Excerpt of discussion about DIDs and ION](nostr:naddr1qqyrydtpx33nsvpcqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823ccx33ee)
-
# rosetta.alhur.es
A service that grabs code samples from two chosen languages on [RosettaCode](http://rosettacode.org/wiki/Rosetta_Code) and displays them side-by-side.
The code-fetching is done in real time and snippet-by-snippet (there is also a prefetch of which snippets are available in each language, so we only compare apples to apples).
This was my first Golang web application if I remember correctly.
- <https://rosetta.alhur.es/>
- <https://github.com/fiatjaf/rosetta.alhur.es>
-
# Thoughts on Nostr key management
On [Why I don't like NIP-26 as a solution for key management](nostr:naddr1qqyrgceh89nxgdmzqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823ctgmx78) I talked about multiple techniques that could be used to tackle the problem of key management on Nostr.
Here are some ideas that work in tandem:
- [NIP-41](https://github.com/nostr-protocol/nips/blob/master/41.md) (stateless key invalidation)
- [NIP-46](https://github.com/nostr-protocol/nips/blob/master/46.md) (Nostr Connect)
- [NIP-07](https://github.com/nostr-protocol/nips/blob/master/07.md) (signer browser extension)
- [Connected hardware signing devices](https://lnbits.github.io/nostr-signing-device/installer/)
- other things like musig or frostr keys used in conjunction with a semi-trusted server; or other kinds of trusted software, like a dedicated signer on a mobile device that can sign on behalf of other apps; or even a separate protocol that some people decide to use as the source of truth for their keys, and some clients might decide to use that automatically
- there are probably many other ideas
Some premises I have in mind (that may be flawed), which underlie my thoughts on these matters (and cause me to not worry too much), are that:
- For the vast majority of people, Nostr keys aren't a target as valuable as Bitcoin keys, so they will probably be ok even without any solution;
- Even when you lose everything, identity can be recovered -- slowly and painfully, but still --, unlike money;
- Nostr is not trying to replace all other forms of online communication (even though when I think about this I can't imagine one thing that wouldn't be nice to replace with Nostr) or of offline communication, so there will always be ways.
- For the vast majority of people, losing keys and starting fresh isn't a big deal. It is a big deal when you have followers and an online persona and your life depends on that, but how many people are like that? In the real world I see people deleting social media accounts all the time and creating new ones, people losing their phone numbers or other accounts associated with their phone numbers, and not caring very much -- they just find a way to notify friends and family and move on.
We can probably come up with some specs to ease the "manual" recovery process, like social attestation and explicit signaling -- i.e., Alice, Bob and Carol are friends; Alice loses her key; Bob sends a new Nostr event kind to the network saying what Alice's new key is; depending on how much Carol trusts Bob, she can automatically start following the new key and remove the old one -- or something like that.
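For illustration, such an attestation event could look something like this (the kind number and tag names are entirely made up; no such NIP exists):

```python
import json, time

# Hypothetical "key migration attestation" event, published by Bob:
attestation = {
    "kind": 30100,                      # invented kind number
    "pubkey": "<bob's pubkey>",
    "created_at": int(time.time()),
    "tags": [
        ["p", "<alice's old pubkey>"],  # the key that was lost
        ["new-key", "<alice's new pubkey>"],
    ],
    "content": "I attest that Alice moved to a new key.",
}
print(json.dumps(attestation, indent=2))
# Carol's client sees this, weighs how much it trusts Bob, and can then
# update its follow list automatically.
```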
---
One nice thing about some of these proposals, like NIP-41, the social-recovery method, or the external-source-of-truth method, is that they don't have to be implemented in any client: they can live in standalone single-purpose microapps that users open or visit only every now and then, and these can then automatically update their follow lists with the latest news from keys that have changed, according to multiple methods.
-
# lnurl-auth explained
You may have seen the [lnurl-auth](https://github.com/btcontract/lnurl-rfc/blob/master/lnurl-auth.md) spec or heard about it, but might not know how it works or what its relationship is with other [lnurl](https://github.com/fiatjaf/awesome-lnurl) protocols. This document attempts to solve that.
## Relationship between lnurl-auth and other lnurl protocols
First, **what is the relationship of lnurl-auth with other lnurl protocols?** The answer is none, except the fact that they all share the lnurl format for specifying `https` URLs.
In fact, lnurl-auth is unique in the sense that it doesn't even need a Lightning wallet to work; it is a standalone authentication protocol that can work anywhere.
## How does it work
Now, **how does it work?** The basic idea is that each wallet has a seed, which is a random value (you may think of the BIP39 seed words, for example). Usually from that seed different keys are derived, each of these yielding a Bitcoin address, and also from that same seed may come the keys used to generate and manage Lightning channels.
What lnurl-auth does is to generate a new key from that seed, and from that a new key for each service (identified by its domain) you try to authenticate with.
![lnurl-auth per-service key derivation illustrated](static/lnurlauth-keys.png)
That way, you effectively have a new identity for each website. Two different services cannot associate your identities.
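A minimal sketch of that derivation idea (the real spec prescribes a specific BIP32 path computed from an HMAC over the domain; this only illustrates the concept):

```python
import hashlib, hmac

def per_service_key(auth_root_key: bytes, domain: str) -> bytes:
    """Derive a key that is unique per service: the same wallet seed and
    the same domain always yield the same key, while different domains
    yield unlinkable keys."""
    return hmac.new(auth_root_key, domain.encode(), hashlib.sha256).digest()

# in practice this would come from the wallet's BIP39 seed
root = hashlib.sha256(b"wallet seed material").digest()

key_for_site_a = per_service_key(root, "site-a.com")
key_for_site_b = per_service_key(root, "site-b.com")
assert key_for_site_a != key_for_site_b  # identities can't be correlated
```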
**The flow goes like this:** When you visit a website, the website presents you with a QR code containing a _callback URL_ and a _challenge_. The challenge should be a random value.
![lnurl-auth services issuing challenges](static/lnurlauth-challenge.png)
When your wallet scans or opens that QR code it uses the _domain_ in the callback URL plus the _main lnurl-auth key_ to derive a key specific for that website, uses that key to sign the challenge, and then sends both the public key specific to that website and the signed challenge to the specified URL.
![lnurl-auth services receiving signatures from wallet](static/lnurlauth-signature.png)
When the service receives the public key it checks it against the challenge signature and starts a session for that user. The user is then **identified only by their public key**. If the service wants, it can of course request more details from the user, associate them with an internal id or username; it is free to do anything. lnurl-auth's goals end here: no passwords, maximum possible privacy.
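On the service side, the whole check can be sketched like this (using the `coincurve` secp256k1 library as one possible choice; the exact hashing and encoding details are defined by the spec and by the library's defaults, so treat this as a sketch):

```python
from coincurve import PublicKey  # one possible secp256k1 library choice

def verify_login(k1_hex: str, sig_hex: str, key_hex: str) -> bool:
    """Check that `sig` is a valid signature over the random challenge k1,
    made by the per-service linking key the wallet sent along."""
    pubkey = PublicKey(bytes.fromhex(key_hex))
    return pubkey.verify(bytes.fromhex(sig_hex), bytes.fromhex(k1_hex))

# if verification passes, the service just starts a session keyed by the
# public key -- no password, no email, nothing else needed
```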
## FAQ
* What is the advantage of tying this to Bitcoin and Lightning?
One big advantage is that your wallet is already keeping track of one seed; it is already a precious thing. If you had to keep track of a separate auth seed it would be arguably worse, and bootstrapping the protocol would be more difficult -- which is arguably one of the reasons similar protocols, past and present, weren't successful.
* Just signing in to websites? What else is this good for?
No, it can be used for authenticating to installable apps and physical places, as long as there is a service running an HTTP server somewhere to read the signature sent from the wallet. But yes, signing in to websites is the main problem to solve here.
* Phishing attack! Can a malicious website proxy the QR from a third website and show it to the user, so it will steal the signature and be able to log in to the third website?
No, because the wallet will only talk to the callback URL, and it will either be controlled by the third website, so the malicious website won't see anything; or it will have a different domain, so the wallet will derive a different key and frustrate the malicious website's plan.
* I heard [SQRL](https://sqrl.grc.com/) had that same idea and it went nowhere.
Indeed. SQRL in its first version was basically the same thing as lnurl-auth, with one big difference: it was vulnerable to phishing attacks (see above). That was basically the only criticism it got anywhere, so the protocol creators decided to solve it by introducing complexity to the protocol. While they were at it they decided to add more complexity for managing accounts, and so much more crap, that the spec, which was initially a single page, ended up becoming 136 pages of highly technical gibberish. Then all the initial network effect it had, the libraries and apps, were trashed, and nowadays no one can do anything with it (but, [see](https://sqrl.grc.com/threads/developer-documentation-conflicted-and-confusing-please-help-clarify.951/), there are still people who love the protocol, writing in a 90's forum with no clue of anything besides their own Java).
* We don't need this, we need WebAuthn!
[WebAuthn](https://webauthn.guide/) is essentially the same thing as lnurl-auth, but instead of being simple it is complex, instead of being open and decentralized it is centralized in big corporations, and instead of relying on a key generated by your own device it requires an expensive hardware HSM that you must buy, trusting the manufacturer. If you like WebAuthn and you like Bitcoin, you should like lnurl-auth much more.
* What about [BitID](https://github.com/bitid/bitid)?
This is another one that is [very similar](https://www.youtube.com/watch?v=3eepEWTnRTc) to lnurl-auth, but without the anti-phishing protection and the extra privacy given by deriving a different key for each service.
* What about LSAT?
It doesn't compete with lnurl-auth. LSAT, as far as I understand it, is for when you're buying individual resources from a server, not authenticating as a user. Of course, LSAT can be repurposed as a general authentication tool, but then it will lack features that lnurl-auth has, like the property of having keys generated independently by the user from a common seed and a standard way of passing authentication info from one medium to another (like signing in to a website at the desktop from the mobile phone, for example).
-
# The infinite library
I have now forgotten the name of the Jorge Luis Borges short story in which that library is described, as well as its specific details. I had read the story and never noticed that it settled the question of whether randomness is capable of producing valuable things. I actually needed [Wikipedia](https://en.wikipedia.org/wiki/Infinite_monkey_theorem) to tell me that.
Some years ago I raised this question to a group of friends without knowing it was such a worn-out, lowly question. In my example it was a dog walking over drawn letters, not a monkey at a typewriter. My conclusion from the discussion was that no matter what the dog wrote, without an intelligence capable of understanding it, none of it would ever be more than random letters.
Borges settles everything by imagining a library that contains everything the dog wrote throughout the entire infinity in which it performed the experiment, and which therefore contains all knowledge about everything and every possible literary work -- but between each very good, or at least readable, page or sentence there are tons of completely random books, and a person can spend a lifetime inside that library, which contains so much important knowledge, and still learn nothing, because they will never find the right books.
> Everything would be in its blind volumes. Everything: the detailed history of the future, Aeschylus' The Egyptians, the exact number of times that the waters of the Ganges have reflected the flight of a falcon, the secret and true nature of Rome, the encyclopedia Novalis would have constructed, my dreams and half-dreams at dawn on August 14, 1934, the proof of Pierre Fermat's theorem, the unwritten chapters of Edwin Drood, those same chapters translated into the language spoken by the Garamantes, the paradoxes Berkeley invented concerning Time but didn't publish, Urizen's books of iron, the premature epiphanies of Stephen Dedalus, which would be meaningless before a cycle of a thousand years, the Gnostic Gospel of Basilides, the song the sirens sang, the complete catalog of the Library, the proof of the inaccuracy of that catalog. Everything: but for every sensible line or accurate fact there would be millions of meaningless cacophonies, verbal farragoes, and babblings. Everything: but all the generations of mankind could pass before the dizzying shelves – shelves that obliterate the day and on which chaos lies – ever reward them with a tolerable page.
I have the impression that the gigantic outpouring of articles, posts, books and everything else is turning the world into that library. There is so much to read that it is hard to find what is worthwhile. People need to stop writing.
-
# The Cause
Menger's _Principles of Political Economy_ is the only book that emphasizes the CAUSE at all times. Scientists all seem not to know, or to always forget, that things have causes, and that true knowledge is knowledge of the causes of things.
Cause is a metaphysical category far superior to any correlation or hypothesis-test result; it cannot be discovered by any econometric artifice or reduced to mere statistical temporal antecedence. The cause of phenomena cannot be proven scientifically, but it can be known.
Menger's book tells the reader the causes of various economic phenomena and interconnects them in such a way that the chaotic world of the economy seems to acquire an order the moment you read it. It is a magical, indescribable feeling.
When I recommended it to you, what I wanted was to imbue you with the spirit of the search for the causes of things. After reading it, you are able to perceive causal continuity in the most complex phenomena of today's economy, to see the causes running between every government action and its various consequences for human life. I do this every day, and it is the best feeling in the world when the chaos of the news in the newspaper's Economy section -- which to the very journalist who wrote it makes no sense at all (so much so that he gets everything wrong) -- falls into place within an ordered system of causes and consequences.
I am probably always wrong on some or many points, but even so it is marvelous. Or it is even more marvelous when I discover my error and reinsert the correction into that beautiful rationalization of the order of the economic world, which is the order of God.
_from a scrap to T.P._
-
# IPFS problems: Shitcoinery
IPFS was advertised to the Ethereum community since the beginning as a way to "store" data for their "dApps". I don't think this is harmful in any way, but for some reason it may have led IPFS developers to focus too much on Ethereum stuff. I once watched a talk showing libp2p developers -- despite being ignored by the Ethereum team (which ended up creating its own agnostic p2p library) -- dedicating an enormous amount of work to getting a libp2p app running in the browser, talking to a normal Ethereum node.
The always somewhat-abandoned "Awesome IPFS" site is a big repository of "dApps", some of which don't even have their landing pages up anymore: useless Ethereum smart contracts that for some reason use IPFS to store whatever useless data their users produce.
Again, per se it isn't a problem that Ethereum people are using IPFS, but it is at least confusing, maybe misleading, that when you search for IPFS most of the use-cases are actually Ethereum useless-cases.
## See also
* [Bitcoin](nostr:naddr1qqyryveexumnyd3kqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c7nywz4), the only non-shitcoin
-
# Why IPFS cannot work, again
Imagine someone comes up with a solution for P2P content-addressed data-sharing that involves storing all the files' contents on all computers of the network. That wouldn't work, right? Too much data; if you think this can work then you're a BSV enthusiast.
Then someone comes up with the idea of not storing everything on all computers, but only some things on some computers, based on some algorithm that determines what data a node stores given its pubkey or something like that. Still wouldn't work, right? Still too much data no matter how much you spread it, but mostly the incentives are not aligned; it would implode on the first day.
Now imagine someone says they will do the same thing, but instead of storing the full contents each node will store only a pointer to where each piece of data is actually available. Does that make it better? Hardly. You're just moving the problem.
This is IPFS.
Now you have less data on each computer, but on a global scale that is still a lot of data.
No incentives.
And now you have the problem of finding the data. First, if you have some data you want the world to access, you have to broadcast information about it, flooding the network -- and everybody has to keep doing this continuously for every single file (or shard of a file) that is available.
And then whenever someone wants some data, they must find the people who know about it, which means they will flood the network with requests that get passed from peer to peer until they reach the correct peer.
The more you force each peer to store, the worse it becomes to run a node and to store data on behalf of others -- but the less you force each peer to store, the more flooding you'll have on the global network, and the slower it will be for anyone to actually get any file.
---
But if everybody just saves everything to Infura or Cloudflare then it works, magic decentralized technology.
## Related
- [How IPFS is broken](nostr:naddr1qqyxgdfsxvck2dtzqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c8y87ll)
-
# Who will build the roads?
Who will build the roads? In Lagoa Santa, the newest and best streets -- which in fact end up forming enormous webs of interconnected neighborhoods -- are built by the land developers who want the streets so their lots will be worth more -- and who want other people to use the streets too. It is also these same developers who put up the light poles and lay the water pipes, though not without first having to submit to the customary extortions practiced by COPASA and CEMIG.
If, when opening a subdivision, a condominium or a building, an individual or a company can run streets, electricity, water and sewage without much trouble, why couldn't there be free competition in these markets? Even that old story that it is inefficient to lay duplicate power cables so that electric companies can compete now seems like nonsense to me.
-
# How IPFS is broken
I once fell for this talk about "content-addressing". It sounds very nice. You know a certain file exists, you know there are probably people who have it, but you don't know where or whether it is hosted on some domain. With content-addressing you can just say "start" and the download will begin. You don't have to care.
Other magic properties that address common frustrations: webpages don't go offline, links don't break, valuable content always finds its way, other people will distribute your website for you, any content can be transmitted easily to people near you without anyone having to rely on third-party centralized servers.
But you know what? Saying a thing is good doesn't automatically make it possible and working. For example: saying stuff is addressed by its content doesn't change the fact that the internet is "location-addressed" and you still have to know where the peers that have the data you want are, and connect to them.
And what is the solution for that? A DHT!
DHT?
----
Turns out DHTs have a terrible incentive structure (as you would expect: no one wants to hold and serve data they don't care about to others for free), and the IPFS experience proves it doesn't work even in a network as small as today's IPFS.
If you have run an IPFS client you'll notice how much it clogs your computer. Or maybe you don't, if you are very rich and have a really powerful computer, but still, it's not something suitable to be run across the entire world, on web pages, servers and mobile devices. I imagine a lot of unoptimized code and technical debt is responsible for these and other problems, but the DHT is certainly the biggest part of it. IPFS can open up to 1000 connections by default and suck up all your bandwidth -- and that's just for exchanging keys with other DHT peers.
Even if you're in "client" mode and limit your connections, you'll still get overwhelmed by connections that do stuff I don't understand -- and it makes no sense to run an IPFS node as a client anyway: that defeats the entire purpose of making every person host the files they have, and of content-addressability in general; it centralizes the network and brings back the client/server dichotomy that IPFS was created to replace.
Connections?
------------
So, DHTs are a fatal flaw for a network that plans to be big and interplanetary. But that's not the only problem.
Finding content on IPFS is the _slowest experience ever_, and for some reason I don't understand, downloading is even slower. Even if you are on the same LAN as another machine that has the content you need, it will still take hours to download some small file that you could transfer in seconds with `scp` -- and that's assuming IPFS managed to find the other machine at all, otherwise your command will just hang for days.
Now even if you ignore that IPFS objects should be content-addressable and not location-addressable and, knowing which peer has the content you want, you go there and explicitly tell IPFS to connect to that peer directly, maybe you can get some seconds of (slow) download, but then IPFS will drop the connection and the download will stop. Sometimes -- but not always -- it helps to add the peer address to your bootstrap node list (though this isn't something you should be doing at all).
IPFS Apps?
----------
Now consider the kind of marketing IPFS does: it tells people to build "apps" on IPFS. It sponsors "databases" on top of IPFS. It basically advertises itself as a place where developers can just connect their apps to and all users will automatically be connected to each other, data will be saved somewhere between them all and immediately available, everything will work in a peer-to-peer manner.
Except it doesn't work that way at all. "libp2p", the IPFS library for connecting peers, is broken and is rewritten every 6 months, but they keep their beautiful landing pages that say everything works magically and you can just plug it in. I'm not saying they should have everything perfect, but at least they should be honest about what they truly have in place.
It's impossible to connect to other people; after years there's no js-ipfs and go-ipfs interoperability (and yet they advertise there will be python-ipfs, haskell-ipfs, whoknowswhat-ipfs); connections get dropped, and many other problems.
So basically all the IPFS "apps" out there are just apps that want to connect two peers but can't do it manually, because browsers and the IPv4/NAT network don't provide easy ways to do so, and WebRTC is hard and requires servers. They have nothing to do with "content-addressing" anything; they are not trying to build "a forest of merkle trees" nor to distribute or archive content so it can be accessed by all. I don't understand why IPFS has changed its core message to this "full-stack p2p network" thing instead of the basic content-addressable idea.
IPNS?
-----
And what about the database stuff? How can you "content-address" a database whose values are supposed to change? Their approach is to just save all values, past and present, and then use new DHT entries to communicate what the newest value is. This is the IPNS thing.
Apparently just after coming up with the idea of content-addressability the IPFS folks realized it would never be able to replace the normal internet, as no one would even know what kinds of content existed or when some content was updated -- and they didn't want to coexist with the normal internet, they wanted to replace it all, because that message is bolder and gets more funding, maybe?
So they invented IPNS, the name system that introduces location-addressability back into the system that was supposed to be only content-addressable.
And how do they manage to do it? Again, DHTs. And does it work? Not really. It's limited, slow, much slower than normal content-addressed fetches, and most of the time it doesn't even work after hours. Still, although developers will tell you it is not working yet, the IPFS marketing talks about it as if it were a thing.
Archiving content?
------------------
The main use case I had for IPFS was to store content that I personally cared about and that other people might care too, like old articles from dead websites, and videos, sometimes entire websites before they're taken down.
So I did that. Over many months I archived stuff on IPFS. The IPFS API and CLI don't make it easy to track where stuff is. The `pin` command doesn't help, as it just throws your pinned hash into a sea of hashes and subhashes and you're never able to find again what you have pinned.
The IPFS daemon has a fake filesystem that is half-baked in functionality but allows you to locally address things by name in a tree structure. It's very hard to update or add new things to it, but still doable. It allows you to give names to hashes, basically. I even began to write a wrapper for it, but suddenly, after many weeks of careful content curation and distribution, all my entries in the fake filesystem were gone.
Despite not having lost any of the files, I _did_ lose everything, as I couldn't find them in the sea of hashes on my own computer. After some digging, and with help from IPFS developers, I managed to recover part of it, but it involved hacks. My things had vanished because of a bug in the fake filesystem. The bug was fixed, but soon after I experienced a similar (new) bug. After that I even tried to build a service for hash archival and discovery, but as all the problems listed above began to pile up, I eventually gave up. There were also problems of content canonicalization, of the code the IPFS daemon uses to serve default HTML content over HTTP, of the IPFS browser extension, and others.
Future-proof?
-------------
One of the core advertised features of IPFS was that it made content future-proof. I'm not sure they used that expression, but basically: you have content, you hash it, you get an address for that content that never expires, and now everybody can refer to the same thing by the same name. Actually, it's better: content is split and hashed into a merkle tree, so there's fine-grained deduplication, people can store only chunks of files, and when a file is to be downloaded lots of people can serve it at the same time, like torrents.
But then come the protocol upgrades. IPFS has used different hashing algorithms and different ways of formatting the hashes, and it will change the default algorithm for building the merkle trees, so basically the same content now has a gigantic number of possible names/addresses, which defeats the entire purpose -- and no, files hashed using different strategies aren't automagically compatible.
Actually, the merkle algorithm could have been changed by each person on a file-by-file basis since the beginning (you could, for example, split a book file by chapter or page instead of by chunks of bytes) -- although probably no one ever did that. I know it's not easy to come up with the perfect hashing strategy on the first go, but the way these matters are being approached makes me think the IPFS promoters aren't really worried about future-proofing, or maybe we're just in a beta phase forever.
Ethereum?
---------
This is also a big problem. IPFS is built by Ethereum enthusiasts. I can't read the minds of the people behind IPFS, but I would imagine they have a poor understanding of incentives, like the Ethereum people, and they tend towards scammer-like behavior: getting a ton of funds from investors in exchange for promises they don't know they can fulfill (like Filecoin and IPFS itself) based on half-truths, changing stuff in the middle of the road because some top managers decided they wanted change (move fast and break things), and squatting fancy names like "distributed web".
The way they market IPFS (which is not the main thing IPFS was initially designed to do) as a "peer-to-peer cloud" is very seductive for Ethereum developers, just like Ethereum itself is: a place _somewhere_ that will run your code for you so you don't have to host a server or have any responsibility, and then Infura will serve the content to everybody. In the same vein, Infura is also hosting and serving IPFS content for Ethereum developers these days, for free. Ironically, just like the Ethereum hoax of peer-to-peer money, the IPFS peer-to-peer network may begin to work better for end users as things get more and more centralized.
### More about IPFS problems:
* [IPFS problems: Too much immutability](nostr:naddr1qqyrqen9xf3nvdpeqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cmdjnnj)
* [IPFS problems: General confusion](nostr:naddr1qqyr2wf4xvcrwvnyqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823chvx94h)
* [IPFS problems: Shitcoinery](nostr:naddr1qqyxxdpev5cnsvpkqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cta4a2e)
* [IPFS problems: Community](nostr:naddr1qqyrxdmrvvengdmrqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cekkch5)
* [IPFS problems: Pinning](nostr:naddr1qqyrgvf3xcenydesqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c9ckstx)
* [IPFS problems: Conceit](nostr:naddr1qqyxyeryx93kxv3nqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cwxek0q)
* [IPFS problems: Inefficiency](nostr:naddr1qqyr2dp3vsmx2vpsqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823ckklcy6)
* [IPFS problems: Dynamic links](nostr:naddr1qqyrvcnx8y6nwwtpqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cz8tlwh)
### See also
* [A crappy course on torrents](nostr:naddr1qqyrwdfevfjnxefcqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cuskhxf), on the protocol that has done most things right
* [The Tragedy of IPFS in a series of links](https://mobile.twitter.com/fiatjaf/status/1289246273669697536), an ongoing Twitter thread.
-
# Lagoa Santa: how to get there -- starting from the Belo Horizonte bus station
When you step off your bus at the Belo Horizonte bus station at a little past 4 in the morning, you will come face to face with a cowboy drinking beer in full regalia at a bar right in the arrivals area. Climb the staircase to the right, which leads to the station's parking lot. Turn left and walk about 400 meters, crossing an area where suspicious people -- though probably asleep on their feet -- watch you, and then a little square occupied by a clan of beggars. When you spot an enormous obelisk in the middle of an intersection of two avenues, turn left and walk another 400 meters. You will see a huge, old and beautiful station with a square in front of it, with lovely water fountains. Flee from there and head to a stretch of street to the right of that square. An old stage from carnivals past will be set up more or less in the middle of the charming little cobblestone street: that is where you will catch your next bus.
To enter the station you need a card with reloadable credits. A prudent traveler always leaves some credits on their card, to avoid queues and other availability problems when arriving tired from a trip, in a hurry, or at odd hours. That kind of person will realize they have been thoroughly swindled upon discovering that the credits on their card, topped up on their last visit to Belo Horizonte, three months ago, have expired and been absorbed by the public coffers. They will therefore have to buy more credits. The booth where cards are topped up opens at 5 a.m., but don't be surprised if it still hasn't opened when the first bus arrives, at 5:10.
With some luck, a young man in a hoodie, authorized by the two or three bus-system inspectors chatting cheerfully nearby, will be operating the turnstile. He lets the drunks, the hustlers and the street kids in without paying. Quite empathetic and perceptive of other people's despair, this good fellow will probably let you in without paying too.
Once inside the bus, don't be intimidated by the loudmouths and bullies who, deeply offended at the driver for stopping at the stations after the previous buses ignored these exalted passengers waiting there, come shouting to demand satisfaction.
The bus's final stop, 40 minutes later, is the Morro Alto terminal. There you will see, if you search carefully among various buses and people who arouse your most honest suspicion, a dark, unlit vehicle, numbered **5882**, sheltering inside it a driver and a fare collector sleeping the sleep of the just.
Wait by the door for another twenty minutes or so until, suddenly awake, the driver starts the bus, opens the doors and already begins, gently, to pull away. Run in, but then wait a while longer, while the people with loaded cards pass through and take the best seats, until the fare collector wakes up and decides to charge you the fare in that old means of payment, once the most liquid of all: cash.
This last bus should finally deliver you to Lagoa Santa.
-
# Jofer
Jofer was a different kind of player. At first glance, no: he seemed the same, a combative defensive midfielder, chasing opposing attackers relentlessly, a good player. But that wasn't what set Jofer apart. Jofer was, let's say, a shooter.
It started in the semifinal of a junior tournament. Jofer's team needed a draw and was under heavy pressure from the opponent, but the game was 1-1 and looked like it would stay that way, in that footballing way in which things look like, really look like, they will. Except that in the 46th minute of the second half they conceded a freak goal: Ruizinho from the other team sprinted down the left and, despite being left-footed, kept cutting inside, the defenders half-thinking it was basically over anyway, there should only be that one last play, the referee had given two minutes, Ruizinho shot, scored, and the goalkeeper, who only jumped after he had already seen there was no way, stood there cursing.
The ball went back to the center circle and was passed to Jofer; nobody even came to mark him, the other team was already celebrating, and rightly so, the referee was just messing around by letting the game continue, it was all over anyway. But no, he was right: one more minute of added time, fair. In one minute you can score a goal. But how? Jofer thought of those NBA games in which, with a few hundredths of a second left, the point guard hurls the ball at the basket any which way and sometimes scores. From behind the halfway line, maybe? I won't even have the strength to get it to the goal. I'll become a joke; better pass it to Fumaça over there and we lose without this humiliation at the end. But, hell, so what? I'll try anyway; worst case I'll say it was meant as a long ball, and in a few days everyone forgets. He looked at his own foot, turned it a little sideways, outward and then inward (well, if I strike it from here, just right, who knows?), nudged the ball to the side and hit it. The ball rose scandalously, really high, it must have gone up 200 meters. Jofer had no way of having the slightest idea. Then it started coming down, the big goalkeeper running back under the crossbar and watching the ball, arriving and jumping just to follow it, to watch, hanging from the crossbar, as the ball came in still quite high; it hit the inside of the side netting before hitting the ground, bouncing violently and bulging the net high up on the right-hand side, from the spectator's point of view.
But all of that was Jofer's dream. He dreamed it awake, one night when he took a long time to fall asleep, lying in his bed. He kept wondering whether it wouldn't be easy, if he practiced a lot, to hit the goal from very far, like in the dream, and whether goals couldn't be scored that way. The next day he asked Brunildinho, the goalkeepers' coach. It was hard to save those balls, especially if they came down from really high: the goalkeeper loses perspective, the wind alters the trajectory at every instant, there's spin, it drops fast; but of course it wasn't worth training for, the chance of hitting the goal was minuscule. But Jofer would only try it after training a lot and confirming what in his imagination seemed like an excellent idea.
He began training every day. First in secret, out of embarrassment before his teammates: he would arrive a little early and stay there, shooting from the center circle. At the slightest sign of anyone approaching, he would stop and go gather the balls. Later, when he started to hit the target, he lost the embarrassment. Everyone at the club found it funny when they saw Jofer training and then heard the explanation from someone's mouth; nobody took it very seriously, but nobody found it entirely ridiculous either. People laughed, but deep down they were really rooting for it to work.
It happened in a game that wasn't worth much, an ugly little draw, in the 40th minute of the second half; the opponents' marking was no longer applying pressure, everyone content with the draw and wanting to stop playing already, when Henrique, the left midfielder, humble, but still a bit intimidating to Jofer (he played really well), passed him the ball. Go on, try that madness of yours. He took on the responsibility for our introspective defensive midfielder. It would have been more believable if Jofer had missed, it being the first time he tried; there was still plenty of time for him to get his chance to be a hero, nobody gets it right on the first try, but he got it right. Almost as in the dream, Lucas, the goalkeeper, wasn't expecting it; when he saw the shot he laughed, stepped forward to catch the ball he judged would bounce inside the box, but it kept going, further and further, and by then Lucas was running, except he started to think it was going out, and that he would just hang from the crossbar and play his part of being on the ball. It turned out that because of that goal they finished second in their group in that little tournament, instead of third, and it made no difference at all.
-
# The flaw of "just use paypal/coinbase" arguments
For the millionth time I read somewhere that "custodial bitcoin is not bitcoin" and that "if you're going to use custodial, better use PayPal". No, actually it was "better use Coinbase", but I had heard the "PayPal" version in the past.
There are many reasons why using PayPal is not the same as using a custodial Bitcoin service or wallet that are obvious and not relevant here, such as the fact that you can't have Bitcoin balances on PayPal (or maybe now you can? but you can't send them around); plus all the reasons that are also valid for Coinbase, such as having to hand over all your data, selfies of yourself, your government documents and so on -- but let's ignore these reasons for now.
The most important reason why it isn't the same thing is that when you're using Coinbase you are stuck in Coinbase. Your Coinbase coins cannot be used to pay anyone that isn't in Coinbase. So Coinbase-style custodianship doesn't help Bitcoin. If you want to move out of Coinbase you have to withdraw from Coinbase.
Custodianship on Lightning is of a very different nature. You can pay people on other custodial platforms, people hosting their own Lightning nodes, and so on.
That kind of custodianship doesn't do any harm to anyone, doesn't fracture the network, doesn't reduce the network effect of Lightning, in fact it increases it.
-
# My personal experience (as a complete ignorant) of the blocksize debate in 2017
In the beginning of 2017 I didn't know Bitcoin was having a "blocksize debate". I had stopped paying attention to Bitcoin in 2014 after reading Tim Swanson's book on shitcoinery, and I was surprised people even cared about Bitcoin still, while Ethereum and other fancy things were around.
My introduction to the subject was this interview with Andrew Stone and Andrew Clifford from Bitcoin Unlimited (I still don't know who these guys are). I listened to it and kind of liked the conspiracy theory about "a group of developers trying, against miners and users, to control the whole ecosystem by not allowing blocks to grow" (actually, if you listen to the interview that announced the creation of Blockstream and the sidechains whitepaper, it does sound like a government agent bribing all the Core developers into forming a consortium that will turn Bitcoin into an Ethereum-like shitcoin under their control -- but this is just a useless digression).
Some time later I listened to this interview with Jimmy Song and was introduced to the two hard forks, the conspiracies and the New York Agreement, and I got excited because I didn't care about Bitcoin (I'm ashamed to remember this feeling) and wanted to see things changing, people fighting, Bitcoin burning, for no reason. Oddly, what I grasped from the interview was that Jimmy Song was defending the agreement and expecting everybody to fulfill it.
When the day actually came and "Bitcoin Cash" forked, I looked at it with pity, because it clearly looked like a failure from the beginning, but I still cheered for it a bit, still not knowing anything about the debate besides the fact that blocks were bigger on BCH, which seemed like a very reductionist explanation to me.
"Of course it's not just making blocks bigger, that would be too simple, they probably have a very complex plan I'm not apt to understand", I thought.
To my surprise, the entire argument was actually just that: bigger blocks, bigger blocks. I came to that conclusion by listening to tomwoods.com/1064, a debate in which reasonable arguments faced childish claims. That debate gave me perspective and was a clear, undisputed win for Jameson Lopp against Roger Ver.
Actually, some time before that I had listened to another Tom Woods Show episode thinking it was going to be about Bitcoin, but in fact it was just propaganda about a debate I had almost forgotten. And nothing about Bitcoin, everything about "Bitcoin Cash" and how there were two Bitcoins, one legitimate and the other illegitimate.
So, from the perspective of someone who came to the debate totally fresh and listened only to the big-blocker arguments for a long time: they still don't convince anyone with some common sense (as I would like to think of myself); they just sound like mad dogs, and everything they say works against their own case.
---
Fast forward to the present, and with much more understanding of the issues, I started digging up some material from 2016-2017 about the debate to try to get more context, and found this ridiculous interview with Mike Hearn. It isn't a waste of time to listen to it if you're not familiar with the debate from that time.
As I should probably have expected from my experience with Epicenter.tv, both interviewers agree with Mike Hearn's ridiculous claims about how (not his words) we have to subsidize the few thousand current Bitcoin users by preventing fees from increasing, and how there are no trade-offs to doing that -- and even with everybody agreeing they all manage to sound stupid. There's not a single defendable phrase in the entire interview, no criticism makes any sense, and it makes me feel bad for the guy, as he sounds so self-assured and obviously right in his own mind.
After learning about these and other adventures of stupid people with high influence in the Bitcoin world trying to impose their idiocy on others, it feels even more odd and unexpected to find Bitcoin on the right track. Generally in politics the dumbest wins, but apparently not in Bitcoin.
Bitcoin is a miracle.
-
# idea: Per-paragraph paywalls
Using the lnurl-allowance protocol, a website could, instead of putting a paywall over the entire site, charge a reader for only the paragraphs they read. Of course this requires the reader to trust the website, but that is normal. The website could just hide the rest of the article until the invoice for the paragraph just read is paid.
This idea came from Colin from the _Unhashed Podcast_.
Could also work with podcasts and videos.
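A minimal server-side sketch of the idea (all of it hypothetical, since lnurl-allowance itself is still just an idea):

```python
# Hypothetical server-side flow: reveal one paragraph at a time, each one
# unlocked by the payment of a tiny invoice. lnurl-allowance would let the
# wallet pay these automatically, up to a user-approved budget.
article = ["First paragraph...", "Second paragraph...", "Third paragraph..."]
PRICE_MSAT = 1000

def get_paragraph(index: int, paid_invoices: set) -> str:
    invoice_id = f"article123:paragraph:{index}"  # invented id scheme
    if index == 0 or invoice_id in paid_invoices:
        return article[index]
    return f"[pay {PRICE_MSAT} msat to unlock]"  # client shows an invoice here
```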
-
# SummaDB
This was a hierarchical database server similar to the original Firebase. Records were stored in a LevelDB under different paths, like:
- `/fruits/banana/color`: `yellow`
- `/fruits/banana/flavor`: `sweet`
And they could be queried by path too, using HTTP. A call to `http://hostname:port/fruits/banana`, for example, would return a JSON document like
```json
{
  "color": "yellow",
  "flavor": "sweet"
}
```
While a call to `/fruits` would return
```json
{
  "banana": {
    "color": "yellow",
    "flavor": "sweet"
  }
}
```
`POST`, `PUT` and `PATCH` requests also worked.
In some cases the values would be under a special `"_val"` property to disambiguate them from paths. (I may be missing some other details that I forgot.)
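A hypothetical usage sketch (assuming a local server; the exact API shape is among the details I may be misremembering):

```python
import requests  # a SummaDB server is assumed to be listening on localhost:5000

# update a single property under a path
requests.patch("http://localhost:5000/fruits/banana", json={"color": "green"})

# read it back; a path that had both a value and children would have come
# back with the value under "_val", something like
# {"_val": "...", "color": "green", "flavor": "sweet"}
print(requests.get("http://localhost:5000/fruits/banana").json())
```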
GraphQL was also supported as a query language, so a query like
```graphql
query {
  fruits {
    banana {
      color
    }
  }
}
```
would return `{"fruits": {"banana": {"color": "yellow"}}}`.
## SummulaDB
SummulaDB was a browser/JavaScript build of SummaDB. It ran the same Go code compiled with GopherJS, using PouchDB as the storage backend, if I remember correctly.
It had replication between browser and server built-in, and one could replicate just subtrees of the main tree, so you could have stuff like this in the server:
```json
{
  "users": {
    "bob": {},
    "alice": {}
  }
}
```
And then only allow Bob to replicate `/users/bob` and Alice to replicate `/users/alice`. I'm fairly sure the required auth stuff was also built in.
There was also a PouchDB plugin to make this process smoother and data access more intuitive (it would hide the `_val` stuff and allow properties to be accessed directly; today I wouldn't waste time working on these hidden magic things).
## The computed properties complexity
The next step, which I never managed to get fully working and which caused me to give up because of the complexity, was the ability to automatically and dynamically compute materialized properties based on data in the tree.
The idea was partly inspired by CouchDB's computed views and how limited they were; I wanted something that would be super powerful. Like, given
```json
{
  "matches": {
    "1": {
      "team1": "A",
      "team2": "B",
      "score": "2x1",
      "date": "2020-01-02"
    },
    "2": {
      "team1": "D",
      "team2": "C",
      "score": "3x2",
      "date": "2020-01-07"
    }
  }
}
```
One should be able to add a computed property at `/matches/standings` that computed the standings of all teams from all the matches, for example.
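The computation itself was the easy part. For example, a standings function could have looked like this (a Python sketch of the idea, though SummaDB itself was written in Go); the hard part was recomputing it incrementally whenever anything under `/matches` changed:

```python
def standings(matches: dict) -> dict:
    """Hypothetical computed property: fold all matches into a table of
    points per team (3 for a win, 1 for a draw)."""
    table = {}
    for match in matches.values():
        s1, s2 = (int(n) for n in match["score"].split("x"))
        t1, t2 = match["team1"], match["team2"]
        table.setdefault(t1, 0)
        table.setdefault(t2, 0)
        if s1 > s2:
            table[t1] += 3
        elif s2 > s1:
            table[t2] += 3
        else:
            table[t1] += 1
            table[t2] += 1
    return table

matches = {
    "1": {"team1": "A", "team2": "B", "score": "2x1", "date": "2020-01-02"},
    "2": {"team1": "D", "team2": "C", "score": "3x2", "date": "2020-01-07"},
}
print(standings(matches))  # {'A': 3, 'B': 0, 'D': 3, 'C': 0}
```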
I tried to complete this in multiple ways, but they all added much more complexity than I could handle. Maybe it would have worked better in a more flexible, powerful, functional language; or if I had had more time and patience, or more people.
## Screenshots
This is just a very simple, unfinished admin frontend view of the hierarchical dataset.
![admin view screenshot](https://user-images.githubusercontent.com/1653275/212715675-e7218456-ec0e-46eb-b6b8-9ee8e1ded71e.png)
- https://github.com/fiatjaf/summadb
- https://github.com/fiatjaf/summuladb
- https://github.com/fiatjaf/pouch-summa
-
# Parallel Chains
We want merged-mined blockchains. We want them because it is possible to do things on them that aren't doable on the normal Bitcoin blockchain, which is rightfully too expensive for that; there are other things besides world money that could benefit from a "distributed ledger" -- just like people believed in 2013 -- such as issued assets and domain names (to give just the most obvious examples).
On the other hand we can't have -- as people believed in 2013 -- a copy of Bitcoin for every little idea, each with its own native token, mined by proof-of-work, that must get off the ground from being completely valueless into having some value by way of a miracle that has operated only once, with Bitcoin.
It's also not a good idea to have blockchains with custom merged-mining protocols (like Namecoin and Rootstock) that require Bitcoin miners to run their software and be active participants and miners of that other network besides Bitcoin, because it's too cumbersome for everybody.
Luckily [Ruben Somsen invented this protocol for blind merged-mining](https://gist.github.com/RubenSomsen/5e4be6d18e5fa526b17d8b34906b16a5) that solves the issue above. It doesn't solve the fact that each parallel chain still needs some form of "native" token to pay miners -- unless it uses some other method that doesn't require a native token, such as trusted payments outside the chain.
## How does it work
With the `SIGHASH_NOINPUT`/`SIGHASH_ANYPREVOUT` soft fork[^eltoo] it becomes possible to create presigned transactions that aren't tied to any specific previous UTXO.
Then you create a long sequence of transactions (enough to last for many, many years), each with an `nLockTime` of 1 and each spending the output of the previous one (you create them from the last to the first). Since their `scriptSig` (the unlocking script) uses `SIGHASH_ANYPREVOUT`, the signature doesn't commit to the previous TXO. So, in a sequence `A0-->B` (B spends output 0 from A), you can, for example, include the signature for "spending A0 on B" inside the `scriptPubKey` (the locking script) of `A0`.
With the contraption described above it is possible to make a long string of transactions that everybody knows (and knows how to generate), where each transaction can only be spent by the next, previously decided transaction, no matter what anyone does, and there must always be at least one block of difference between them.
Then you combine it with `RBF`, `SIGHASH_SINGLE` and `SIGHASH_ANYONECANPAY` so that parallel-chain miners can add inputs and outputs and compete on fees: each includes their own outputs to get change back, while also writing a hash of the parallel block into that change output. And everything works perfectly: everybody tries to spend the same output from the long string, each with a different parallel block hash; only the highest bidder gets their transaction included on the Bitcoin chain, and thus only one parallel block is mined.
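To make the backwards construction concrete, here is a conceptual sketch with entirely made-up types -- this is not a real Bitcoin library; `signAnyprevout` stands in for producing a `SIGHASH_ANYPREVOUT` signature, which is what lets each locking script carry the signature that spends it before the previous txid is known:
```go
package main

import "fmt"

// Tx is a drastically simplified stand-in for a Bitcoin transaction.
type Tx struct {
	NLockTime  int
	LockScript string // embeds the signature that spends this output into the next tx
}

// hypothetical: an ANYPREVOUT signature commits only to the *next*
// transaction, never to the txid of the transaction being spent
func signAnyprevout(next *Tx) string {
	if next == nil {
		return "end-of-sequence"
	}
	return fmt.Sprintf("apo-signature-over(%p)", next)
}

// build the sequence from the last transaction to the first
func buildSequence(n int) []*Tx {
	txs := make([]*Tx, n)
	for i := n - 1; i >= 0; i-- {
		var next *Tx
		if i+1 < n {
			next = txs[i+1]
		}
		txs[i] = &Tx{NLockTime: 1, LockScript: signAnyprevout(next)}
	}
	return txs
}

func main() {
	seq := buildSequence(1000000) // enough to last for many years
	fmt.Println(seq[0].LockScript)
}
```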
## See also
- [Drivechain](nostr:naddr1qq9xgunfwejkx6rpd9hqzythwden5te0ve5kzar2v9nzucm0d5pzqwlsccluhy6xxsr6l9a9uhhxf75g85g8a709tprjcn4e42h053vaqvzqqqr4gumtjfnp)
[^eltoo]: The same thing used in [Eltoo](nostr:naddr1qqyxvenyvejnwdejqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c6qlqxc).
-
# Deodorant pill
In episode who-knows-which of Aleixo FM, Bruno Aleixo says drunks always have the best ideas, and then tells an idea he had while drunk: a pill that works as a deodorant. Instead of applying spray or roll-on deodorant, a person could just take the pill and be done with it; it's much more practical, and in cold weather you can get dressed faster, without having to stand there bare-chested rubbing things on yourself. When Busto asks him whether something like that could actually be manufactured, he says he doesn't know, he's not a scientist, he just has the ideas.
This silly passage from a comedy show hides a truth about the scientistic doctrine that permeates society: the doctrine according to which technological innovations, and innovations of every kind, come from science, and therefore the State must take money from working people and give it to scientists. At this point nobody knows what a scientist is anymore; all concreteness is gone, only the name remains: "scientist". So they go looking for this scientist, and he turns out to be a guy who graduated from a university and is doing a master's degree. There you go: just give this guy money and everything will be fine.
Apart from the problem of the disconnect between reality and the thesis, there is also, of course, the problem of the thesis itself: it makes no sense for a scientist to go around looking for ways to realize an idea -- one that nobody knows is possible or even desirable -- that he or someone else had; quite the contrary (but I won't say here what the scientist was supposed to be doing instead, because that would be contradictory, and I don't think scientists should even exist).
What I really wanted to say was this: the whole scientific apparatus of our society -- all the departments, universities, budgets, grants and journals -- comes down to a bunch of people trying to figure out how to make a deodorant pill.
-
# Zettelkasten
<https://writingcooperative.com/zettelkasten-how-one-german-scholar-was-so-freakishly-productive-997e4e0ca125> (a somewhat stupid article, but useful).
This incredible technique of saving notes without categories, without folders, without a predefined hierarchy -- just making references from one note to another and supposedly letting an order (or heterarchy, as they put it) emerge from the chaos -- seems to be what was missing for me to manage to write down my thoughts and ideas in a decent way. We'll see.
Oh, and I'm going to use that [`neuron`](https://github.com/srid/neuron) thing, which also generates websites from the notes; I think it will be good.
-
# `OP_CHECKTEMPLATEVERIFY` and the "covenants" drama
There are many ideas for "covenants" (I don't think this concept helps in the specific case of examining proposals, but fine). Some people think "we" (it's not obvious who is included in this group) should somehow examine them and come up with the perfect synthesis.
It is not clear what form this magic gathering of ideas will take and who (or which ideas) will be allowed to speak, but suppose it happens and there is intense research and conversations and people (ideas) really enjoy themselves in the process.
What are we left with at the end? Someone has to actually commit the time, put in the effort and come up with a concrete proposal to be implemented on Bitcoin, and whatever the result is it will have trade-offs. Some great features will not make it into this proposal, others will make it in a worsened form, some will be contemplated very nicely, and there will be some extra costs related to maintenance or code complexity that will have to be accepted. Someone, a concrete person, will decide upon these things using their own personal preferences and biases, and many people will not be pleased with their choices.
That has already happened. Jeremy Rubin has already conjured all the covenant ideas in a magic gathering that lasted more than 3 years and came up with a synthesis that has the best trade-offs he could find. CTV is the result of that operation.
---
The fate of CTV in popular opinion, illustrated by the thoughtless responses it has evoked such as "can we do better?" and "we need more review and research and more consideration of other ideas for covenants", is a preview of what would probably happen if these suggestions were followed and someone spent the next 3 years considering ideas, talking to other researchers and coming up with a new synthesis. Again, that person would be faced with "can we do better?" responses from people who were not happy enough with the choices.
And unless some famous Bitcoin Core developers, or retired Bitcoin Core developers, were personally attracted to this synthesis, they would take a long time to review it and give it their blessing.
To summarize the argument of this article: the actual issue in the current CTV drama is that there exist hidden criteria for proposals to be accepted by the general community into Bitcoin, and no one has these criteria clear in their minds. It is neither as simple and straightforward as "do research", nor as humanly impossible as "get consensus"; there is a much bigger social element to it, and I do not know the exact form of these hidden criteria either.
This is said not to blame anyone -- except the ignorant people who are unaware of the existence of these things, keep repeating completely false and unhelpful advice to Jeremy Rubin, and are not self-conscious enough to ever realize what they're doing.
-
# Flowi.es
At the time I thought [Workflowy][workflowy] had the ideal UI for everything. I wanted to implement my [custom app maker](nostr:naddr1qqyxgcejv5unzd33qyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cz3va32) on it, but ended up doing this: a platform for enhancing Workflowy with extra features:
- An email reminder based on dates input in items
- A website generator, similar to [Websites For Trello](nostr:naddr1qqyrydpkvverwvehqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c9d4yku), also based on [Classless Templates](nostr:naddr1qqyxyv35vymk2vfsqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cqwgdau)
I had also forgotten that this was based on CouchDB and had some _couchapp_ functionality.
![screenshot](https://camo.githubusercontent.com/d3f904a4b01eb613796ace0c33ca101b2fea8199/68747470733a2f2f617263686976652e69732f76414938352f396539323735353334373761643235633364643666343766626635313636666163666534366162632f7363722e706e67)
- <https://flowi.es>
- <https://github.com/fiatjaf/flowies>
[workflowy]: <https://workflowy.com/>
-
# On the state of programs and browsers
There are basically (not exhaustively) two kinds of programs one can run on a computer nowadays:
1.1. A program that is installed, permanent, has direct access to the Operating System, can draw whatever it wants, modify files, interact with other programs and so on;
1.2. A program that is transient, fetched from someone else's server at run time, interpreted, rendered and executed by another program that bridges the access of that transient program to the OS and other things.
Meanwhile, web browsers have basically (not exhaustively) two use cases:
2.1. Display text, pictures, videos hosted on someone else's computer;
2.2. Execute incredibly complex programs that are fetched at run time, executed and so on -- you get it, it's the same 1.2.
These two use cases for browsers are at big odds with one another. While stretching itself to become more and more a platform for programs that can do basically anything (in the 1.1 sense), the browser is still restricted to being a 1.2 platform. At the same time, websites that were supposed to be 2.1 sometimes get confused and start acting as if they were 2.2 -- and other confusing mixed-up stuff.
I could spend hours on philosophical inquiries into the nature of browsers, how rewriting everything in JavaScript is not healthy, or where everything went wrong, but I think other people have done this already.
One thing that bothers me a lot, though, is that computers can do many things, and with the internet and the current state of technology it's fairly easy to implement tools that would help in many aspects of human existence and provide high-quality, useful programs; with the help of a server to coordinate access, store data, authenticate users and so on, many things are possible. However, due to the nature of UI in the browser, it's very hard to get any useful tool to users.
Writing a UI, even the most basic UI imaginable (some text input boxes and some buttons, or a table), can take a long time -- always more than the time necessary to code the actual core features of whatever program is being developed -- and that is assuming the person capable of writing interesting programs that do the actual functionality in the backend is also capable of dealing with JavaScript and the giant amount of frameworks, transpilers, styling stuff, CSS, the fact that all this is built on top of HTML, and so on.
This is not good.
-
# A big Ethereum problem that is fixed by Drivechain
While reading the following paragraphs, assume Drivechain itself will be a "smart contract platform", like Ethereum. And that it won't be used to launch an Ethereum blockchain copy, but instead **each different Ethereum contract could be turned into a different sidechain** under [BIP300](https://bips.xyz/300) rules.
## A big Ethereum problem
Anyone can publish any "contract" to Ethereum. Often people will come up with somewhat interesting ideas and publish them. Since they want money, they will add an unnecessary token and use it to bring revenue to themselves, gamify the usage of their contract somehow, and keep some control over the supposedly open protocol they've created by keeping a majority of the tokens. They will use the profits on marketing and branding, have a visual identity, a central website and a forum with support personnel and so on: their _somewhat interesting idea_ has become a full-fledged company.
If they have success, then another company will appear in the space and copy the idea, launch it using exactly the same strategy with a tweak, and try to capture the customers of the first company plus new people. And then another, and another, and another. Very often these contracts require some network effect to work, i.e., they require people to be using them so that others will use them. The fact that the market is now split into multiple companies offering roughly the same product hurts that, such that none of these protocols ever gets enough usage to become _really_ useful in the way it was first conceived. At this point it doesn't matter, though: they get some usage, and they use that in their marketing material. It becomes a race to pump the value of the tokens, and the current usage is just another data point used for that purpose. The company will even start giving out money to attract new users, and make other weird moves that have no relationship with the initial somewhat interesting idea.
Once in a lifetime the first implementer of these things is not a company seeking profits, but some altruistic developer or company that believes in Ethereum and wants to see it grow -- or, more likely, someone financed by the Ethereum Foundation, which allegedly doesn't like these token schemes and would prefer everybody to use the token it issued first, ETH -- but that's a fruitless enterprise, because someone else will copy the idea anyway and turn it into a company as described above.
## How Drivechain fixes it
In the [Drivechain](nostr:naddr1qq9xgunfwejkx6rpd9hqzythwden5te0ve5kzar2v9nzucm0d5pzqwlsccluhy6xxsr6l9a9uhhxf75g85g8a709tprjcn4e42h053vaqvzqqqr4gumtjfnp) world, if someone had an idea, they would -- as happens all the time with Bitcoin things -- publish it in a public forum. Other members of the community would evaluate the idea, add or remove things, and all interested parties would contribute to making it the best possible incarnation of that idea. Once the design was settled, someone would volunteer to start writing the code to turn it into a sidechain. Maybe some company would fund those efforts and then more people would join. It's not a perfect process, and one that often involves altruism, but Bitcoin inspires people to do these things.
Slowly, the thing would get built, tested, activated as a sidechain on testnet, tested more; and at this point, luckily, the entire community of interested Bitcoin users and miners would have grown to like the idea and see its benefits. It could then be proposed for activation according to [BIP300](https://bips.xyz/300) rules.
Once it was activated, the entire pool of interested users would join it. And it would be impossible for someone else to create a copy of it, because everybody would instantly notice it was a copy. There would be no token, no one profiting directly from the operations of that "smart contract". And everybody would be incentivized to join and tell others to join that same sidechain, since the network effect was already the biggest there; they would know that more network effect could only be good for everybody involved, and there would be no competing marketing and free token giveaways from competing entities.
## See also
- [Upgrading 'Smart Contracts' to 'Wise Contracts'](https://www.truthcoin.info/blog/wise-contracts/), by Paul Sztorc
- [Drivechain](nostr:naddr1qq9xgunfwejkx6rpd9hqzythwden5te0ve5kzar2v9nzucm0d5pzqwlsccluhy6xxsr6l9a9uhhxf75g85g8a709tprjcn4e42h053vaqvzqqqr4gumtjfnp)
- [Drivechain comparison with Ethereum](nostr:naddr1qqyx2dp58qcx2wpjqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cane7px)
- [Alternatives to Drivechain](nostr:naddr1qqyrqenzvvukvcfkqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823csjg2t6)
-
# A response to Achim Warner's "Drivechain brings politics to miners" article
I mean this article: https://achimwarner.medium.com/thoughts-on-drivechain-i-miners-can-do-things-about-which-we-will-argue-whether-it-is-actually-a5c3c022dbd2
There are basically two claims here:
### 1. Some corporate interests might want to secure sidechains for themselves and thus they will bribe miners to have these activated
First, it's hard to imagine why they would want such a thing. Are they going to make a proprietary KYC chain only for their users? They could do that in a corporate way, or with a federation, as Facebook tried to do, and that would provide more value to their users than a cumbersome pseudo-decentralized system in which they don't even have the power to issue currency. Also, if Facebook couldn't get away with its federated shitcoin because the government was mad, what says the government won't be mad at a sidechain? And finally, why would Facebook want to give custody of the coins of its proprietary closed-garden Bitcoin-backed ecosystem to a random, open and ever-changing set of miners?
But even if they do succeed in making their sidechain, and it is very popular, such that it pays miners fees and people love it: well, then why not? Let them have it. It's not going to hurt anyone more than a proprietary shitcoin would anyway. If Facebook really wants a closed ecosystem backed by Bitcoin, that probably means we are winning big.
### 2. Miners will be required to vote on the validity of debatable things
He cites the examples of a PoS sidechain, an assassination market, a sidechain full of nazis, a sidechain deemed illegal by the US government and so on.
There is a simple solution to all of this: just kill these sidechains. Either miners can take the money from them for themselves, or they can just refuse to engage and freeze the coins there forever, or they can even give the coins to governments, if they want. It is an entirely good thing that evil sidechains, or sidechains that use horrible technology that doesn't even let us know who owns each coin, get annihilated. And it was the responsibility of the people who put money in there to evaluate beforehand and know, for example, that PoS is not deterministic.
About governments censoring and wanting to steal money, or criminals using sidechains, I think the argument is very weak, because these same things can happen today and may even be happening already: governments ordering mining pools not to mine such-and-such transactions from such-and-such people, or forcing them to reorg to steal money from criminals and whatnot. All this is expected to happen in normal Bitcoin. But both in normal Bitcoin and in Drivechain, decentralization fixes the problem by making it impossible for governments to catch all the miners required to control the chain like that -- and in fact fixing that problem is the only reason we need decentralization.
-
# neuron.vim
I started using this [neuron][neuron] thing to create and update this same [zettelkasten](nostr:naddr1qqyrwwfh8yurgefnqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c7qmjrw), but the [existing vim plugin](https://github.com/ihsanturk/neuron.vim) had too many problems, so I forked it and ended up changing almost everything.
Since the upstream repository was somewhat abandoned, most users and people who were trying to contribute upstream migrated to my fork too.
- <https://github.com/fiatjaf/neuron.vim>
[neuron]: https://github.com/srid/neuron
-
# Module Linker
![](https://raw.githubusercontent.com/fiatjaf/module-linker/gh-pages/screenshot/python-screencast.gif)
A browser extension that reads source code on GitHub and tries to find links to imported dependencies so you can click on them and navigate through either GitHub or package repositories or base language documentation. Works for many languages at different levels of completeness.
- <https://github.com/fiatjaf/module-linker>
- <https://module-linker.alhur.es/>
- <https://addons.mozilla.org/firefox/addon/module-linker>
- <https://chrome.google.com/webstore/detail/dglofghjinifeolcpjfjmfdnnbaanggn>
-
# Using Spacechains and Fedimint to solve scaling
What if, instead of trying to create complicated "layer 2" setups involving nouveau cryptographic techniques, we just did the following:
- we take the Fedimint source code, remove the "mint" stuff, and just use their federation stuff to secure coins with multisig;
- then we make a spacechain;
- and we make the federations issue multisig-btc tokens on it;
- and then we put some uniswap-like thing in there to allow these tokens to be exchanged freely.
## Why?
The recent spike in fees caused by Ordinals and BRC-20 shitcoinery has shown that Lightning isn't a silver bullet. Channels are too fragile; it costs a lot to open a channel in a high-fee environment, to run a routing node, and so on.
People who want to keep using Lightning are instead flocking to the big custodial Lightning providers: Wallet of Satoshi, ZEBEDEE, OpenNode and so on. We could leverage the trust people have in these companies (and in individuals operating shadow Lightning providers) and turn each of them into a btc-token issuer. Each issues its own token, transactions flow freely, and each person holds only assets from the issuers they trust most.
-
# Just malinvestment
Traditionally the Austrian Theory of the Business Cycle has been explained and reworked in many ways, but the most widely accepted version (or the one closest to the Mises or Hayek views) is that banks (or the central bank) cause the general interest rate to decline through the creation of new money, and that prompts entrepreneurs to invest in projects of longer duration. This can be confusing, because sometimes entrepreneurs embark on very short-term projects during one of these bubbles and still contribute to the overall cycle.
The solution to the "longer term" confusion is to think of the entire economy going long-term, not of individual entrepreneurs. If one entrepreneur makes an investment in something that looks simple, he may actually, knowingly or not, be inserting himself into a bigger machine that is in fact involved in producing longer-term things. Incidentally, this line of thinking also answers the biggest criticism of the Austrian Business Cycle Theory, that of the rational-expectations people, who say: "oh, but can't the entrepreneurs know that the interest rate is artificially low and decide not to make long-term investments?" ("and if they don't know, shouldn't they lose money and be replaced, as in a normal economy?"). Well, the answer is that they are not really relying on the interest rate; they are only looking for profit opportunities, and this is the key to another confusion that has always followed my thinking on this topic.
If a guy opens a bar in an area of town where many new buildings are being built during a "housing bubble", he may not know it, but he is inserting himself right into the eye of that business cycle. He expects all these building projects to continue, and all the people involved in them to keep getting paid and be able to spend at his bar, and so on. That is a bet that may or may not end up paying off.
Now what does that bar investment have to do with the interest rate? Nothing. He is just a guy who saw a business opportunity in a place where hungry people with money had no bar to buy things in, so he opened a bar. Along the way he made some calculations about all the ending, ongoing and future building projects in the area, and about the people who would live or work there afterwards (after all, the buildings were being built with the expectation of being used), and so on; there are no interest rate calculations involved. And yet it may turn out to be a malinvestment, because some building projects will end up being canceled and the actual usage of the finished ones will turn out to be smaller than predicted.
This bubble may have been caused by a decline in interest rates that prompted some people to start buying houses they otherwise wouldn't have, but that is just a small detail. The bubble can only be kept going by a constant influx of new money into the economy; the focus on the interest rate is wrong. If new money is printed and used by the government to buy ships, there will be a boom and a bubble in the ship market, involving all parts of the ship production process and also the bars opened near the areas of town where ships are built and where new people are being hired at higher salaries to do things that will eventually contribute to the production of ships that will then be sold to the government.
It's not interest rates or the length of the production process that matters; it's just printed money and malinvestment.
-
# hledger-web
A Haskell app that uses [Miso](https://hackage.haskell.org/package/miso) and [hledger's Haskell libraries](https://hledger.org/) plus [ghcjs](https://github.com/ghcjs/ghcjs) to be compiled to a web page, and then adds [optional remoteStorage](https://remotestorage.io/) so you can store your ledger data somewhere else.
This was my introduction to Haskell. It was built at a time when I thought remoteStorage was a good idea that solved many problems, and that it could use some help in the form of yet another somewhat-useless-but-cool project using it that could be [added to their wiki](https://wiki.remotestorage.io/Apps).
- <https://hledger.alhur.es/>
- <https://github.com/fiatjaf/hledger-web>
## See also
- [My stupid introduction to Haskell](nostr:naddr1qqyrxveevscrqcmrqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cxd5qyk)
- [LessPass remoteStorage](nostr:naddr1qqyrsctpxfjnqepeqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cfa6z2z)
- [TiddlyWiki remoteStorage](nostr:naddr1qqyxxve4x33nqerrqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cat32d3)
-
# Carl R. Rogers on science
> I believe the primary objective of science is to furnish a more secure hypothesis, conviction and faith, one that better satisfies the investigator himself. To the extent that the scientist seeks to prove something to someone else -- an error I have fallen into more than once -- I believe he is using science to remedy a personal insecurity, diverting it from its true creative role in the service of the individual.
_Tornar-se Pessoa_ (_On Becoming a Person_), random page
-
# contratos.alhur.es
A website that allowed people to fill in a form and get a standard _Contrato de Locação_ (rental contract).
Better than all the other "templates" that float around the internet, which are badly formatted `.doc` files.
It was fully programmable so other templates could be added later, but I never did.
This website made maybe one dollar in Google Ads (which Google has probably stolen, like so many other dollars, with their bizarre payout requirements).
- <https://github.com/fiatjaf/contratos>
- <http://contratos.alhur.es>
-
# pertinent comment by Olavo de Carvalho on undue attributions of events to the "spontaneous order"
Here is one example among a thousand others, taken from my class handouts, of how the relations between deliberate and accidental factors in historical action are analyzed. Mr. Beltrão is INFINITELY BENEATH the possibility of discussing these things, and precisely for that reason he attributes to me a simple-mindedness that is his own, not mine:
I have quoted this paragraph by Georg Jellinek a thousand times and will quote it again: "The phenomena of social life are divided into two classes: those that are essentially determined by a directing will and those that exist, or can exist, without any organization due to acts of will. The former are necessarily subject to a plan, to an order emanating from a conscious will, in opposition to the latter, whose ordering rests on very different forces."
This distinction is crucial for historians and strategic analysts, not because it is clear in every case, but precisely because it is not. The most common error in this order of studies consists in attributing to a conscious intention that which results from an uncontrolled, and sometimes uncontrollable, combination of forces; or, inversely, in failing to see, behind an apparently fortuitous constellation of circumstances, the intelligence that planned and subtly directed the course of events.
An example of the first error is the Protocols of the Elders of Zion, which see behind practically everything bad that happens in the world the malign premeditation of a small number of people, a Jewish elite secretly gathered in some uncertain, unknown place.
What makes this fantasy especially convincing, some time after its publication, is that some of the events predicted in it take place right before our eyes. The hasty reader sees in this a confirmation, jumping imprudently from the observation of the fact to the imputation of authorship. Yes, some of the ideas announced in the Protocols were carried out, but not by a distinctly Jewish elite, much less to the benefit of the Jews, whose role in most cases consisted eminently in taking the blame. Many rich and powerful groups have ambitions of global domination and, once the book was published -- a book which in certain passages shows flashes of authentic strategic genius of the Machiavellian type -- it was practically impossible for them to learn nothing from it and not attempt to put some of its schemes into practice, with the additional advantage that these schemes already came with a prefabricated scapegoat. It is also impossible that there should not be a single Jew by origin in the middle or at the top of these groups. A little deforming selectivity is thus enough to swap cause for effect and the innocent for the guilty.
But the most common error nowadays is not that one. It is the opposite: the obstinate refusal to see any premeditation, any authorship, even behind remarkably convergent events that, without it, would have to be explained by the magical force of coincidences, by the action of angels and demons, by the "invisible hand" of market forces, or by hypothetical "laws of History" or "sociological constants" never proven, which in the observer's imagination direct everything anonymously and without human intervention.
The causes generating this error are, roughly:
First: reducing human actions to the effects of impersonal, anonymous forces requires the use of abstract generic concepts that automatically give this kind of approach the appearance of something very scientific. Much more scientific, to the lay observer, than the patient, meticulous historical reconstruction of the chains of facts which, under a veil of confusion, sometimes go back to a discreet and almost imperceptible initial authorship. Since the study of historical-political phenomena is increasingly an academic occupation whose success depends on funding, sponsorships, backing in the popular media and good relations with the establishment, it is almost inevitable that, faced with a question of this order, few will resist the temptation to kill the problem right away with two or three elegant generalizations and shine as sages of the occasion, instead of taking on historical tracing work that can demand decades of research.
Second: any group or entity that ventures into long-term historical-political action must possess not only the means to undertake it but also, necessarily, the means to control its public repercussions, accentuating what suits it and covering up whatever might abort the intended results. This implies vast, deep and lasting interventions in the mental environment. [Etc. etc. etc.]
(on Facebook, July 17, 2013)
-
# Reasons why Lightning is not that great
Some Bitcoiners, me included, were fooled by the hyperbolic discourse that presented Lightning as a magical scaling solution with no flaws. This is an attempt to list some of the actual flaws uncovered after 5 years of experience. The point of this article is not to say that Lightning is a completely worthless piece of crap, but only to highlight the fact that Bitcoin needs to put more focus on developing and thinking about other scaling solutions (such as [Drivechain](nostr:naddr1qq9xgunfwejkx6rpd9hqzythwden5te0ve5kzar2v9nzucm0d5pzqwlsccluhy6xxsr6l9a9uhhxf75g85g8a709tprjcn4e42h053vaqvzqqqr4gumtjfnp), less crappy and more decentralized trusted-channel networks, and [statechains](https://bitcoinmagazine.com/technical/statechains-sending-keys-not-coins-to-scale-bitcoin-off-chain)).
## Unbearable experience
Maintaining a node is cumbersome: you have to deal with closed channels, allocate funds, pay fees unpredictably, choose new channels to open, store channel state backups -- or delegate all these decisions to some weird AI or third-party service. It's not feasible for normal people.
## Channels fail for no good reason all the time
Every time nodes disagree on anything they close channels. There have been dozens, maybe hundreds, of bugs that led to channels being closed in the past. Implementors have been fixing these bugs, but since these node implementations continue to be worked on and new features continue to be added, we can be quite sure that new bugs continue to be introduced.
## Trimmed (fake) HTLCs are not sound protocol design
What would you tell me if I presented a protocol that allowed for transfers of users' funds across a network of channels, in which those channels pledged to send the in-flight money to miners, and those payments could never be recovered if a node in the middle of the hop had a bug or decided to stop responding? Or in which the receiver could receive your payment but still claim he didn't, and you couldn't prove it at all?
These are the properties of "trimmed HTLCs": HTLCs that are uneconomical to have their own UTXO in the channel's presigned transaction bundles, and are therefore just assumed to be there while they are not (their amounts are instead added to the fees of the presigned transaction).
Trimmed HTLCs, like any other HTLC, have timelocks, preimages and hashes associated with them -- properties relevant to the redemption of actual HTLCs onchain -- but unlike actual HTLCs these things have no onchain meaning, since there is no onchain UTXO associated with them. This is a game of make-believe that only "works" because (1) payment proofs aren't worth anything anyway, so it makes no sense to steal them; (2) channels are too expensive to set up; (3) all Lightning Network users are honest; (4) there are so many bugs and so much confusion in a Lightning Network node's life that events related to trimmed HTLCs go unnoticed by users.
Also, so far these trimmed HTLCs have only been used for very small payments (although very small payments probably account for 99% of total payments), so it is supposedly "fine" to have them. But as fees rise, more and more HTLCs become fake, which may make people question the sanity of the design.
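To make "uneconomical" concrete, here's a rough sketch of the trimming rule: an offered HTLC gets no output in the commitment transaction if its value, after subtracting the fee of the HTLC-timeout transaction that would redeem it, falls below the dust limit. The weight constant is the pre-anchor figure from BOLT #3 as I remember it, so treat the exact numbers as approximate:
```go
package main

import "fmt"

// pre-anchor weight of the HTLC-timeout transaction, per BOLT #3
// (from memory; the HTLC-success transaction uses a slightly larger weight)
const htlcTimeoutWeight = 663

// isTrimmed reports whether an offered HTLC would be "fake":
// too small to be worth its own output in the commitment transaction.
func isTrimmed(amountSat, dustLimitSat, feeratePerKw int64) bool {
	htlcFee := feeratePerKw * htlcTimeoutWeight / 1000
	return amountSat < dustLimitSat+htlcFee
}

func main() {
	// e.g. with a 546 sat dust limit and a 7000 sat/kw feerate,
	// anything below ~5187 sat gets trimmed
	fmt.Println(isTrimmed(5000, 546, 7000)) // true
}
```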
Tadge Dryja, one of the creators of the Lightning Network proposal, has been critical of the fact that these things were allowed to creep into the BOLT protocol.
## Routing
Routing is already very bad today, even though most nodes have a basically 100% view of the public network. The reasons: some nodes are offline, others are on Tor and unreachable or too slow, channels have their balance shifted in the wrong direction -- so payments fail a lot. This led to the (bad) solution, invented by professional node runners and large businesses, of probing the network constantly in order to discard bad paths, which creates unnecessary load and increases the risk of channels being dropped for no good reason.
As the network grows -- if it indeed grows instead of centralizing into a few hubs -- routing tends to become harder and harder.
While each implementation team makes its own decisions with regard to the best way to route payments, and these decisions may change at any time, it's worth noting, for example, that CLN will use MPP to split any payment into chunks of 10k satoshis, supposedly to improve routing success rates. While this often backfires and causes payments to fail when they should have succeeded, it also makes for proportionally more fake HTLCs than there otherwise would be, as long as the trimming threshold is above 10k.
## Payment proofs are somewhat useless
Even though payment proofs were seen by many (including me) as one of the great things about Lightning, the sad fact is that they do not work as proofs if people are not aware that they are proofs. Wallets do all they can to hide these details from users, because it is considered "bad UX", and low-level implementors do not care very much to talk about them at all. There have been attempts from Lightning Labs to get rid of payment proofs entirely (which at the time sounded to me like a terrible idea, but now I realize they were not wrong).
Here's an anecdote: I've personally witnessed multiple episodes in which Phoenix wallet released the preimage without having actually received the payment (it did receive a minor part of it, as the payment was split into many parts). That caused my service, _@lntxbot_, to mark the outgoing payment as complete, and then I had to endure complaints from users because the receiving side, Phoenix, had not received the full amount. In these cases, if the protocol and the idea of preimages as payment proofs were to be respected, should I have been the one in charge of manually fixing user balances?
Another important detail: when an HTLC is sent and then something goes wrong with the payment, the channel has to be closed in order to redeem it. When the redeemer is on the receiver side, the very act of redeeming reveals the preimage, making a proof of payment available to the sender, who can then send it back to the previous hop, and the payment is proven without any doubt. But when this happens with fake HTLCs (the vast majority of payments, as noted above) there is no place in the world for a preimage, and therefore no proofs are available. A channel is just closed, and the payer loses money but can't prove the payment. He also can't send a proof back to the previous hop, so he is forced to say the payment failed -- even though he wasn't the one who declared that hop a failure and closed the channel, which should be a prerequisite. I wonder if this isn't the source of multiple bugs in implementations that cause channels to be closed unnecessarily. The point is: preimages and payment proofs are mostly a fiction.
Another important fact is that the proofs do not really prove anything if the keypair that signs the invoice can't be provably attached to a real world entity.
## LSP-centric design
The first Lightning wallets to show up in the market -- LND as a desktop daemon (later with some GUIs on top of it, like Zap and Joule), Anton's BLW and Eclair wallets for mobile devices, and later LND-based mobile wallets like Blixt and RawTX -- were all standalone wallets that were self-sufficient and meant to be run directly by consumers. Eventually, though, came Breez and Phoenix, which introduced the "LSP" model, in which a server is trusted in various forms -- not directly with users' funds, but with their privacy, fees and other details -- but most importantly is the primary source of channels for all users of that given wallet software. This was all fine, but as time passed new features were designed and implemented that assumed users would be running software connected to LSPs. The very idea of a user having a standalone mobile wallet was put out of the question. The entire argument for the implementation of the bolt12 standard, for example, hinged on the assumption that mobile wallets would have [LSPs capable of connecting to Google messaging services and being able to "wake up" mobile wallets](https://twitter.com/hampus_s/status/1442493786110705668) in order for them to receive payments. Other ideas, like a complicated standard for allowing mobile wallets to receive payments without having to be online all the time, just [assume LSPs always exist](https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-October/003307.html); likewise changes to the expected BOLT spec behavior with regard to, for example, [probing of mobile wallets](https://github.com/lightningnetwork/lnd/pull/4785).
Ark is another example of a kind of LSP that got so enshrined that it became a new protocol that depends on it entirely.
## Protocol complexity
Even though the general idea of how Lightning is supposed to work can be understood by many people (as long as they know how Bitcoin works), the Lightning protocol is not really easy: it takes a long time and big dedication for anyone to understand the details of the BOLTs -- a bad thing if we want a world of users who have at least an idea of what they are doing. Moreover, with each new cool idea someone has that gets adopted by the protocol leaders, complexity increases and some of the implementors are kicked out of the circle, which makes it easier for the remaining ones to proceed with more and more complexity. It's the same process by which Chrome won the browser wars: kick out all competitors and proceed with a supposedly open protocol that no one else can implement, as it gets new and more complex features every day, all envisioned by the Chrome team.
## Liquidity issues?
I don't believe these are a real problem if all the other things worked, but still, the old criticism that Lightning requires parking liquidity, and that this has a cost, is not a complete non-issue, especially given the LSP-centric model.
-
# idea: Custom multi-use database app
Since 2015 I have had this idea of making one app that could be repurposed into a full-fledged app for all kinds of uses, like powering small business accounts and so on. Hackable and open like an Excel file, but more efficient, without the hassle of making tables, and also using ids and indexes under the hood so different kinds of things can be related together in various ways.
It is not a concrete thing, just a generic idea that has taken multiple forms over the years and may take others in the future. I've made quite a few attempts at implementing it, but never finished any.
I used to refer to it as a "multidimensional spreadsheet".
Can also be related to [DabbleDB][dabble-db].
[dabble-db]: <https://en.wikipedia.org/wiki/Dabble_DB>
-
# On HTLCs and arbiters
This is another attempt at conveying the same information that should be in [Lightning and its fake HTLCs](nostr:naddr1qqyryefsxqcxgdmzqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cp0m63a). It assumes you know everything about Lightning and will just highlight one point. This is also valid for PTLCs.
The protocol says HTLCs are trimmed (i.e., not actually added to the commitment transaction) when the cost of redeeming them in fees would be greater than their actual value.
This is often dismissed as an unimportant fact (people will say "it's trusted for small payments, no big deal"), but I think it is indeed very important, for 3 reasons:
1. Lightning absolutely relies on HTLCs actually existing, because the payment proof requires them. The entire security of each payment comes from the fact that the payer has a preimage that comes from the payee. Without that, the state of the payment becomes an unsolvable mystery. The nonexistence of an HTLC breaks the atomicity between the payment going through and the payer receiving a proof.
2. Bitcoin fees are expected to grow with time (arguably the reason Lightning exists in the first place).
3. MPP makes payment sizes shrink, therefore more and more of Lightning payments are to be trimmed. As I write this, the mempool is clear and still payments smaller than about 5000sat are being trimmed. Two weeks ago the limit was at 18000sat, which is already below the minimum most MPP splitting algorithms will allow.
Therefore I think it is important that we come up with a different way of ensuring payment proofs are being passed around in the case HTLCs are trimmed.
## Channel closures
Worse than not having HTLCs that can be redeemed is the fact that in the current Lightning implementations channels will be closed by a peer once an HTLC timeout is reached, either to fulfill an HTLC for which that peer has a preimage or to take back an expired HTLC the other party hasn't fulfilled.
To everybody's surprise, nodes will do this even when the HTLCs in question were trimmed and therefore cannot be redeemed at all. It's very important that nodes stop doing that, because it makes no economic sense at all.
However, that is not so simple, because once you decide you're not going to close the channel, what is the next step? Do you wait until the other peer tries to fulfill an expired HTLC and tell them you won't agree and that you must cancel that instead? That could work sometimes if they're honest (and they have no incentive to not be, in this case). What if they say they tried to fulfill it before but you were offline? Now you're confused, you don't know if you were offline or they were offline, or if they are trying to trick you. Then unsolvable issues start to emerge.
## Arbiters
One simple idea is to use trusted arbiters for all trimmed HTLC issues.
This idea solves both the protocol issue of getting the preimage to the payer once it is released by the payee, and the issue of what to do with the channels once a trimmed HTLC expires.
A simple design would be to have each node hardcode a set of trusted other nodes that can serve as arbiters. Once a channel is opened between two nodes they choose one node from both lists to serve as their mutual arbiter for that channel.
Then whenever one node tries to fulfill an HTLC but the other peer is unresponsive, it can send the preimage to the arbiter instead. The arbiter will then try to contact the unresponsive peer. If it succeeds, done: the HTLC was fulfilled offchain. If it fails, it can keep trying until the HTLC timeout. And if the other node comes back later, it has to eat the loss; the arbiter will make sure it knows it is the one who must eat the loss in this case. If it doesn't agree to eat the loss, the first peer may then close the channel and blacklist the other peer. If the other peer believes that both the first peer and the arbiter are dishonest, it can remove that arbiter from its list of trusted arbiters.
The same happens in the opposite case: if a peer doesn't get a preimage, it can notify the arbiter that it hasn't received anything. The arbiter may ask the other peer for the preimage and, if that fails, settle the dispute in favor of the first peer, which can then proceed to fail the HTLC it has with someone else on that route.
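A sketch of what the fulfillment path could look like, with entirely made-up types (the arbiter keeps retrying until the HTLC timeout, after which the unresponsive side is designated to eat the loss):
```go
package main

import (
	"errors"
	"fmt"
	"time"
)

type Peer struct{ online bool }

// deliverPreimage simulates handing a preimage to a peer over the network
func (p *Peer) deliverPreimage(preimage string) error {
	if !p.online {
		return errors.New("peer unreachable")
	}
	return nil
}

type Arbiter struct{}

// settle retries delivery until the HTLC timeout; whoever the arbiter fails
// to reach is the side that must eat the loss
func (a *Arbiter) settle(unresponsive *Peer, preimage string, htlcTimeout time.Time) string {
	for time.Now().Before(htlcTimeout) {
		if unresponsive.deliverPreimage(preimage) == nil {
			return "HTLC fulfilled offchain"
		}
		time.Sleep(time.Second) // keep trying until the timeout
	}
	return "unresponsive peer must eat the loss or be blacklisted"
}

func main() {
	arb := &Arbiter{}
	fmt.Println(arb.settle(&Peer{online: false}, "preimage", time.Now().Add(3*time.Second)))
}
```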
-
# Multi-service Graph Reputation protocol
## The problem
1. Users inside centralized services need to know reputations of other users they're interacting with;
2. Building reputation with ratings imposes a big burden on the user and still accomplishes nothing: ratings can be faked, no one cares about them, etc.
## The ideal solution
Subjective reputation: reputation based on how you rated that person previously, how other people you trust rated that person, how other people trusted by people you trust rated that person, and so on, in a web-of-trust that can actually give you some insight into the trustworthiness of someone you never met or interacted with.
## The problem with the ideal solution
1. Most of the time the service that wants to implement this is not as big as Facebook, so it won't have enough people in it for such reputation graphs to be constructed.
2. It is not trivial to build.
## My proposed solution
I've drafted a protocol for an open system based on services publishing their internal reputation records and indexers using these to build graphs, and then serving the graphs back to the services so they can show them to users when it is needed (as HTTP APIs that can be called directly from the user client app or browser).
Crucially, these indexers will gather data from multiple services and cross-link users from these services so the graph is better.
<https://github.com/fiatjaf/multi-service-reputation-rfc>
The first (and single) piece of actionable and useful feedback I got, from [@bootstrapbandit](https://twitter.com/bootstrapbandit), was that services shouldn't share email addresses in plain text (email addresses, and other external relationships that users of a service may have, are necessary to establish links between users across services); I think it is ok if services publish hashes of these email addresses instead. At some point I will update the spec draft, and that may have happened before the time you're reading this.
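To illustrate, a published reputation record could look something like this (a made-up format; the actual one is whatever ends up in the spec draft, shown here with the hashed-email fix already applied):
```json
{
  "service": "somemarketplace.example.com",
  "user": "bob",
  "external_identifiers": ["<sha256 of bob's email address>"],
  "ratings": [
    {"from": "alice", "score": 1, "context": "completed trade", "date": "2021-03-05"}
  ]
}
```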
Another issue is that services may lie about their reputation records, and that will hurt other services and the users of those services who rely on that data. Maybe indexers will have to do some investigative work here to assert service honesty. Or maybe this entire protocol is just a failure and we will actually need a system in which users themselves publish their own records.
## See also
* [P2P reputation thing](nostr:naddr1qqyrqv3cxumnydfsqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cnjc88q)
* [idea: Graph subjective reputation as a service](nostr:naddr1qqyrjdehxymrsdpkqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cal60d8)
* <https://github.com/jangerritharms/reputation_systems>
-
# Idea for a centralized legal system, but with a bit of logic
a lawsuit is, essentially, I imagine in my lay naivety, an appeal made to the judge so that he recognizes certain facts as proof of a certain phenomenon typified by a certain law.
so I imagine the following:
a petition is no longer an enormous document written in disgusting language, with references to laws and to factual evidence scattered according to the lawyer's essayistic (in)ability, but just a logical scheme -- maybe even a drawn diagram (or who knows, a series of instructions comprehensible by a computer?) -- showing the connection between the law, the facts and the requests, for example:
1. such-and-such law says that no one may sell cigarettes
2. so-and-so sold cigarettes
3. proof that so-and-so sold cigarettes is the photo taken on such-and-such street on such-and-such day showing so-and-so selling cigarettes
4. the same law demands that so-and-so pay a fine
this example is still very wordy, but it's just a simple example. more complicated things would need other forms of expression if we want to avoid the long legal dissertations currently in vogue.
the idea is that the scheme above stands on its own. a proto-judge can rule it valid or invalid by its internal logic alone.
the other part of the judgment would be the connection of this scheme to external reality: the evidence would come attached to the petition. in this case, attached to point 3 would come the photo of so-and-so. the text of the law referred to also needs to be attached to point 1, but that can be done automatically from the law's number.
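in today's terms, that "series of instructions comprehensible by a computer" could be a simple structured document, something like this (a made-up format, just to illustrate):
```json
{
  "claims": [
    {"n": 1, "type": "law", "ref": "<law number>", "interpretation": "no one may sell cigarettes"},
    {"n": 2, "type": "fact", "statement": "so-and-so sold cigarettes"},
    {"n": 3, "type": "evidence", "proves": 2, "attachment": "<photo>"},
    {"n": 4, "type": "request", "basis": [1, 2], "remedy": "the fine prescribed by the law in 1"}
  ]
}
```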
once we have a valid logical scheme, another proto-judge, or several others, can judge each piece of evidence individually: check whether the text of the law matches the interpretation made in point 1, and whether the photo attached to point 3 really is the photo of the defendant selling cigarettes and not of a bear eating oranges.
each of these judgments can be made without the proto-judge knowing anything else about the case: the first proto-judge doesn't need to see the photo or the law, the second doesn't need to see the logical scheme or the photo, the third doesn't need to see the law or the logical scheme, and still at the end we would have a ruling on the merits of the petition, as impersonal and probably as fair as possible.
the defense would consist in pointing out errors in the logical scheme or flaws in the nexus between reality and the scheme. for example:
3. a photo like that is not proof that so-and-so sold anything, he could have been just passing by.
* he was indeed just passing by, as proven by this document showing his attendance at a law class at UFMG at that same time.
---
forgive me if I'm talking nonsense, but it's 5am and I'm still half asleep. obviously there are several problematic points here, and I want to understand them, but the general shape seems quite reasonable to me.
what I described above is a proposal, let's say, for a legal system that differs in nothing from our current legal system, except in form (not in the scholastic sense). it is also an attempt to understand its essence.
the advantages of this format over the current one are many:
- less paper, fewer things to read, no endless repetition of legal citations and extremely long dissertations written by illiterate lawyers who destroy everyone's language and intelligence
- a drastic reduction in the time each judge spends on each case
- a reduction in the power of each judge (if every act of human judgment needed in each case can be performed by any judge, without knowledge of the other aspects of the same case, everything is much faster, and each of these judgments can be made by several different judges, chosen at random)
- a reduction in the pompousness of each judge: with less power and simpler duties, a judge no longer needs to be a special person earning millions; he can be an ordinary person, a proto-judge, earning less (which would even make it possible to have more of them and increase the reliability of each ruling)
- judges can work from home and at any time
- a digital case-management system starts to make sense (because it's ridiculous that the current digital system is just a way of passing Word documents from one side to the other)
- the end of conciliation hearings, which are a monstrosity created only out of the need to reduce the number of pending cases, and which end up stripping justice of its meaning (the parties are lightly pressured to ignore whether their positions are valid or not and to strike a deal, under penalty of the judge getting angry at them later)
thousands of precautions would have to be taken if such a system were ever to be implemented (ahahah) -- perhaps keeping a traditional form of judgment, in person and with a judge or jury aware of the whole situation, but only for cases that get past a certain point, and so on.
## See also
* [P2P reputation thing](nostr:naddr1qqyrqv3cxumnydfsqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cnjc88q) as the foundation of an anarchic legal system.
-
# Why I don't like NIP-26 as a solution for key management
NIP-26 was created out of the needs of the Nostr integration at https://minds.com/. They wanted Minds users to be able to associate their "custodial" Nostr key with an external self-owned key. [NIP-26](https://github.com/nostr-protocol/nips/blob/master/26.md) looked like a nice fit for the job, because it would allow supporting clients to associate the two identities _statelessly_ (i.e. just by seeing one event published by Minds with a delegation tag on it, a client would be able to associate it with the self-owned external key, without anything else[^1]).
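For reference, a delegated event looks roughly like this (see NIP-26 for the exact conditions and signature format; the values here are placeholders):
```json
{
  "pubkey": "<the Minds custodial key>",
  "kind": 1,
  "content": "hello from Minds",
  "tags": [
    [
      "delegation",
      "<the self-owned external key>",
      "kind=1&created_at<1675000000",
      "<signature from the self-owned key over the delegation token>"
    ]
  ]
}
```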
The big selling point of NIP-26 (to me) was that it was fully _optional_. Clients were free to not implement it and they would not suffer much. They would just see that "bob@minds.com" published this and "bob-self-owned" published that; they would probably intuit that the two were the same person, or not, but it wouldn't be an issue. Both would still be identified as Bob and have a picture, a history and so on. Moreover, this wasn't expected to happen a lot; it would mostly serve the small intersection of people who wanted to have their own keys and also happened to be using one of these "custodial Nostr" platforms like Minds.
At some point, though, NIP-26 started to be seen as _the solution for key management_ on Nostr. The idea is that someone will generate a very safe key on a hardware device, guard it as their most precious treasure without it ever touching the internet, and use it only to sign delegation tags; then use multiple of these delegation tags, one for each different Nostr app, and maybe rotate them every month or so -- the details are unclear.
This breaks the previous expectations I had for NIP-26 entirely, as now these keys become faceless entities that can't be associated with anything _except their "master" key_ (the one that is in cold storage). So in a world in which most Nostr users use NIP-26 for everything, clients that do not implement NIP-26 become completely useless, as all they will see is a constant stream of random keys. They won't be able to follow or interact with anyone, as these keys won't identify any concrete person behind them; they will vanish all the time, new keys will show up, and the world will be chaotic. So now every client must implement NIP-26 to be usable at all: it is not _optional_ anymore.
You may argue that making NIP-26 a de facto mandatory NIP isn't a bad thing and is worth the cost, but I think it breaks a lot of the simplicity of the protocol. It might be worth the cost if we knew NIP-26 was an actual complete solution, but it definitely is not: it is partial and not the most elegant thing in the world. I think key management can be solved in multiple different ways that can all work together or not, but most importantly they can all remain optional.
More thoughts on these multiple ways can be found at [Thoughts on Nostr key management](nostr:naddr1qqyrwvnxx4jrzef5qyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cchlq3c).
If I am wrong about all this and we really come to the conclusion that we need a _de facto mandatory **key delegation**_ method for Nostr, so be it -- but in that case, considering that we would be breaking backwards-compatibility anyway, I think there might be a better design than NIP-26, more optimized and easier to implement, though I don't know exactly what it would look like. But I really think we shouldn't rush it.
[^1]: as opposed to other suggestions that would also work, but that would require dealing with multiple events -- for example, the external user could publish a new replaceable event (or use `kind:0`) to say they wanted to grandfather the Minds key into their umbrella, while the Minds key would also need to signal its acceptance of that. This also had the problem of requiring changes every time a new replaceable event of that kind was found. Although I am unsure now, at the time William and I agreed this was worse than NIP-26 with the delegation tag.
-
# LessPass remoteStorage
[LessPass](https://www.lesspass.com/) is a nice idea: a password manager without any state. Just remember one master password and you can generate a different one for every site using the power of hashes.
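A minimal sketch of the idea (not LessPass's actual algorithm or parameters, just the general shape): hash the master password with the site name as salt and render the result as printable characters, so every site gets a different deterministic password with no state stored anywhere:

```python
import base64
import hashlib

def derive(master: str, site: str) -> str:
    # stretch the master password, salted with the site name,
    # so each site gets a different deterministic password
    key = hashlib.pbkdf2_hmac("sha256", master.encode(), site.encode(), 100_000, dklen=18)
    return base64.b64encode(key).decode()

print(derive("one master password", "example.com"))
```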
But it has a very bad issue: some sites require just numbers, others have minimum or maximum character limits, some require non-letter or uppercase characters, others forbid these, and so on.
The solution: allow you to specify parameters when generating the password so you can fit a generated password to every service.
The problem with the solution: it creates state. Now you must remember what parameters you used when generating a password for each site.
This project was a way to store those settings in a [remoteStorage](https://wiki.remotestorage.io/Apps) bucket. Since the settings aren't confidential information in any way, that wasn't a problem, and I thought it was a good fit for remoteStorage.
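Per site, the stored settings would be something tiny like this (field names illustrative, roughly the knobs LessPass exposes):

```json
{
  "example.com": {
    "length": 12,
    "counter": 1,
    "lowercase": true,
    "uppercase": true,
    "digits": true,
    "symbols": false
  }
}
```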
Some time later I realized it might be better to have a centralized repository hosting all the weird password requirements each domain forces on its users, and let LessPass use data from that central place when generating a password. Still stateful, not ideal, and not very far from a centralized password manager, but requiring less trust and fewer cryptographic assumptions.
- <https://github.com/fiatjaf/lesspass-remotestorage>
- <https://addons.mozilla.org/firefox/addon/lesspass-remotestorage/>
- <https://chrome.google.com/webstore/detail/lesspass-remotestorage/aogdpopejodechblppdkpiimchbmdcmc>
- <https://lesspass.alhur.es/>
## See also
- [hledger-web](nostr:naddr1qqyrsefkvvck2efkqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cffvz7c)
- [TiddlyWiki remoteStorage](nostr:naddr1qqyxxve4x33nqerrqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cat32d3)
-
# Classless Templates
Way too many hours are wasted making themes for blogs. And then a new blog framework comes along and requires new themes. Old themes can't be used because they relied on different ways of rendering the website. Everything is a mess.
Classless was an attempt at solving it. It probably didn't work because I wasn't the best person to make themes and showcase the thing.
Basically everybody would agree on a simple HTML template that could fit blogs and simple websites very easily. Then other people would make pure-CSS themes expecting that template to be in place.
No classes were needed, only a fixed structure of `header`, `main`, `article`, etc.
With **flexbox** and **grid**, CSS alone was enough to make this happen.
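As an illustration (a hypothetical reduction, not the exact template the project specified), the shared skeleton would be something like the following, with themes targeting plain element selectors such as `body > header` or `main article` instead of classes:

```html
<!doctype html>
<html>
  <body>
    <header>
      <h1>Site title</h1>
      <nav><a href="/">home</a></nav>
    </header>
    <main>
      <article>
        <header><h2>Post title</h2></header>
        <p>Post content.</p>
      </article>
    </main>
    <footer>footer stuff</footer>
  </body>
</html>
```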
The templates that were available were all ported by me from other templates I saw on the web, plus a simple one I created for my old website.
- <https://github.com/fiatjaf/classless>
- <https://classless.alhur.es/>
- <https://classless.alhur.es/themes/>
-
# doulas.club
A full catalog of all Brazilian doulas, with data carefully scraped from many websites that contained partial catalogs, plus some data entered manually. All this packaged as a _Couchapp_ and served directly from **Cloudant**.
This was done because the idea of doulas was good, but I spotted an issue: pregnant women should get to know many doulas before choosing one that matches them well, so a full catalog with a lot of information was necessary.
This was a huge amount of work mostly wasted.
Many doulas who found out about the catalog didn't like it and sent angry and offensive emails telling me to remove them -- even though it contained exactly the kind of information one should know before choosing a doula.
## See also
- [About CouchDB](nostr:naddr1qqyrwepevf3n2wf5qyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c0jq39e)
-
# jiq
When someone created [`jid`](https://github.com/simeji/jid) claiming it had "jq queries", I went to inspect it and realized it didn't: it just had a poor, simple JSON query language that implemented 1% of all [`jq`](https://stedolan.github.io/jq/manual/) features. So I forked it, plugged `jq` directly into it, and renamed it `jiq`.
After some comments on issues in the original repository from people complaining about the lack of `jq` compatibility, it got a ton of unexpected users, and it was even packaged for Arch Linux.
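Usage, if I remember the README correctly, is just piping JSON in and typing a `jq` filter interactively while the results update live:

```
cat some.json | jiq
```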
![](https://raw.githubusercontent.com/fiatjaf/jiq/master/screencast.gif)
- <https://github.com/fiatjaf/jiq>