-
The following script uses [nak](https://github.com/fiatjaf/nak) to find the people who have most recently followed a `target_pubkey`, sorted from newest to oldest. You can shorten `search_timerange` to speed up the search.
```
#!/usr/bin/env fish
# Target pubkey we're looking for in the tags
set target_pubkey "6e468422dfb74a5738702a8823b9b28168abab8655faacb6853cd0ee15deee93"
set current_time (date +%s)
set search_timerange (math $current_time - 600) # last 10 minutes; use 86400 for 24 hours

# Find pubkeys that recently published a contact list (kind 3) tagging the target
set pubkeys (nak req --kind 3 -s $search_timerange wss://relay.damus.io/ wss://nos.lol/ 2>/dev/null | \
    jq -r --arg target "$target_pubkey" '
        select(. != null and type == "object" and has("tags")) |
        select(.tags[] | select(.[0] == "p" and .[1] == $target)) |
        .pubkey
    ' | sort -u)

if test -z "$pubkeys"
    exit 1
end

set all_events ""
set extended_search_timerange (math $current_time - 31536000) # One year

for pubkey in $pubkeys
    echo "Checking $pubkey"
    set events (nak req --author $pubkey -l 5 -k 3 -s $extended_search_timerange wss://relay.damus.io wss://nos.lol 2>/dev/null | \
        jq -c --arg target "$target_pubkey" '
            select(. != null and type == "object" and has("tags")) |
            select(.tags[][] == $target)
        ' 2>/dev/null)
    # If only the newest contact list mentions the target, the follow is likely recent;
    # older lists from the same author would also contain it otherwise.
    set count (echo "$events" | jq -s 'length')
    if test "$count" -eq 1
        set all_events $all_events $events
    end
end

if test -n "$all_events"
    echo -e "Last people following $target_pubkey:"
    echo -e ""
    set sorted_events (printf "%s\n" $all_events | jq -r -s '
        unique_by(.id) |
        sort_by(-.created_at) |
        .[] | @json
    ')
    for event in $sorted_events
        set npub (echo $event | jq -r '.pubkey' | nak encode npub)
        set created_at (echo $event | jq -r '.created_at')
        if test (uname) = "Darwin"
            set follow_date (date -r "$created_at" "+%Y-%m-%d %H:%M")
        else
            set follow_date (date -d @"$created_at" "+%Y-%m-%d %H:%M")
        end
        echo "$follow_date - $npub"
    end
end
```
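To try it, save the script (for example as `recent-followers.fish`; the filename is arbitrary), make it executable, and run it. This assumes `fish`, `nak`, and `jq` are already installed and on your PATH:
```
chmod +x recent-followers.fish
./recent-followers.fish
```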
-
This article aims to complement Lyn Alden's video on YouTube: https://www.youtube.com/watch?v=jk_HWmmwiAs
## The reason why we have broken money
Before the invention of key technologies such as the printing press and electronic communications, even ones as early as Morse code transmitters, gold had won the competition to be the best medium of money around the world.
In fact, it was not just gold by itself that became money; rulers and world leaders developed coins in order to help the economy grow. Gold nuggets were not as easy to transact with as coins with specific imprints and denominated sizes.
However, these modern technologies created massive efficiencies that allowed us to communicate and perform services much faster, yet the medium of money could not benefit from these advancements. Gold was heavy, slow, and expensive to move globally, even though requesting and performing services globally no longer had this limitation.
Banks took the initiative and created derivatives of gold: paper and electronic money. These new currencies allowed the economy to continue to grow and evolve, but not without a dark side. Today, no currency is denominated in gold at all; money is backed by nothing, and its inherent value, the paper it is printed on, is worthless too.
Banks and governments eventually transitioned from a money derivative to a system of debt that could be co-opted and controlled for political and personal reasons. Our money today is broken, and it is the cause of more expensive, poorer-quality goods in the economy, a larger and ever-growing wealth gap, and many of the follow-on problems that have come with it.
## Bitcoin overcomes the "transfer of hard money" problem
Just like gold coins, Bitcoin is a technology created by man. Bitcoin, however, is a much more profound invention; in fact, it is possibly more of a discovery than an invention. Bitcoin has proven to be unbreakable and incorruptible, and it has upheld its ability to keep its units scarce, inalienable, and counterfeit-proof through the nature of its own design.
Since Bitcoin is a digital technology, it can be transferred across international borders almost as quickly as information itself. It therefore severely reduces the need for a derivative to represent money in digital trade. This means that as the currency we use today continues to fare poorly for many people, bitcoin will continue to stand out as hard money that, functionally, happens to work just as well alongside it.
Bitcoin will also always be available to anyone who wishes to earn it directly; even China is unable to restrict its citizens from accessing it. The dollar has traditionally been the currency people turn to when they discover that their local currency is unsustainable. Even where the dollar is illegal to use, it is simply used privately and unofficially. However, because bitcoin does not require you to trade it at a bank in order to use it across borders and across the web, Bitcoin will continue to be a viable escape hatch until we one day hit some critical mass where the world has simply adopted Bitcoin globally and everyone else must adopt it to survive.
Bitcoin has not yet proven that it can support the world at scale. However, it can only be tested through real adoption, and just as gold coins were developed to help gold scale, tools will be developed to help overcome problems as they arise; ideally without the need for another derivative, but if necessary, hopefully with one that is more neutral and less corruptible than the derivatives used to represent gold.
## Bitcoin blurs the line between commodity and technology
Bitcoin is a technology: a tool that requires human involvement to function. Surprisingly, however, it does not allow for any concentration of power. Anyone can help to facilitate Bitcoin's operations, but no one can take control of its behaviour, its reach, or its prioritisation, as it operates autonomously based on a pre-determined, neutral set of rules.
At the same time, its built-in incentive mechanism ensures that people do not have to operate bitcoin out of the goodness of their hearts. Even though the system cannot be co-opted holistically, it will not stop operating while there are people motivated to trade their time and resources to keep it running and earn from others' transaction fees. Although it requires humans to operate it, it remains both neutral and sustainable.
Never before have we developed or discovered a technology that could not be co-opted and used by one person or faction against another. Due to this nature, Bitcoin's units are often described as a commodity; they cannot be usurped or virtually cloned, and they cannot be affected by political biases.
## The dangers of derivatives
A derivative is something created, designed or developed to represent another thing in order to solve a particular complication or problem. For example, paper and electronic money were once derivatives of gold.
In the case of Bitcoin, if you cannot link your units of bitcoin to an "address" that you personally hold a cryptographically secure key to, then you very likely have a derivative of bitcoin, not bitcoin itself. If you buy bitcoin on an online exchange and do not withdraw the bitcoin to a wallet that you control, then you legally own an electronic derivative of bitcoin.
Bitcoin is a new technology. It will have a learning curve, and it will take time for humanity to learn how to comprehend, authenticate and take control of bitcoin collectively. Having said that, many people all over the world are already using and relying on Bitcoin natively. Many people will need to find a need or desire for a neutral money like bitcoin, and perhaps be burned by derivatives of it, before they start to understand the difference between the two. Eventually, it will become an essential part of what we regard as common sense.
## Learn for yourself
If you wish to learn more about how to handle bitcoin and avoid derivatives, you can start by searching online for tutorials about "Bitcoin self custody".
There are many options available, some more practical for you, some more practical for others. Don't spend too much time trying to find the perfect solution; practice and learn. You may make mistakes, so be careful not to experiment with large amounts of your bitcoin while you explore new ideas and technologies. This is similar to learning anything else, like riding a bicycle: you are sure to fall a few times and scuff the frame, so don't buy a high-performance racing bike while you're still learning to balance.
-
# Instructions
1. Place 2 medium-sized, boiled potatoes and a handful of sliced leeks in a pot.
![veggies](https://i.nostr.build/4uRAuNZOXAa8nyh3.jpg)
2. Fill the pot with water or vegetable broth, to cover the potatoes twice over.
3. Add a splash of white wine, if you like, and some bouillon powder, if you went with water instead of broth.
![wine](https://i.nostr.build/a5aJuPvqNT86b9od.jpg)
4. Bring the soup to a boil and then simmer for 15 minutes.
5. Puree the soup, in the pot, with a hand mixer. It shouldn't be completely smooth when you're done, but rather have small bits and pieces of the veggies floating around.
![puree](https://i.nostr.build/mx3gApObWecomJ5O.jpg)
6. Bring the soup to a boil, again, and stir in one container (200-250 mL) of heavy cream.
7. Thicken the soup, as needed, and then simmer for 5 more minutes.
8. Garnish with croutons and veggies (here I used sliced green onions and radishes) and serve.
![soup](https://i.nostr.build/2WOAeFnriTH1gvFQ.jpg)
Guten Appetit!
-
New Year’s resolutions often feel boring and repetitive. Most revolve around getting in shape, eating healthier, or giving up alcohol. While the idea is interesting—using the start of a new calendar year as a catalyst for change—it also seems unnecessary. Why wait for a specific date to make a change? If you want to improve something in your life, you can just do it. You don’t need an excuse.
That’s why I’ve never been drawn to the idea of making a list of resolutions. If I wanted a change, I’d make it happen, without worrying about the calendar. At least, that’s how I felt until now—when, for once, the timing actually gave me a real reason to embrace the idea of New Year’s resolutions.
Enter [Olas](https://olas.app).
If you're a visual creator, you've likely experienced the relentless grind of building a following on platforms like Instagram—endless doomscrolling, ever-changing algorithms, and the constant pressure to stay relevant. But what if there was a better way? Olas is a Nostr-powered alternative to Instagram that prioritizes community, creativity, and value-for-value exchanges. It's a game changer.
Instagram’s failings are well-known. Its algorithm often dictates whose content gets seen, leaving creators frustrated and powerless. Monetization hurdles further alienate creators who are forced to meet arbitrary follower thresholds before earning anything. Additionally, the platform’s design fosters endless comparisons and exposure to negativity, which can take a significant toll on mental health.
Instagram’s algorithms are notorious for keeping users hooked, often at the cost of their mental health. I've spoken about this extensively, most recently at Nostr Valley, explaining how legacy social media is bad for you. You might find yourself scrolling through content that leaves you feeling anxious or drained. Olas takes a fresh approach, replacing "doomscrolling" with "bloomscrolling." This is a common theme across the Nostr ecosystem. The lack of addictive rage algorithms allows the focus to shift to uplifting, positive content that inspires rather than exhausts.
Monetization is another area where Olas will set itself apart. On Instagram, creators face arbitrary barriers to earning—needing thousands of followers and adhering to restrictive platform rules. Olas eliminates these hurdles by leveraging the Nostr protocol, enabling creators to earn directly through value-for-value exchanges. Fans can support their favorite artists instantly, with no delays or approvals required. The plan is to enable a brand new Olas account that can get paid instantly, with zero followers - that's wild.
Olas addresses these issues head-on. Operating on the open Nostr protocol, it removes centralized control over your content's reach and your ability to monetize. With transparent, configurable algorithms and a community that thrives on mutual support, Olas creates an environment where creators can grow and succeed without unnecessary barriers.
Join me on my New Year's resolution. Join me on Olas and take part in the [#Olas365](https://olas.app/search/olas365) challenge! It’s a simple yet exciting way to share your content. The challenge is straightforward: post at least one photo per day on Olas (though you’re welcome to share more!).
[Download on iOS](https://testflight.apple.com/join/2FMVX2yM).
[Download on Android](https://github.com/pablof7z/olas/releases/) or download via Zapstore.
Let's make waves together.
-
## The Rise of Graph RAGs and the Quest for Data Quality
As we enter a new year, it’s impossible to ignore the boom of retrieval-augmented generation (RAG) systems, particularly those leveraging graph-based approaches. The previous year saw a surge in advancements and discussions about Graph RAGs, driven by their potential to enhance large language models (LLMs), reduce hallucinations, and deliver more reliable outputs. Let’s dive into the trends, challenges, and strategies for making the most of Graph RAGs in artificial intelligence.
## Booming Interest in Graph RAGs
Graph RAGs have dominated the conversation in AI circles. With new research papers and innovations emerging weekly, it’s clear that this approach is reshaping the landscape. These systems, especially those developed by tech giants like Microsoft, demonstrate how graphs can:
* **Enhance LLM Outputs:** By grounding responses in structured knowledge, graphs significantly reduce hallucinations.
* **Support Complex Queries:** Graphs excel at managing linked and connected data, making them ideal for intricate problem-solving.
Conferences on linked and connected data have increasingly focused on Graph RAGs, underscoring their central role in modern AI systems. However, the excitement around this technology has brought critical questions to the forefront: How do we ensure the quality of the graphs we’re building, and are they genuinely aligned with our needs?
## Data Quality: The Foundation of Effective Graphs
A high-quality graph is the backbone of any successful RAG system. Constructing these graphs from unstructured data requires attention to detail and rigorous processes. Here’s why:
* **Richness of Entities:** Effective retrieval depends on graphs populated with rich, detailed entities.
* **Freedom from Hallucinations:** Poorly constructed graphs amplify inaccuracies rather than mitigating them.
Without robust data quality, even the most sophisticated Graph RAGs become ineffective. As a result, the focus must shift to refining the graph construction process. Improving data strategy and ensuring meticulous data preparation are essential to unlocking the full potential of Graph RAGs.
## Hybrid Graph RAGs and Variations
While standard Graph RAGs are already transformative, hybrid models offer additional flexibility and power. Hybrid RAGs combine structured graph data with other retrieval mechanisms, creating systems that:
* Handle diverse data sources with ease.
* Offer improved adaptability to complex queries.
Exploring these variations can open new avenues for AI systems, particularly in domains requiring structured and unstructured data processing.
## Ontology: The Key to Graph Construction Quality
Ontology — defining how concepts relate within a knowledge domain — is critical for building effective graphs. While this might sound abstract, it’s a well-established field blending philosophy, engineering, and art. Ontology engineering provides the framework for:
* **Defining Relationships:** Clarifying how concepts connect within a domain.
* **Validating Graph Structures:** Ensuring constructed graphs are logically sound and align with domain-specific realities.
Traditionally, ontologists — experts in this discipline — have been integral to large enterprises and research teams. However, not every team has access to dedicated ontologists, leading to a significant challenge: How can teams without such expertise ensure the quality of their graphs?
## How to Build Ontology Expertise in a Startup Team
For startups and smaller teams, developing ontology expertise may seem daunting, but it is achievable with the right approach:
1. **Assign a Knowledge Champion:** Identify a team member with a strong analytical mindset and give them time and resources to learn ontology engineering.
2. **Provide Training:** Invest in courses, workshops, or certifications in knowledge graph and ontology creation.
3. **Leverage Partnerships:** Collaborate with academic institutions, domain experts, or consultants to build initial frameworks.
4. **Utilize Tools:** Introduce ontology tooling and standards like Protégé, OWL, or SHACL to simplify the creation and validation process.
5. **Iterate with Feedback:** Continuously refine ontologies through collaboration with domain experts and iterative testing.
So, it is not always affordable for a startup to have a dedicated ontologist or knowledge engineer on the team, but you could involve consultants or grow "barefoot" experts instead.
You can read about barefoot experts in my article:
Even startups can achieve robust and domain-specific ontology frameworks by fostering in-house expertise.
## How to Find or Create Ontologies
For teams venturing into Graph RAGs, several strategies can help address the ontology gap:
1. **Leverage Existing Ontologies:** Many industries and domains already have open ontologies. For instance:
* **Public Knowledge Graphs:** Resources like Wikipedia’s graph offer a wealth of structured knowledge.
* **Industry Standards:** Enterprises such as Siemens have invested in creating and sharing ontologies specific to their fields.
* **Business Framework Ontology (BFO):** A valuable resource for enterprises looking to define business processes and structures.
2. **Build In-House Expertise:** If budgets allow, consider hiring knowledge engineers or providing team members with the resources and time to develop expertise in ontology creation.
3. **Utilize LLMs for Ontology Construction:** Interestingly, LLMs themselves can act as a starting point for ontology development:
* **Prompt-Based Extraction:** LLMs can generate draft ontologies by leveraging their extensive training on graph data.
* **Domain Expert Refinement:** Combine LLM-generated structures with insights from domain experts to create tailored ontologies.
## Parallel Ontology and Graph Extraction
An emerging approach involves extracting ontologies and graphs in parallel. While this can streamline the process, it presents challenges such as:
* **Detecting Hallucinations:** Differentiating between genuine insights and AI-generated inaccuracies.
* **Ensuring Completeness:** Ensuring no critical concepts are overlooked during extraction.
Teams must carefully validate outputs to ensure reliability and accuracy when employing this parallel method.
## LLMs as Ontologists
While traditionally dependent on human expertise, ontology creation is increasingly supported by LLMs. These models, trained on vast amounts of data, possess inherent knowledge of many open ontologies and taxonomies. Teams can use LLMs to:
* **Generate Skeleton Ontologies:** Prompt LLMs with domain-specific information to draft initial ontology structures.
* **Validate and Refine Ontologies:** Collaborate with domain experts to refine these drafts, ensuring accuracy and relevance.
However, for validation and graph construction, formal tools such as OWL, SHACL, and RDF should be prioritized over LLMs to minimize hallucinations and ensure robust outcomes.
## Final Thoughts: Unlocking the Power of Graph RAGs
The rise of Graph RAGs underscores a simple but crucial correlation: improving graph construction and data quality directly enhances retrieval systems. To truly harness this power, teams must invest in understanding ontologies, building quality graphs, and leveraging both human expertise and advanced AI tools.
As we move forward, the interplay between Graph RAGs and ontology engineering will continue to shape the future of AI. Whether through adopting existing frameworks or exploring innovative uses of LLMs, the path to success lies in a deep commitment to data quality and domain understanding.
Have you explored these technologies in your work? Share your experiences and insights — and stay tuned for more discussions on ontology extraction and its role in AI advancements. Cheers to a year of innovation!
-
## The Four-Layer Framework
### Layer 1: Zoom Out
![](http://hedgedoc.malin.onl/uploads/bf583a95-79b0-4efe-a194-d6a8b80d6f8a.png)
Start by looking at the big picture. What’s the subject about, and why does it matter? Focus on the overarching ideas and how they fit together. Think of this as the 30,000-foot view—it’s about understanding the "why" and "how" before diving into the "what."
**Example**: If you’re learning programming, start by understanding that it’s about giving logical instructions to computers to solve problems.
- **Tip**: Keep it simple. Summarize the subject in one or two sentences and avoid getting bogged down in specifics at this stage.
_Once you have the big picture in mind, it’s time to start breaking it down._
---
### Layer 2: Categorize and Connect
![](http://hedgedoc.malin.onl/uploads/5c413063-fddd-48f9-a65b-2cd374340613.png)
Now it’s time to break the subject into categories—like creating branches on a tree. This helps your brain organize information logically and see connections between ideas.
**Example**: Studying biology? Group concepts into categories like cells, genetics, and ecosystems.
- **Tip**: Use headings or labels to group similar ideas. Jot these down in a list or simple diagram to keep track.
_With your categories in place, you’re ready to dive into the details that bring them to life._
---
### Layer 3: Master the Details
![](http://hedgedoc.malin.onl/uploads/55ad1e7e-a28a-42f2-8acb-1d3aaadca251.png)
Once you’ve mapped out the main categories, you’re ready to dive deeper. This is where you learn the nuts and bolts—like formulas, specific techniques, or key terminology. These details make the subject practical and actionable.
**Example**: In programming, this might mean learning the syntax for loops, conditionals, or functions in your chosen language.
- **Tip**: Focus on details that clarify the categories from Layer 2. Skip anything that doesn’t add to your understanding.
_Now that you’ve mastered the essentials, you can expand your knowledge to include extra material._
---
### Layer 4: Expand Your Horizons
![](http://hedgedoc.malin.onl/uploads/7ede6389-b429-454d-b68a-8bae607fc7d7.png)
Finally, move on to the extra material—less critical facts, trivia, or edge cases. While these aren’t essential to mastering the subject, they can be useful in specialized discussions or exams.
**Example**: Learn about rare programming quirks or historical trivia about a language’s development.
- **Tip**: Spend minimal time here unless it’s necessary for your goals. It’s okay to skim if you’re short on time.
---
## Pro Tips for Better Learning
### 1. Use Active Recall and Spaced Repetition
Test yourself without looking at notes. Review what you’ve learned at increasing intervals—like after a day, a week, and a month. This strengthens memory by forcing your brain to actively retrieve information.
### 2. Map It Out
Create visual aids like [diagrams or concept maps](https://excalidraw.com/) to clarify relationships between ideas. These are particularly helpful for organizing categories in Layer 2.
### 3. Teach What You Learn
Explain the subject to someone else as if they’re hearing it for the first time. Teaching **exposes any gaps** in your understanding and **helps reinforce** the material.
### 4. Engage with LLMs and Discuss Concepts
Take advantage of tools like ChatGPT or similar large language models to **explore your topic** in greater depth. Use these tools to:
- Ask specific questions to clarify confusing points.
- Engage in discussions to simulate real-world applications of the subject.
- Generate examples or analogies that deepen your understanding.
**Tip**: Use LLMs as a study partner, but don’t rely solely on them. Combine these insights with your own critical thinking to develop a well-rounded perspective.
---
## Get Started
Ready to try the Four-Layer Method? Take 15 minutes today to map out the big picture of a topic you’re curious about—what’s it all about, and why does it matter? By building your understanding step by step, you’ll master the subject with less stress and more confidence.
-
Here are my predictions for Nostr in 2025:
**Decentralization:** The outbox and inbox communication models, sometimes referred to as the Gossip model, will become the standard across the ecosystem. By the end of 2025, all major clients will support these models, providing seamless communication and enhanced decentralization. Clients that do not adopt outbox/inbox by then will be regarded as outdated or legacy systems.
**Privacy Standards:** Major clients such as Damus and Primal will move away from NIP-04 DMs, adopting more secure protocol possibilities like NIP-17 or NIP-104. These upgrades will ensure enhanced encryption and metadata protection. Additionally, NIP-104 MLS tools will drive the development of new clients and features, providing users with unprecedented control over the privacy of their communications.
**Interoperability:** Nostr's ecosystem will become even more interconnected. Platforms like the Olas image-sharing service will expand into prominent clients such as Primal, Damus, Coracle, and Snort, alongside existing integrations with Amethyst, Nostur, and Nostrudel. Similarly, audio and video tools like Nostr Nests and Zap.stream will gain seamless integration into major clients, enabling easy participation in live events across the ecosystem.
**Adoption and Migration:** Inspired by early pioneers like Fountain and Orange Pill App, more platforms will adopt Nostr for authentication, login, and social systems. In 2025, a significant migration from a high-profile application platform with hundreds of thousands of users will transpire, doubling Nostr’s daily activity and establishing it as a cornerstone of decentralized technologies.
-
We didn't hear them land on earth, nor did we see them. The spores were not visible to the naked eye. Like dust particles, they softly fell, unhindered, through our atmosphere, covering the earth. It took us a while to realize that something extraordinary was happening on our planet. In most places, the mushrooms didn't grow at all. The conditions weren't right. In some places—mostly rocky places—they grew large enough to be noticeable. People all over the world posted pictures online. "White eggs," they called them. It took a bit until botanists and mycologists took note. Most didn't realize that we were dealing with a species unknown to us.
We aren't sure who sent them. We aren't even sure if there is a "who" behind the spores. But once the first portals opened up, we learned that these mushrooms aren't just a quirk of biology. The portals were small at first—minuscule, even. Like a pinhole camera, we were able to glimpse through, but we couldn't make out much. We were only able to see colors and textures if the conditions were right. We weren't sure what we were looking at.
We still don't understand why some mushrooms open up, and some don't. Most don't. What we do know is that they like colder climates and high elevations. What we also know is that the portals don't stay open for long. Like all mushrooms, the flush only lasts for a week or two. When a portal opens, it looks like the mushroom is eating a hole into itself at first. But the hole grows, and what starts as a shimmer behind a grey film turns into a clear picture as the egg ripens. When conditions are right, portals will remain stable for up to three days. Once the fruit withers, the portal closes, and the mushroom decays.
The eggs grew bigger year over year. And with it, the portals. Soon enough, the portals were big enough to stick your finger through. And that's when things started to get weird...
-
### **English**
#### Introducing YakiHonne: Write Without Limits
YakiHonne is the ultimate text editor designed to help you express yourself creatively, no matter the language.
**Features you'll love:**
- 🌟 **Rich Formatting**: Add headings, bold, italics, and more.
- 🌏 **Multilingual Support**: Seamlessly write in English, Chinese, Arabic, and Japanese.
- 🔗 **Interactive Links**: [Learn more about YakiHonne](https://yakihonne.com).
**Benefits:**
1. Easy to use.
2. Enhance readability with customizable styles.
3. Supports various complex formats including LaTeX.
> "YakiHonne is a game-changer for content creators."
-
Today I learned how to install [NVapi](https://github.com/sammcj/NVApi) to monitor my GPUs in Home Assistant.
![](https://image.nostr.build/82b86710ef613f285452f4bb6e2a30a16e722db04ec297279c5b476e0c13d9f4.png)
**NVApi** is a lightweight API designed for monitoring NVIDIA GPU utilization and enabling automated power management. It provides real-time GPU metrics, supports integration with tools like Home Assistant, and offers flexible power management and PCIe link speed management based on workload and thermal conditions.
- **GPU Utilization Monitoring**: Utilization, memory usage, temperature, fan speed, and power consumption.
- **Automated Power Limiting**: Adjusts power limits dynamically based on temperature thresholds and total power caps, configurable per GPU or globally.
- **Cross-GPU Coordination**: Total power budget applies across multiple GPUs in the same system.
- **PCIe Link Speed Management**: Controls minimum and maximum PCIe link speeds with idle thresholds for power optimization.
- **Home Assistant Integration**: Uses the built-in RESTful platform and template sensors.
## Getting the Data
```
sudo apt install golang-go
git clone https://github.com/sammcj/NVApi.git
cd NVApi
go run main.go -port 9999 -rate 1
# In a separate terminal, query the API:
curl http://localhost:9999/gpu
```
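`go run` compiles and starts the server in one step. If you'd rather keep a standalone binary around, a plain `go build` also works; this is just an optional variation using the same `-port` and `-rate` flags shown above:
```
go build -o nvapi main.go
./nvapi -port 9999 -rate 1
```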
Response for a single GPU:
```
[
{
"index": 0,
"name": "NVIDIA GeForce RTX 4090",
"gpu_utilisation": 0,
"memory_utilisation": 0,
"power_watts": 16,
"power_limit_watts": 450,
"memory_total_gb": 23.99,
"memory_used_gb": 0.46,
"memory_free_gb": 23.52,
"memory_usage_percent": 2,
"temperature": 38,
"processes": [],
"pcie_link_state": "not managed"
}
]
```
Response for multiple GPUs:
```
[
{
"index": 0,
"name": "NVIDIA GeForce RTX 3090",
"gpu_utilisation": 0,
"memory_utilisation": 0,
"power_watts": 14,
"power_limit_watts": 350,
"memory_total_gb": 24,
"memory_used_gb": 0.43,
"memory_free_gb": 23.57,
"memory_usage_percent": 2,
"temperature": 36,
"processes": [],
"pcie_link_state": "not managed"
},
{
"index": 1,
"name": "NVIDIA RTX A4000",
"gpu_utilisation": 0,
"memory_utilisation": 0,
"power_watts": 10,
"power_limit_watts": 140,
"memory_total_gb": 15.99,
"memory_used_gb": 0.56,
"memory_free_gb": 15.43,
"memory_usage_percent": 3,
"temperature": 41,
"processes": [],
"pcie_link_state": "not managed"
}
]
```
## Start at Boot
Create `/etc/systemd/system/nvapi.service`:
```
[Unit]
Description=Run NVapi
After=network.target
[Service]
Type=simple
Environment="GOPATH=/home/ansible/go"
WorkingDirectory=/home/ansible/NVapi
ExecStart=/usr/bin/go run main.go -port 9999 -rate 1
Restart=always
User=ansible
# Environment="GPU_TEMP_CHECK_INTERVAL=5"
# Environment="GPU_TOTAL_POWER_CAP=400"
# Environment="GPU_0_LOW_TEMP=40"
# Environment="GPU_0_MEDIUM_TEMP=70"
# Environment="GPU_0_LOW_TEMP_LIMIT=135"
# Environment="GPU_0_MEDIUM_TEMP_LIMIT=120"
# Environment="GPU_0_HIGH_TEMP_LIMIT=100"
# Environment="GPU_1_LOW_TEMP=45"
# Environment="GPU_1_MEDIUM_TEMP=75"
# Environment="GPU_1_LOW_TEMP_LIMIT=140"
# Environment="GPU_1_MEDIUM_TEMP_LIMIT=125"
# Environment="GPU_1_HIGH_TEMP_LIMIT=110"
[Install]
WantedBy=multi-user.target
```
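After creating the unit file, reload systemd and enable the service so it starts now and at every boot (adjust the unit name if you called it something else):
```
sudo systemctl daemon-reload
sudo systemctl enable --now nvapi.service
```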
## Home Assistant
Add to Home Assistant `configuration.yaml` and restart HA (completely).
For a single GPU, this works:
```
sensor:
  - platform: rest
    name: MYPC GPU Information
    resource: http://mypc:9999
    method: GET
    headers:
      Content-Type: application/json
    value_template: "{{ value_json[0].index }}"
    json_attributes:
      - name
      - gpu_utilisation
      - memory_utilisation
      - power_watts
      - power_limit_watts
      - memory_total_gb
      - memory_used_gb
      - memory_free_gb
      - memory_usage_percent
      - temperature
    scan_interval: 1 # seconds
  - platform: template
    sensors:
      mypc_gpu_0_gpu:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} GPU"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'gpu_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_memory:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Memory"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'memory_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_power:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Power"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'power_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_power_limit:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Power Limit"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'power_limit_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_temperature:
        friendly_name: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Temperature"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'temperature') }}"
        unit_of_measurement: "°C"
```
For multiple GPUs:
```
rest:
  scan_interval: 1
  resource: http://mypc:9999
  sensor:
    - name: "MYPC GPU0 Information"
      value_template: "{{ value_json[0].index }}"
      json_attributes_path: "$.0"
      json_attributes:
        - name
        - gpu_utilisation
        - memory_utilisation
        - power_watts
        - power_limit_watts
        - memory_total_gb
        - memory_used_gb
        - memory_free_gb
        - memory_usage_percent
        - temperature
    - name: "MYPC GPU1 Information"
      value_template: "{{ value_json[1].index }}"
      json_attributes_path: "$.1"
      json_attributes:
        - name
        - gpu_utilisation
        - memory_utilisation
        - power_watts
        - power_limit_watts
        - memory_total_gb
        - memory_used_gb
        - memory_free_gb
        - memory_usage_percent
        - temperature

sensor:
  - platform: template
    sensors:
      mypc_gpu_0_gpu:
        friendly_name: "MYPC GPU0 GPU"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'gpu_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_memory:
        friendly_name: "MYPC GPU0 Memory"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'memory_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_power:
        friendly_name: "MYPC GPU0 Power"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'power_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_power_limit:
        friendly_name: "MYPC GPU0 Power Limit"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'power_limit_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_temperature:
        friendly_name: "MYPC GPU0 Temperature"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'temperature') }}"
        unit_of_measurement: "°C"
  - platform: template
    sensors:
      mypc_gpu_1_gpu:
        friendly_name: "MYPC GPU1 GPU"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'gpu_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_1_memory:
        friendly_name: "MYPC GPU1 Memory"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'memory_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_1_power:
        friendly_name: "MYPC GPU1 Power"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'power_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_1_power_limit:
        friendly_name: "MYPC GPU1 Power Limit"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'power_limit_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_1_temperature:
        friendly_name: "MYPC GPU1 Temperature"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'temperature') }}"
        unit_of_measurement: "°C"
```
Basic entity card:
```
type: entities
entities:
  - entity: sensor.mypc_gpu_0_gpu
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_memory
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_power
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_power_limit
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_temperature
    secondary_info: last-updated
```
## Ansible Role
```
---
- name: install go
  become: true
  package:
    name: golang-go
    state: present

- name: git clone
  git:
    repo: "https://github.com/sammcj/NVApi.git"
    dest: "/home/ansible/NVapi"
    update: yes
    force: true

# go run main.go -port 9999 -rate 1
- name: install systemd service
  become: true
  copy:
    src: nvapi.service
    dest: /etc/systemd/system/nvapi.service

- name: Reload systemd daemons, enable, and restart nvapi
  become: true
  systemd:
    name: nvapi
    daemon_reload: yes
    enabled: yes
    state: restarted
```
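The role above only defines tasks; applying it still needs a playbook and inventory of your own. Assuming you have wrapped the tasks in a playbook such as `nvapi.yml` and listed the GPU host in `inventory.ini` (both hypothetical names), a run could look like this:
```
ansible-playbook -i inventory.ini nvapi.yml --ask-become-pass
```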
-
What would it mean to treat AI as a tool instead of a person?
Since the launch of ChatGPT, exploration in two directions has picked up speed.
The first direction concerns technical capabilities. How big a model can we train? How well can it answer SAT questions? How efficiently can we serve it?
The second direction concerns interaction design. How do we communicate with a model? How can we use it for useful work? What metaphor do we use to reason about it?
The first direction is widely pursued and enormously funded, and for good reason: progress in technical capabilities underpins every possible application. But the second is just as crucial to the field and holds enormous unknowns. We are only a few years into the era of large models. What are the odds that we have already figured out the best ways to use them?
I propose a new mode of interaction, in which models play the role of computer applications (for example, phone apps): providing a graphical interface, interpreting user input, and updating their state. In this mode, instead of being an "agent" that uses a computer on behalf of a human, the AI can provide a richer and more powerful computing environment for us to use.
### Metaphors for interaction
At the heart of an interaction is a metaphor that guides a user's expectations of a system. The early days of computing took metaphors like "desks", "typewriters", "spreadsheets", and "letters" and turned them into digital equivalents, allowing users to reason about their behavior. You can leave something on your desk and come back for it; you need an address to send a letter. As we developed cultural knowledge of these devices, the need for those particular metaphors faded, and with them the skeuomorphic interface designs that reinforced them. Like a trash can or a pencil, a computer is now a metaphor for itself.
The dominant metaphor for large models today is model-as-person. This is an effective metaphor because people have extensive capabilities that we know intuitively. It implies that we can have a conversation with a model and ask it questions; that the model can collaborate with us on a document or a piece of code; that we can give it a task to carry out on its own, and it will come back when it is finished.
However, treating a model as a person profoundly limits how we think about interacting with it. Human interactions are inherently slow and linear, limited by bandwidth and by the turn-taking nature of verbal communication. As we have all experienced, communicating complex ideas in a conversation is hard and lossy. When we want precision, we turn to tools instead, using direct manipulation and high-bandwidth visual interfaces to make diagrams, write code, and design CAD models. Because we conceive of models as people, we use them through slow conversations, even though they are perfectly capable of accepting fast, direct input and producing visual results. The metaphors we use constrain the experiences we build, and the model-as-person metaphor is holding us back from exploring the full potential of large models.
For many use cases, and especially for productive work, I believe the future lies in a different metaphor: model-as-computer.
### Using an AI like a computer
Under the model-as-computer metaphor, we will interact with large models following the intuitions we have about computer applications (whether on desktop, tablet, or phone). Note that this does not mean the model will be a traditional app any more than the Windows desktop was a literal desk. "Computer application" will be a way for the model to represent itself to us. Instead of acting like a person, the model will act like a computer.
Acting like a computer means producing a graphical interface. In place of the linear, teletype-style stream of text that ChatGPT provides, a model-as-computer system will generate something that resembles the interface of a modern application: buttons, sliders, tabs, images, plots, and all the rest. This addresses key limitations of the standard model-as-person chat interface:
- **Discoverability.** A good tool suggests its uses. When the only interface is an empty text box, it is up to the user to figure out what to do and to understand the system's limits. The Edit sidebar in Lightroom is a great way to learn photo editing because it doesn't just tell you what this application can do with a photo, but what you might want to do. Likewise, a model-as-computer interface for DALL-E could surface new possibilities for your image generations.
- **Efficiency.** Direct manipulation is faster than describing a request in words. To continue the Lightroom example, it would be unthinkable to edit a photo by telling a person which sliders to move and by how much. It would take a whole day to ask for slightly lower exposure and slightly higher vibrance, just to see how it looks. In the model-as-computer metaphor, the model can create tools that let you communicate what you want more efficiently, and therefore get things done faster.
Unlike a traditional app, this graphical interface is generated by the model on demand. That means every part of the interface you see is relevant to what you are doing right now, including the specific contents of your work. It also means that if you want a broader or different interface, you can simply ask for it. You could ask DALL-E to produce editable presets for its settings inspired by famous sketch artists. When you click the Leonardo da Vinci preset, it sets the sliders for highly detailed perspective drawings in black ink. If you click Charles Schulz, it selects low-detail, 2D technicolor comics.
### A protean bicycle for the mind
The model-as-person metaphor has a curious tendency to create distance between the user and the model, mirroring the communication gap between two people that can be narrowed but never fully closed. Because of the difficulty and cost of communicating in words, people tend to split tasks between themselves into chunks that are large and as independent as possible. Model-as-person interfaces follow this pattern: it is not worth telling a model to add a return statement to your function when it is faster to type it yourself. With that communication overhead, model-as-person systems are most useful when they can do an entire chunk of work on their own. They do things for you.
This contrasts with how we interact with computers or other tools. Tools produce real-time visual feedback and are controlled through direct manipulation. They have such low communication overhead that there is no need to carve out an independent chunk of work. It makes more sense to keep the human in the loop and direct the tool moment by moment. Like seven-league boots, tools let you travel farther with each step, but you are still the one doing the work. They let you do things faster.
Consider the task of building a website using a large model. With today's interfaces, you might treat the model as a contractor or a collaborator. You would try to write down in words as much as possible about how you want the site to look, what you want it to say, and what features you want it to have. The model would generate a first draft, you would run it, and then you would give feedback. "Make the logo a bit bigger," you would say, and "center that first hero image," and "there needs to be a login button in the header." To get exactly what you want, you would send a very long list of ever more finicky requests.
An alternative model-as-computer interaction would be different: instead of building the website, the model would generate an interface for you to build it, where every user input to that interface queries the large model under the hood. Perhaps when you describe your needs it would create an interface with a sidebar and a preview window. At first the sidebar contains only a few layout sketches you can choose as a starting point. You can click each one, and the model writes the HTML for a web page using that layout and displays it in the preview window. Now that you have a page to work with, the sidebar gains additional options that affect the page globally, such as font pairings and color schemes. The preview acts as a WYSIWYG editor, letting you grab elements and move them, edit their contents, and so on. Supporting all of this is the model, which sees these user actions and rewrites the page to match the changes they make. Because the model can generate an interface to help the two of you communicate more efficiently, you get to exercise more control over the final product in less time.
The model-as-computer metaphor encourages us to think of the model as a tool to interact with in real time rather than a collaborator to hand tasks to. Instead of replacing an intern or a tutor, it can be a kind of protean bicycle for the mind, one that is always custom-built exactly for you and the terrain you intend to cross.
### A new paradigm for computing?
Models that can generate interfaces on demand are an entirely new frontier in computing. They may be a whole new paradigm, given how they short-circuit the existing application model. Giving end users the power to create and modify apps on the fly fundamentally changes how we interact with computers. In place of a single static application built by a developer, a model will generate an application tailored to the user and their immediate needs. In place of business logic implemented in code, the model will interpret the user's inputs and update the user interface. It is even possible that this kind of generative interface will replace the operating system entirely, generating and managing interfaces and windows on the fly as needed.
At first, generative interfaces will be a toy, useful only for creative exploration and a few other niche applications. After all, nobody would want an email app that occasionally sends messages to your ex and lies about your inbox. But the models will gradually improve. Even as they push further into the space of entirely new experiences, they will slowly become reliable enough to be used for real work.
Small pieces of this future already exist. Years ago Jonas Degrave showed that ChatGPT could do a convincing simulation of a Linux command line. Similarly, websim.ai uses an LLM to generate websites on demand as you browse them. Oasis, GameNGen, and DIAMOND train action-conditioned video models on individual video games, letting you play, for example, Doom inside a large model. And Genie 2 generates playable video games from text prompts. Generative interfaces may still seem like a crazy idea, but they are not that crazy.
There are huge open questions about what all of this will look like. Where will generative interfaces first be useful? How will we share and distribute the experiences we create by collaborating with the model, if they only exist as the context of a large model? Would we even want to? What new kinds of experiences will be possible? How will all of this work in practice? Will models generate interfaces as code, or produce raw pixels directly?
I don't know the answers yet. We will have to experiment and find out!
Piccoli pezzi di questo futuro esistono già. Anni fa Jonas Degrave ha dimostrato che ChatGPT poteva fare una buona simulazione di una riga di comando Linux. Allo stesso modo, websim.ai utilizza un LLM per generare siti web su richiesta mentre li navighi. Oasis, GameNGen e DIAMOND addestrano modelli video condizionati sull’azione su singoli videogiochi, permettendoti di giocare ad esempio a Doom dentro un grande modello. E Genie 2 genera videogiochi giocabili da prompt testuali. L’interfaccia generativa potrebbe ancora sembrare un’idea folle, ma non è così folle.
Ci sono enormi domande aperte su come apparirà tutto questo. Dove sarà inizialmente utile l’interfaccia generativa? Come condivideremo e distribuiremo le esperienze che creiamo collaborando con il modello, se esistono solo come contesto di un grande modello? Vorremmo davvero farlo? Quali nuovi tipi di esperienze saranno possibili? Come funzionerà tutto questo in pratica? I modelli genereranno interfacce come codice o produrranno direttamente pixel grezzi?
Non conosco ancora queste risposte. Dovremo sperimentare e scoprirlo!
Tradotto da:\
https://willwhitney.com/computing-inside-ai.htmlhttps://willwhitney.com/computing-inside-ai.html
-
![Advent](https://d1csarkz8obe9u.cloudfront.net/posterpreviews/advent-greeting-card-video-wishes-candle-3-design-template-780d47af5b619a8008d7332c59a970d6_screen.jpg)
Christmas season hasn't actually started yet in Roman #Catholic Germany. We're in Advent until the evening of the 24th of December, at which point Christmas begins (with the Nativity, at Vespers) and continues for 40 days until Mariä Lichtmess (the Presentation of Christ in the Temple) on February 2nd.
![Calendar](https://www.stpatrickchurch.us/portals/0/SiteFiles/LivingTheGospel/LiturgicalCalendar/Liturgical-Calendar.png?ver=2016-07-08-172202-947)
It's 40 days because that's how long the post-partum isolation lasted before women were allowed back into the temple (after a ritual cleansing).
![Mariä](https://bistum-augsburg.de/var/plain_site/storage/images/_aliases/lightbox/pfarreien/st.-martin_lauingen/aktuelles/darstellung-des-herrn-mariae-lichtmess_id_0/3691659-1-ger-DE/Darstellung-des-Herrn-Mariae-Lichtmess.jpg)
That is the day when we put away all of the Christmas decorations and bless the candles, for the next year. (Hence, the British name "Candlemas".) It used to also be when household staff would get paid their cash wages and could change employer. And it is the day precisely in the middle of winter.
![](https://setonshrine.org/wp-content/uploads/2019/01/Events.png)
Between Christmas Eve and Candlemas are many celebrations, concluding with the Twelfth Night, called Epiphany or Theophany. This is the day some Orthodox celebrate Christ's baptism, so traditions revolve around the blessing of waters.
![Diving](https://www.tovima.com/wp-content/uploads/2024/01/05/%CE%B8%CE%B5%CE%BF%CF%86%CE%B1%CE%BD%CE%B5%CE%B9%CE%B1-scaled.jpg)
The Monday after Epiphany was the start of the farming season, in England, so that Sunday all of the ploughs were blessed, but the practice has largely died out.
![Plough](https://bpb-eu-w2.wpmucdn.com/blogs.reading.ac.uk/dist/a/54/files/2016/01/Plough_Monday.jpg)
Our local tradition is for the altar servers to dress as the wise men and go door-to-door, carrying their star and looking for the Baby Jesus, who is rumored to be lying in a manger.
![Stern](https://www.erzbistum-paderborn.de/wp-content/uploads/sites/6/2023/11/STEF0350.jpg)
They collect cash gifts and chocolates, along the way, and leave the generous their powerful blessing, written over the door. The famous 20 * C + M + B * 25 blessing means "Christus mansionem benedicat" (Christ, bless this house), or "Caspar, Melchior, Balthasar" (the names of the three kings), depending upon who you ask.
They offer the cash to the Baby Jesus (once they find him in the church's Nativity scene), but eat the sweets, themselves. It is one of the biggest donation-collections in the world, called the "Sternsinger" (star singers). The money goes from the German children, to help children elsewhere, and they collect around €45 million in cash and coins, every year.
![Groundhog](https://ychef.files.bbci.co.uk/624x351/p0h8c3sc.jpg)
As an interesting aside:
The American "groundhog day", derives from one of the old farmers' sayings about Candlemas, brought over by the Pennsylvania Dutch. It says, that if the badger comes out of his hole and sees his shadow, then it'll remain cold for 4 more weeks. When they moved to the USA, they didn't have any badgers around, so they switched to groundhogs, as they also hibernate in winter.
-
Since its creation in 2009, Bitcoin has sparked passionate debate. For some, it is a utopian attempt to break free from state power, inspired by a radical libertarian philosophy. For others, it is a pragmatic response to a global monetary system seen as failing. These two perspectives, often opposed, deserve to be confronted in order to better understand the stakes and limits of this monetary revolution.
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/d0bf8568d6f025f4f6ad0700c64230a0bea238a72fc2396a334baa8f4db3e76f/files/1734038200034-YAKIHONNES3.jpg)
### The economic critique of Bitcoin
David Cayla, in his latest contributions, sees Bitcoin as an attempt to realize a "libertarian utopia." According to him, the inventors of cryptocurrencies seek to create a money detached from the state and from traditional social relations, drawing on the Austrian school of economics. This school, embodied by thinkers such as Friedrich Hayek, advocates a private currency independent of political intervention.
Cayla criticizes Bitcoin for denying the intrinsically social and political dimension of money. For him, every currency rests on collective trust and state regulation. By making money exogenous and free of debt, Bitcoin supposedly seeks to avoid social interactions in favor of a purely individual logic. On this analysis, Bitcoin would be an illusion, incapable of replacing traditional currencies, which are "public goods" backed by institutions.
However, this critique seems to minimize the legitimacy of the motivations behind Bitcoin. The 2008 financial crisis, marked by systemic abuses and controversial bank bailouts, largely eroded trust in centralized monetary systems. Bitcoin is not merely an ideological reaction: it is an attempt to create a monetary system that is transparent, immutable, and resistant to manipulation.
### The maximalist perspective
For bitcoiners, Bitcoin is not a utopia but a pragmatic answer to a real problem: the excessive centralization and failures of fiat currencies. Contrary to what Cayla asserts, Bitcoin does not seek to deny society, but to protect individuals from state coercion. By shifting trust from institutions to a neutral, transparent protocol, Bitcoin redefines the foundations of monetary exchange.
The accusation that Bitcoin is "asocial" ignores its worldwide community ecosystem. Bitcoiners actively collaborate on improving the network, organize educational events, and develop inclusive tools. Bitcoin is not a rejection of social relations, but an attempt to rebuild them on fairer, more transparent foundations.
A maximalist would also insist on the historical emergence of money. Contrary to the idea that the state has always created money, many examples show that money often emerged from the market before being captured by governments to finance wars or impose monopolies. Bitcoin reintroduces a money sheltered from political manipulation, similar to gold-based systems, but better: digital, immutable, and accessible.
### Utopia versus reality
Where Cayla criticizes the absence of debt in Bitcoin, we see a virtue. Today's "debt money" rests on an inflationary system of money creation that fuels inequality and erodes purchasing power. By capping its supply at 21 million units, Bitcoin offers a sound and predictable alternative, far removed from arbitrary monetary policies.
That said, maximalists do not deny that Bitcoin still has challenges to overcome. The volatility of its price, its technical complexity for new users, and the tensions between regulation and individual freedom remain open questions. Yet these trials do not call its relevance into question. On the contrary, they illustrate the importance of thinking about alternative models.
### Conclusion: an essential debate
Bitcoin is not a libertarian utopia, but an unprecedented monetary revolution. Cayla's critiques, though intellectually stimulating, fail to grasp the essence of Bitcoin: a money that frees individuals from the systemic abuses of states and centralized institutions. Far from merely having "potential," Bitcoin is already redefining the relationships between money, society, and freedom. Bitcoin offers a unique experience: that of a global, neutral, decentralized currency that returns power to individuals. Whether one is skeptical or enthusiastic, it is clear that Bitcoin forces us to rethink the relationships between money, society, and the state.
Ultimately, the debate over Bitcoin is not just a quarrel over its legitimacy, but a question about how it reshapes social and societal relations around money. By returning monetary power to each individual, Bitcoin proposes a model in which economic interactions can take place without coercion, strengthening mutual trust and global communities. This discussion, far from being closed, is only beginning.
-
I started a long series of articles about how to model different types of knowledge graphs in the relational model, which makes on-device memory models for AI agents possible.
- We modeled directed graphs
- We also modeled graphs of entities
- We even modeled hypergraphs
Last time, we discussed why classical triple-based and simple knowledge graphs are insufficient for AI agents and complex memory, especially in the domain of time-aware or multi-modal knowledge.
So why do we need metagraphs, and what kind of challenge could they help us to solve?
- complex and nested events, temporal context, and temporal relations as edges
- multi-modal and multilingual knowledge
- human-like memory for AI agents that has multiple contexts and relations between pieces of knowledge in neuron-like networks
## MetaGraphs
A metagraph is a concept that extends the idea of a graph by allowing edges to become graphs themselves. Meta edges connect a set of nodes, which could also be subgraphs. So, at some level, nodes and edges have quite similar properties but act in different roles in different contexts.
Also, in some cases, edges could be referenced as nodes.
This approach enables the representation of more complex relationships and hierarchies than a traditional graph structure allows. Let's break down each term to better understand metagraphs and how they differ from hypergraphs and graphs.
## Graph Basics
- A standard **graph** has a set of **nodes** (or vertices) and **edges** (connections between nodes).
- Edges are generally simple and typically represent a binary relationship between two nodes.
- For instance, an edge in a social network graph might indicate a “friend” relationship between two people (nodes).
## Hypergraph
- A **hypergraph** extends the concept of an edge by allowing it to connect any number of nodes, not just two.
- Each connection, called a **hyperedge**, can link multiple nodes.
- This feature allows hypergraphs to model more complex relationships involving multiple entities simultaneously. For example, a hyperedge in a hypergraph could represent a project team, connecting all team members in a single relation.
- Despite its flexibility, a hypergraph doesn’t capture hierarchical or nested structures; it only generalizes the number of connections in an edge.
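To make the distinction concrete, here is a minimal Python sketch (the node and hyperedge names are invented for the example) of a hypergraph where a single hyperedge links a whole project team at once:

```python
# A tiny hypergraph: nodes plus hyperedges that may span any number of nodes.
nodes = {"alice", "bob", "carol", "dave"}

# Each hyperedge is a frozenset of nodes; "project_team" links three people in one relation.
hyperedges = {
    "project_team": frozenset({"alice", "bob", "carol"}),
    "friendship":   frozenset({"carol", "dave"}),  # an ordinary binary edge is just a special case
}

def edges_containing(node):
    """Return the names of all hyperedges that include the given node."""
    return [name for name, members in hyperedges.items() if node in members]

print(edges_containing("carol"))  # ['project_team', 'friendship']
```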
## Metagraph
- A **metagraph** allows the edges to be graphs themselves. This means each edge can contain its own nodes and edges, creating nested, hierarchical structures.
- In a metagraph, an edge could represent a relationship defined by a graph. For instance, a metagraph could represent a network of organizations where each organization’s structure (departments and connections) is represented by its own internal graph and treated as an edge in the larger metagraph.
- This recursive structure allows metagraphs to model complex data with multiple layers of abstraction. They can capture multi-node relationships (as in hypergraphs) and detailed, structured information about each relationship.
## Named Graphs and Graph of Graphs
As you can notice, the structure of a metagraph is quite complex and can be difficult to model in relational and classical RDF setups. It can leave you facing a lack of tools and software solutions for your problem.
If you need to model nested graphs, you could use the much simpler model of named graphs, which could take you quite far.
![](https://miro.medium.com/v2/resize:fit:1400/1*t2TLvy8pYmmUnLJUUwwvDQ.png)
The concept of the named graph came from the RDF community, which needed to group some sets of triples. In this way, you form subgraphs inside an existing graph. You could refer to the subgraph as a regular node. This setup simplifies complex graphs, introduces hierarchies, and even adds features and properties of hypergraphs while keeping a directed nature.
It looks complex, but it is not so hard to model it with a slight modification of a directed graph.
So a node can host a graph inside it. Let's reflect this fact with a location attribute on each node. If a node belongs to the main graph, we can set its location to null or introduce a dedicated main node; it is up to you.
![](https://miro.medium.com/v2/resize:fit:1088/1*agDR_q80JJfxjGyj1bFBqg.png)
Nodes can have edges to nodes in different subgraphs. This structure allows any kind of graph nesting. Edges stay location-free.
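A minimal Python sketch of this idea (the `Node`, `Edge`, and `location` names are my own illustration, not a fixed schema): each node carries a `location` pointing at the node that hosts its subgraph, or `None` when it lives in the main graph, while edges stay location-free and may cross subgraph boundaries:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    id: str
    location: Optional[str] = None  # id of the hosting node; None means the main graph

@dataclass
class Edge:
    src: str  # edges carry no location: they may cross subgraph boundaries
    dst: str

nodes = [
    Node("company"),                           # lives in the main graph
    Node("sales", location="company"),         # nested inside the "company" node
    Node("engineering", location="company"),
    Node("alice", location="engineering"),
]
edges = [
    Edge("sales", "engineering"),              # edge between nodes of the same subgraph
    Edge("alice", "company"),                  # edge across nesting levels is still allowed
]

def subgraph_of(host_id, all_nodes):
    """Return the nodes hosted directly inside the given node."""
    return [n for n in all_nodes if n.location == host_id]

print([n.id for n in subgraph_of("company", nodes)])  # ['sales', 'engineering']
```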
## Meta Graphs in Relational Model
Let’s try to make several attempts to model different meta-graphs with some constraints.
## Directed Metagraph where edges are not used as nodes and could not contain subgraphs
![](https://miro.medium.com/v2/resize:fit:1400/1*xAVf4LeuMHhXynqrwfkNWA.png)
In this case, the edge always points to two sets of nodes. This introduces the overhead of creating a node set even for a single node. This model also allows empty node sets, which may require application-level constraints to prevent such cases.
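As a rough sketch of what this could look like in a relational database (the table and column names are illustrative assumptions, not a recommendation), each directed meta edge references an in node set and an out node set, and set membership lives in its own table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE node (
    id INTEGER PRIMARY KEY
);
CREATE TABLE node_set (
    id INTEGER PRIMARY KEY
);
-- membership of nodes in node sets; a set with no rows here is the "empty set" case
CREATE TABLE node_set_member (
    set_id  INTEGER NOT NULL REFERENCES node_set(id),
    node_id INTEGER NOT NULL REFERENCES node(id),
    PRIMARY KEY (set_id, node_id)
);
-- a directed meta edge always points at two node sets, even when each set holds one node
CREATE TABLE edge (
    id         INTEGER PRIMARY KEY,
    in_set_id  INTEGER NOT NULL REFERENCES node_set(id),
    out_set_id INTEGER NOT NULL REFERENCES node_set(id)
);
""")
conn.commit()
```

Keeping "empty set" prevention out of the schema matches the point above: that rule would have to be enforced at the application level.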
## Directed Metagraph where edges are not used as nodes and could contain subgraphs
![](https://miro.medium.com/v2/resize:fit:1400/1*Ra5_LtYGlbTidGn3w8gYEg.png)
Adding a node set that models a subgraph located in an edge is easy, but it should be kept separate from the in-vertex and out-vertex sets.
I also do not see a direct need to attach subgraphs to a node, as we could just use a node set interchangeably, but it could still be a valid case.
## Directed Metagraph where edges are used as nodes and could contain subgraphs
As you can notice, we operate with node sets all the time. We could simply extend the node set into an element set that includes both node and edge IDs, but in that case we need to use UUIDs or some other strategy to differentiate node IDs from edge IDs. We also get a collision of ephemeral edges or ephemeral nodes when we want to change the role and purpose of an element from node to edge or vice versa.
![](https://miro.medium.com/v2/resize:fit:1400/1*1jggQlCU-aYO_wOb2q6EXA.png)
A full-scale metagraph model is way too complex for a relational database.
So we need a better model.
Now we have more flexibility but lose structural constraints. We cannot express that an element should have one in-vertex, one out-vertex, or both; this type of constraint has been moved to the application level. The crucial question is also about query and retrieval needs.
Any metagraph model should be focused on the domain and its needs; the raw form we built here serves a purely theoretical purpose.
-
Hey folks! Today, let’s dive into the intriguing world of neurosymbolic approaches, retrieval-augmented generation (RAG), and personal knowledge graphs (PKGs). Together, these concepts hold much potential for bringing true reasoning capabilities to large language models (LLMs). So, let’s break down how symbolic logic, knowledge graphs, and modern AI can come together to empower future AI systems to reason like humans.
## The Neurosymbolic Approach: What It Means
Neurosymbolic AI combines two historically separate streams of artificial intelligence: symbolic reasoning and neural networks. Symbolic AI uses formal logic to process knowledge, similar to how we might solve problems or deduce information. On the other hand, neural networks, like those underlying GPT-4, focus on learning patterns from vast amounts of data — they are probabilistic statistical models that excel in generating human-like language and recognizing patterns but often lack deep, explicit reasoning.
While GPT-4 can produce impressive text, it’s still not very effective at reasoning in a truly logical way. Its foundation, transformers, allows it to excel in pattern recognition, but the models struggle with reasoning because, at their core, they rely on statistical probabilities rather than true symbolic logic. This is where neurosymbolic methods and knowledge graphs come in.
## Symbolic Calculations and the Early Vision of AI
If we take a step back to the 1950s, the vision for artificial intelligence was very different. Early AI research was all about symbolic reasoning — where computers could perform logical calculations to derive new knowledge from a given set of rules and facts. Languages like **Lisp** emerged to support this vision, enabling programs to represent data and code as interchangeable symbols. Lisp was designed to be homoiconic, meaning it treated code as manipulatable data, making it capable of self-modification — a huge leap towards AI systems that could, in theory, understand and modify their own operations.
## Lisp: The Early AI Language
**Lisp**, short for “LISt Processor,” was developed by John McCarthy in 1958, and it became the cornerstone of early AI research. Lisp’s power lay in its flexibility and its use of symbolic expressions, which allowed developers to create programs that could manipulate symbols in ways that were very close to human reasoning. One of the most groundbreaking features of Lisp was its ability to treat code as data, known as homoiconicity, which meant that Lisp programs could introspect and transform themselves dynamically. This ability to adapt and modify its own structure gave Lisp an edge in tasks that required a form of self-awareness, which was key in the early days of AI when researchers were exploring what it meant for machines to “think.”
Lisp was not just a programming language—it represented the vision for artificial intelligence, where machines could evolve their understanding and rewrite their own programming. This idea formed the conceptual basis for many of the self-modifying and adaptive algorithms that are still explored today in AI research. Despite its decline in mainstream programming, Lisp’s influence can still be seen in the concepts used in modern machine learning and symbolic AI approaches.
## Prolog: Formal Logic and Deductive Reasoning
In the 1970s, **Prolog** was developed: a language focused on formal logic and deductive reasoning. Unlike Lisp, which is based on the lambda calculus, Prolog operates on formal logic rules, allowing it to perform deductive reasoning and solve logical puzzles. This made Prolog an ideal candidate for expert systems that needed to follow a sequence of logical steps, such as medical diagnostics or strategic planning.
Prolog, like Lisp, allowed symbols to be represented, understood, and used in calculations, creating another homoiconic language that allows reasoning. Prolog’s strength lies in its rule-based structure, which is well-suited for tasks that require logical inference and backtracking. These features made it a powerful tool for expert systems and AI research in the 1970s and 1980s.
The language is declarative in nature, meaning that you define the problem, and Prolog figures out **how** to solve it. By using formal logic and setting constraints, Prolog systems can derive conclusions from known facts, making it highly effective in fields requiring explicit logical frameworks, such as legal reasoning, diagnostics, and natural language understanding. These symbolic approaches were later overshadowed during the AI winter — but the ideas never really disappeared. They just evolved.
## Solvers and Their Role in Complementing LLMs
One of the most powerful features of **Prolog** and similar logic-based systems is their use of **solvers**. Solvers are mechanisms that can take a set of rules and constraints and automatically find solutions that satisfy these conditions. This capability is incredibly useful when combined with LLMs, which excel at generating human-like language but need help with logical consistency and structured reasoning.
For instance, imagine a scenario where an LLM needs to answer a question involving multiple logical steps or a complex query that requires deducing facts from various pieces of information. In this case, a **solver** can derive valid conclusions based on a given set of logical rules, providing structured answers that the LLM can then articulate in natural language. This allows the LLM to retrieve information and ensure the logical integrity of its responses, leading to much more robust answers.
Solvers are also ideal for handling **constraint satisfaction problems** — situations where multiple conditions must be met simultaneously. In practical applications, this could include scheduling tasks, generating optimal recommendations, or even diagnosing issues where a set of symptoms must match possible diagnoses. Prolog’s solver capabilities and LLM’s natural language processing power can make these systems highly effective at providing intelligent, rule-compliant responses that traditional LLMs would struggle to produce alone.
By integrating **neurosymbolic methods** that utilize solvers, we can provide LLMs with a form of deductive reasoning that is missing from pure deep-learning approaches. This combination has the potential to significantly improve the quality of outputs for use-cases that require explicit, structured problem-solving, from legal queries to scientific research and beyond. Solvers give LLMs the backbone they need to not just generate answers but to do so in a way that respects logical rigor and complex constraints.
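To give a flavor of the kind of structured answer a solver can hand back to an LLM, here is a deliberately tiny constraint-satisfaction search in Python (the meetings, slots, and constraints are invented for the example); a real solver would prune the search instead of enumerating everything:

```python
from itertools import product

# Toy constraint satisfaction: assign three meetings to time slots such that
# no two meetings that share an attendee land in the same slot.
meetings = ["standup", "design_review", "one_on_one"]
slots = ["9am", "10am"]
shared_attendee = {("standup", "design_review"), ("design_review", "one_on_one")}

def satisfies(assignment):
    """Check every pairwise constraint for a complete assignment."""
    return all(assignment[a] != assignment[b] for a, b in shared_attendee)

# Brute-force search over all assignments.
solutions = [
    dict(zip(meetings, combo))
    for combo in product(slots, repeat=len(meetings))
    if satisfies(dict(zip(meetings, combo)))
]

print(solutions[0])
# {'standup': '9am', 'design_review': '10am', 'one_on_one': '9am'}
```

The solver's output is exactly the sort of rule-compliant, structured result an LLM could then explain in natural language.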
## Graph of Rules for Enhanced Reasoning
Another powerful concept that complements LLMs is using a **graph of rules**. A graph of rules is essentially a structured collection of logical rules that interconnect in a network-like structure, defining how various entities and their relationships interact. This structured network allows for complex reasoning and information retrieval, as well as the ability to model intricate relationships between different pieces of knowledge.
In a **graph of rules**, each node represents a rule, and the edges define relationships between those rules — such as dependencies or causal links. This structure can be used to enhance LLM capabilities by providing them with a formal set of rules and relationships to follow, which improves logical consistency and reasoning depth. When an LLM encounters a problem or a question that requires multiple logical steps, it can traverse this graph of rules to generate an answer that is not only linguistically fluent but also logically robust.
For example, in a healthcare application, a graph of rules might include nodes for medical symptoms, possible diagnoses, and recommended treatments. When an LLM receives a query regarding a patient’s symptoms, it can use the graph to traverse from symptoms to potential diagnoses and then to treatment options, ensuring that the response is coherent and medically sound. The graph of rules guides reasoning, enabling LLMs to handle complex, multi-step questions that involve chains of reasoning, rather than merely generating surface-level responses.
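A minimal sketch of such a traversal in Python (the medical content is entirely made up for illustration): each rule node points to the nodes it licenses, and a breadth-first walk from the reported symptoms surfaces candidate diagnoses and treatments:

```python
from collections import deque

# Edges of a toy rule graph: symptom -> diagnosis -> treatment.
rule_graph = {
    "fever":     ["flu", "infection"],
    "cough":     ["flu"],
    "flu":       ["rest", "fluids"],
    "infection": ["antibiotics"],
}

def reachable(start_nodes, graph):
    """Breadth-first traversal collecting every node licensed by the start nodes."""
    seen, queue = set(start_nodes), deque(start_nodes)
    while queue:
        current = queue.popleft()
        for nxt in graph.get(current, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

conclusions = reachable({"fever", "cough"}, rule_graph)
print(conclusions - {"fever", "cough"})  # {'flu', 'infection', 'rest', 'fluids', 'antibiotics'}
```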
Graphs of rules also enable **modular reasoning**, where different sets of rules can be activated based on the context or the type of question being asked. This modularity is crucial for creating adaptive AI systems that can apply specific sets of logical frameworks to distinct problem domains, thereby greatly enhancing their versatility. The combination of **neural fluency** with **rule-based structure** gives LLMs the ability to conduct more advanced reasoning, ultimately making them more reliable and effective in domains where accuracy and logical consistency are critical.
By implementing a graph of rules, LLMs are empowered to perform **deductive reasoning** alongside their generative capabilities, creating responses that are not only compelling but also logically aligned with the structured knowledge available in the system. This further enhances their potential applications in fields such as law, engineering, finance, and scientific research — domains where logical consistency is as important as linguistic coherence.
## Enhancing LLMs with Symbolic Reasoning
Now, with LLMs like GPT-4 being mainstream, there is an emerging need to add real reasoning capabilities to them. This is where **neurosymbolic approaches** shine. Instead of pitting neural networks against symbolic reasoning, these methods combine the best of both worlds. The neural aspect provides language fluency and recognition of complex patterns, while the symbolic side offers real reasoning power through formal logic and rule-based frameworks.
**Personal Knowledge Graphs (PKGs)** come into play here as well. Knowledge graphs are data structures that encode entities and their relationships — they’re essentially semantic networks that allow for structured information retrieval. When integrated with neurosymbolic approaches, LLMs can use these graphs to answer questions in a far more contextual and precise way. By retrieving relevant information from a knowledge graph, they can ground their responses in well-defined relationships, thus improving both the relevance and the logical consistency of their answers.
Imagine combining an LLM with a **graph of rules** that allow it to reason through the relationships encoded in a personal knowledge graph. This could involve using **deductive databases** to form a sophisticated way to represent and reason with symbolic data — essentially constructing a powerful hybrid system that uses LLM capabilities for language fluency and rule-based logic for structured problem-solving.
## My Research on Deductive Databases and Knowledge Graphs
I recently did some research on modeling **knowledge graphs using deductive databases**, such as DataLog — which can be thought of as a limited, data-oriented version of Prolog. What I’ve found is that it’s possible to use formal logic to model knowledge graphs, ontologies, and complex relationships elegantly as rules in a deductive system. Unlike classical RDF or traditional ontology-based models, which sometimes struggle with complex or evolving relationships, a deductive approach is more flexible and can easily support dynamic rules and reasoning.
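To give a flavor of what "knowledge graph as rules" means, here is a deliberately tiny Datalog-style fixpoint evaluation in Python (the facts and the single transitivity rule are illustrative, not my actual research code): the edges of the graph are facts, and the rule derives the transitive closure:

```python
# Facts: edge(parent, child) relations of a tiny knowledge graph.
edges = {
    ("animal", "mammal"),
    ("mammal", "dog"),
    ("mammal", "cat"),
}

def ancestors(edges):
    """Naive bottom-up evaluation of:
       ancestor(X, Y) :- edge(X, Y).
       ancestor(X, Z) :- ancestor(X, Y), edge(Y, Z).
    Repeats the rule until no new facts appear (a fixpoint)."""
    derived = set(edges)
    while True:
        new = {
            (x, z)
            for (x, y1) in derived
            for (y2, z) in edges
            if y1 == y2
        } - derived
        if not new:
            return derived
        derived |= new

print(sorted(ancestors(edges)))
# [('animal', 'cat'), ('animal', 'dog'), ('animal', 'mammal'),
#  ('mammal', 'cat'), ('mammal', 'dog')]
```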
**Prolog** and similar logic-driven frameworks can complement LLMs by handling the parts of reasoning where explicit rule-following is required. LLMs can benefit from these rule-based systems for tasks like entity recognition, logical inferences, and constructing or traversing knowledge graphs. We can even create a **graph of rules** that governs how relationships are formed or how logical deductions can be performed.
The future is really about creating an AI that is capable of both deep contextual understanding (using the powerful generative capacity of LLMs) and true reasoning (through symbolic systems and knowledge graphs). With the neurosymbolic approach, these AIs could be equipped not just to generate information but to explain their reasoning, form logical conclusions, and even improve their own understanding over time — getting us a step closer to true artificial general intelligence.
## Why It Matters for LLM Employment
Using **neurosymbolic RAG (retrieval-augmented generation)** in conjunction with personal knowledge graphs could revolutionize how LLMs work in real-world applications. Imagine an LLM that understands not just language but also the relationships between different concepts — one that can navigate, reason, and explain complex knowledge domains by actively engaging with a personalized set of facts and rules.
This could lead to practical applications in areas like healthcare, finance, legal reasoning, or even personal productivity — where LLMs can help users solve complex problems logically, providing relevant information and well-justified reasoning paths. The combination of **neural fluency** with **symbolic accuracy and deductive power** is precisely the bridge we need to move beyond purely predictive AI to truly intelligent systems.
Let's explore these ideas further if you’re as fascinated by this as I am. Feel free to reach out, follow my YouTube channel, or check out some articles I’ll link below. And if you’re working on anything in this field, I’d love to collaborate!
Until next time, folks. Stay curious, and keep pushing the boundaries of AI!
-
## Introduction: Personal Knowledge Graphs and Linked Data
We will explore the world of personal knowledge graphs and discuss how they can be used to model complex information structures. Personal knowledge graphs aren’t just abstract collections of nodes and edges—they encode meaningful relationships, contextualizing data in ways that enrich our understanding of it. While the core structure might be a directed graph, we layer semantic meaning on top, enabling nuanced connections between data points.
The origin of knowledge graphs is deeply tied to concepts from linked data and the semantic web, ideas that emerged to better link scattered pieces of information across the web. This approach created an infrastructure where data islands could connect — facilitating everything from more insightful AI to improved personal data management.
In this article, we will explore how these ideas have evolved into tools for modeling AI’s semantic memory and look at how knowledge graphs can serve as a flexible foundation for encoding rich data contexts. We’ll specifically discuss three major paradigms: RDF (Resource Description Framework), property graphs, and a third way of modeling entities as graphs of graphs. Let’s get started.
## Intro to RDF
The Resource Description Framework (RDF) has been one of the fundamental standards for linked data and knowledge graphs. RDF allows data to be modeled as triples: subject, predicate, and object. Essentially, you can think of it as a structured way to describe relationships: “X has a Y called Z.” For instance, “Berlin has a population of 3.5 million.” This modeling approach is quite flexible because RDF uses unique identifiers — usually URIs — to point to data entities, making linking straightforward and coherent.
RDFS, or RDF Schema, extends RDF to provide a basic vocabulary to structure the data even more. This lets us describe not only individual nodes but also relationships among types of data entities, like defining a class hierarchy or setting properties. For example, you could say that “Berlin” is an instance of a “City” and that cities are types of “Geographical Entities.” This kind of organization helps establish semantic meaning within the graph.
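Here is a small sketch using the Python rdflib library (assuming rdflib is installed; the `EX` namespace and the Berlin facts are just the running example from above): it encodes the triple "Berlin has a population of 3.5 million" together with the RDFS class hierarchy:

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

# Plain RDF triples: subject, predicate, object.
g.add((EX.Berlin, EX.population, Literal(3_500_000)))
g.add((EX.Berlin, RDF.type, EX.City))

# RDFS layer: cities are a kind of geographical entity.
g.add((EX.City, RDFS.subClassOf, EX.GeographicalEntity))

print(g.serialize(format="turtle"))
```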
## RDF and Advanced Topics
## Lists and Sets in RDF
RDF also provides tools to model more complex data structures such as lists and sets, enabling the grouping of nodes. This extension makes it easier to model more natural, human-like knowledge, for example, describing attributes of an entity that may have multiple values. By adding RDF Schema and OWL (Web Ontology Language), you gain even more expressive power — being able to define logical rules or even derive new relationships from existing data.
## Graph of Graphs
A significant feature of RDF is the ability to form complex nested structures, often referred to as graphs of graphs. This allows you to create “named graphs,” essentially subgraphs that can be independently referenced. For example, you could create a named graph for a particular dataset describing Berlin and another for a different geographical area. Then, you could connect them, allowing for more modular and reusable knowledge modeling.
## Property Graphs
While RDF provides a robust framework, it’s not always the easiest to work with due to its heavy reliance on linking everything explicitly. This is where property graphs come into play. Property graphs are less focused on linking everything through triples and allow more expressive properties directly within nodes and edges.
For example, instead of using triples to represent each detail, a property graph might let you store all properties about an entity (e.g., “Berlin”) directly in a single node. This makes property graphs more intuitive for many developers and engineers because they more closely resemble object-oriented structures: you have entities (nodes) that possess attributes (properties) and are connected to other entities through relationships (edges).
The significant benefit here is a condensed representation, which speeds up traversal and queries in some scenarios. However, this also introduces a trade-off: while property graphs are more straightforward to query and maintain, they lack some complex relationship modeling features RDF offers, particularly when connecting properties to each other.
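A property graph can be sketched with plain Python dictionaries (the node ids, property names, and relationship type are illustrative): properties live directly on nodes and edges instead of being decomposed into triples:

```python
# Nodes keyed by id, each carrying its own property bag.
nodes = {
    "berlin":  {"label": "City",    "population": 3_500_000, "country": "Germany"},
    "germany": {"label": "Country", "population": 84_000_000},
}

# Edges carry a type and their own properties too.
edges = [
    {"src": "berlin", "dst": "germany", "type": "LOCATED_IN", "since": 1871},
]

def neighbors(node_id, edge_type=None):
    """Follow outgoing edges of one node, optionally filtered by relationship type."""
    return [
        e["dst"]
        for e in edges
        if e["src"] == node_id and (edge_type is None or e["type"] == edge_type)
    ]

print(neighbors("berlin", "LOCATED_IN"))  # ['germany']
```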
## Graph of Graphs and Subgraphs for Entity Modeling
A third approach, which takes elements from RDF and property graphs, involves modeling entities using subgraphs or nested graphs. In this model, each entity can be represented as a graph. This allows for a detailed and flexible description of attributes without exploding every detail into individual triples or lumping them all together into properties.
For instance, consider a person entity with a complex employment history. Instead of representing every employment detail in one node (as in a property graph), or as several linked nodes (as in RDF), you can treat the employment history as a subgraph. This subgraph could then contain nodes for different jobs, each linked with specific properties and connections. This approach keeps the complexity where it belongs and provides better flexibility when new attributes or entities need to be added.
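Continuing the employment-history example, here is a minimal sketch (the person and jobs are invented): each entity is its own small graph, and the parent entity simply references that subgraph as a single node:

```python
# Each entity is a tiny graph of its own: a dict of nodes plus a list of edges.
employment_history = {
    "nodes": {
        "job1": {"employer": "ACME", "title": "Engineer", "from": 2015, "to": 2019},
        "job2": {"employer": "Globex", "title": "Senior Engineer", "from": 2019},
    },
    "edges": [("job1", "followed_by", "job2")],
}

person = {
    "nodes": {
        "alice": {"name": "Alice"},
        # The employment history is a nested graph referenced as a single node.
        "alice_employment": employment_history,
    },
    "edges": [("alice", "has_history", "alice_employment")],
}

# New attributes stay local to the subgraph; the outer graph is untouched.
employment_history["nodes"]["job2"]["location"] = "remote"
print(person["edges"])  # [('alice', 'has_history', 'alice_employment')]
```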
## Hypergraphs and Metagraphs
When discussing more advanced forms of graphs, we encounter hypergraphs and metagraphs. These take the idea of relationships to a new level. A hypergraph allows an edge to connect more than two nodes, which is extremely useful when modeling scenarios where relationships aren’t just pairwise. For example, a “Project” could connect multiple “People,” “Resources,” and “Outcomes,” all in a single edge. This way, hypergraphs help in reducing the complexity of modeling high-order relationships.
Metagraphs, on the other hand, enable nodes and edges to themselves be represented as graphs. This is an extremely powerful feature when we consider the needs of artificial intelligence, as it allows for the modeling of relationships between relationships, an essential aspect for any system that needs to capture not just facts, but their interdependencies and contexts.
## Balancing Structure and Properties
One of the recurring challenges when modeling knowledge is finding the balance between structure and properties. With RDF, you get high flexibility and standardization, but complexity can quickly escalate as you decompose everything into triples. Property graphs simplify the representation by using attributes but lose out on the depth of connection modeling. Meanwhile, the graph-of-graphs approach and hypergraphs offer advanced modeling capabilities at the cost of increased computational complexity.
So, how do you decide which model to use? It comes down to your use case. RDF and nested graphs are strong contenders if you need deep linkage and are working with highly variable data. For more straightforward, engineer-friendly modeling, property graphs shine. And when dealing with very complex multi-way relationships or meta-level knowledge, hypergraphs and metagraphs provide the necessary tools.
The key takeaway is that no single approach is perfect. Instead, it’s all about the modeling goals: how do you want to query the graph, what relationships are meaningful, and how much complexity are you willing to manage?
## Conclusion
Modeling AI semantic memory using knowledge graphs is a challenging but rewarding process. The different approaches — RDF, property graphs, and advanced graph modeling techniques like nested graphs and hypergraphs — each offer unique strengths and weaknesses. Whether you are building a personal knowledge graph or scaling up to AI that integrates multiple streams of linked data, it’s essential to understand the trade-offs each approach brings.
In the end, the choice of representation comes down to the nature of your data and your specific needs for querying and maintaining semantic relationships. The world of knowledge graphs is vast, with many tools and frameworks to explore. Stay connected and keep experimenting to find the balance that works for your projects.
-
This piece is about temporal semantics and **temporal, time-aware knowledge graphs**. We have different memory models for artificial intelligence agents. We all try to mimic, somehow, how the brain works, or at least how the declarative memory of the brain works. We have the split between **episodic memory** and **semantic memory**. And we also have a lot of theories, right?
## Declarative Memory of the Human Brain
How is semantic memory formed? We know that our brain stores semantic memory in a way quite close to the concept we have of personal knowledge graphs: connected entities that form relationships with each other. So far, so good. Then we have a lot of theories about how episodic memory and our experiences get transformed into semantic memory:
- hippocampus indexing and retrieval
- semanticization of episodic memories
- episodic-semantic shift theory
They all give a different perspective on how different parts of declarative memory cooperate.
We know that episodic memories get semanticized over time. You end up with semantic knowledge without the notion of time, and your episodic memory has probably just decayed.
But, you know, it’s still an open question:
> do we want to mimic an AI agent’s memory as a human brain memory, or do we want to create something different?
It’s an open question to which we have no good answer. And if you go to the theory of neuroscience and check how episodic and semantic memory interfere, you will still find a lot of theories, yeah?
Some of them say that you have the hippocampus that keeps the indexes of the memory. Some others will say that you semantic the episodic memory. Some others say that you have some separate process that digests the episodic and experience to the semantics. But all of them agree on the plan that it’s operationally two separate areas of memories and even two separate regions of brain, and the semantic, it’s more, let’s say, protected.
So it’s harder to forget the semantical facts than the episodes and everything. And what I’m thinking about for a long time, it’s this, you know, the semantic memory.
## Temporal Semantics
It’s memory about the facts, but you somehow mix the time information with the semantics. I already described a lot of things, including how we could combine time with knowledge graphs and how people do it.
There are multiple ways we could persist such information, but we all hit the wall because the complexity of time and the semantics of time are highly complex concepts.
## Time in a Semantic context is not a timestamp.
What I mean is that when you have a fact and you just mention that you were there at a particular moment, say 15:40 on Monday, it’s already ambiguous, because we don’t know which Monday, right? So you would need to give the exact date, but usually you do not have experiences like that.
You do not record your memories like that, unless you do journaling and that sort of thing. So, usually, you have no direct time references. What I mean is that you could say that you were there and there was some event, and so on.
Somehow, we form a chain of events that connect with each other and that maybe will be connected to some period of time, if we are lucky enough. This means that we cannot easily represent time-aware information as just a timestamp or a validity interval.
For sure, the validity of knowledge graphs (a simple quintuple with start and end dates) is a big topic, and it could solve a lot of the time cases. It’s super simple: you give the start and end dates and you are done. But it does not cover facts that carry a relative time or only indirect temporal context. I like the simplicity of this idea, but the problem with this approach is that in most cases we simply don’t have these timestamps. We don’t know when a piece of information starts or ends being valid, and it does not model many events in our lives, especially processes, ongoing activities, or recurrent events.
I’m more about thinking about the time of semantics, where you have a time model as a **hybrid clock** or some **global clock** that does the partial ordering of the events. It’s mean that you have the chain of the experiences and you have the chain of the facts that have the different time contexts.
We could deduct the time from this chain of the events. But it’s a big, big topic for the research. But what I want to achieve, actually, it’s not separation on episodic and semantic memory. It’s having something in between.
## Blockchain of connected events and facts
I call it temporal-aware semantics or time-aware knowledge graphs, where we can encode a semantic fact together with its time component. I doubt that time should be a simple timestamp or a region between two timestamps. For me, it is more a chain of facts that have a partial order and form a blockchain-like database, or a partially ordered acyclic graph of facts that are temporally connected. We could have some notion of time that is understandable to the agent, and a model that allows us to order the events, focus on what the agent knows, order this time knowledge, and create chains of events.
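A very rough sketch of what such a chain could look like in Python (the hashing scheme and field names are my own illustration, not a finished design): each fact records which facts it is known to come after, which gives a partial order without requiring absolute timestamps:

```python
import hashlib
import json

def fact(content, after=()):
    """Create a fact that is only partially ordered: it records which fact
    hashes it comes after, not an absolute timestamp."""
    body = {"content": content, "after": sorted(after)}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {"id": digest, **body}

moved = fact("I moved to the new flat")
party = fact("We had a housewarming party", after=[moved["id"]])
promo = fact("I got promoted", after=[moved["id"]])  # concurrent with the party

# An optional time anchor pins part of the chain to wall-clock time.
anchor = fact("New Year 2024", after=[party["id"], promo["id"]])

print(anchor["after"])  # hashes of the two facts the anchor is known to follow
```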
## Time anchors
We may have a particular time in the chain that allows us to arrange a more concrete time for the rest of the events. But it’s still an open topic for research. The temporal semantics gets split into a couple of domains. One domain is how to add time to the knowledge graphs. We already have many different solutions. I described them in my previous articles.
Another domain is the agent's memory and how an artificial intelligence's memory treats time. This one is much more complex, because here we cannot operate with simple timestamps. We need a representation of time that is understandable by the model and by the agent that will work with this model. And that is a much bigger research topic.
-
A state-controlled money supply can influence the development of socialist policies and practices in various ways. Although the relationship is not deterministic, state control over the money supply can contribute to a larger role of the state in the economy and facilitate the implementation of socialist ideals.
## Fiscal Policy Capabilities
When the state manages the money supply, it gains the ability to implement fiscal policies that can lead to an expansion of social programs and welfare initiatives. Funding these programs by creating money can enhance the state's influence over the economy and move it closer to a socialist model. The Soviet Union, for instance, had a centralized banking system that enabled the state to fund massive industrialization and social programs, significantly expanding the state's role in the economy.
## Wealth Redistribution
Controlling the money supply can also allow the state to influence economic inequality through monetary policies, effectively redistributing wealth and reducing income disparities. By implementing low-interest loans or providing financial assistance to disadvantaged groups, the state can narrow the wealth gap and promote social equality, as seen in many European welfare states.
## Central Planning
A state-controlled money supply can contribute to increased central planning, as the state gains more influence over the economy. Central banks, which are state-owned or heavily influenced by the state, play a crucial role in managing the money supply and facilitating central planning. This aligns with socialist principles that advocate for a planned economy where resources are allocated according to social needs rather than market forces.
## Incentives for Staff
Staff members working in state institutions responsible for managing the money supply have various incentives to keep the system going. These incentives include job security, professional expertise and reputation, political alignment, regulatory capture, institutional inertia, and legal and administrative barriers. While these factors can differ among individuals, they can collectively contribute to the persistence of a state-controlled money supply system.
In conclusion, a state-controlled money supply can facilitate the development of socialist policies and practices by enabling fiscal policies, wealth redistribution, and central planning. The staff responsible for managing the money supply have diverse incentives to maintain the system, further ensuring its continuation. However, it is essential to note that many factors influence the trajectory of an economic system, and the relationship between state control over the money supply and socialism is not inevitable.
-
Tutorial by nostr:nostr:npub1rc56x0ek0dd303eph523g3chm0wmrs5wdk6vs0ehd0m5fn8t7y4sqra3tk, original post below:
Part 1: http://xh6liiypqffzwnu5734ucwps37tn2g6npthvugz3gdoqpikujju525yd.onion/263585/tutorial-debloat-de-celulares-android-via-adb-parte-1
Part 2: http://xh6liiypqffzwnu5734ucwps37tn2g6npthvugz3gdoqpikujju525yd.onion/index.php/263586/tutorial-debloat-de-celulares-android-via-adb-parte-2
When it comes to privacy on mobile phones, one of the commonly mentioned measures is removing bloatware from the device, also called debloating. The most efficient way to do this is, without a doubt, switching the operating system. Custom ROMs like LineageOS, GrapheneOS, Iodé, CalyxOS, etc. are already quite lean in this respect, especially when G-Apps are not installed with the system. However, this practice can end up causing unwanted problems, such as the loss of device functions and even incompatibility with banking apps, which makes this method more attractive for people who own more than one device and can set one aside just for privacy.
With that in mind, people who own only a single mobile device, who need those apps or functions but at the same time care about privacy, look for a middle ground between keeping the stock ROM and not having their data collected by that bloatware. Fortunately, removing bloatware is possible, and it can be done via root or, as this article will cover, via adb.
## What is bloatware?
Bloatware is the combination of the words bloat + software; in other words, bloatware is basically a useless or easily replaceable program, placed on your device beforehand by the manufacturer or carrier, that sits on your device just taking up storage space, consuming RAM and, worse, collecting your data and sending it to external servers, as well as adding more points of vulnerability.
## What is adb?
The Android Debug Bridge, or simply adb, is a tool that uses shell-user permissions and allows commands to be sent from a computer to an Android device, requiring only that USB debugging is active. It can also be used directly on the phone starting with Android 11, by using Termux and wireless debugging (or Wi-Fi debugging). The tool works normally on devices without root, and it also works if the phone is in Recovery Mode.
Requirements:
For computers:
• USB debugging active on the phone;
• A computer with adb;
• A USB cable;
For phones:
• Wireless debugging (or Wi-Fi debugging) active on the phone;
• Termux;
• Android 11 or higher;
For both:
• The NetGuard firewall installed and configured on the phone;
• A bloatware list for your device;
## Enabling debugging:
To enable USB debugging on your device, look up how to enable your device's developer options and enable debugging there. In the case of wireless debugging, it only needs to be enabled at the moment you connect the device to Termux.
## Installing and configuring NetGuard
NetGuard can be installed through the Google Play Store itself, but preferably install it from F-Droid or GitHub to avoid telemetry.
F-Droid:
https://f-droid.org/packages/eu.faircode.netguard/
Github: https://github.com/M66B/NetGuard/releases
Once installed, configure it as follows:
Settings → Defaults (whitelist/blacklist) → enable the first 3 options (block Wi-Fi, block mobile data, and apply rules 'when screen is on');
Settings → Advanced options → enable the first two (manage system apps and log internet access);
With this, all apps will be blocked from accessing the internet, whether over Wi-Fi or mobile data, and on the app's main page you just allow network access for the apps you are going to use (if necessary). Allow the app to run in the background without battery-optimization restrictions, so that it is already active when the phone boots.
## Bloatware list
Not all bloatware is generic; there will be different bloatware depending on the brand, model, Android version, and even region.
To get a bloatware list for your device, if your device has been around for a while, you will easily find ready-made lists just by searching for them. Supposing we have a Samsung Galaxy Note 10 Plus in hand, just search in your search engine for:
```
Samsung Galaxy Note 10 Plus bloatware list
```
Provavelmente essas listas já terão inclusas todos os bloatwares das mais diversas regiões, lhe poupando o trabalho de buscar por alguma lista mais específica.
Caso seu aparelho seja muito recente, e/ou não encontre uma lista pronta de bloatwares, devo dizer que você acaba de pegar em merda, pois é chato para um caralho pesquisar por cada aplicação para saber sua função, se é essencial para o sistema ou se é facilmente substituível.
### De antemão já aviso, que mais para frente, caso vossa gostosura remova um desses aplicativos que era essencial para o sistema sem saber, vai acabar resultando na perda de alguma função importante, ou pior, ao reiniciar o aparelho o sistema pode estar quebrado, lhe obrigando a seguir com uma formatação, e repetir todo o processo novamente.
## Download do adb em computadores
Para usar a ferramenta do adb em computadores, basta baixar o pacote chamado SDK platform-tools, disponível através deste link: https://developer.android.com/tools/releases/platform-tools. Por ele, você consegue o download para Windows, Mac e Linux.
Uma vez baixado, basta extrair o arquivo zipado, contendo dentro dele uma pasta chamada platform-tools que basta ser aberta no terminal para se usar o adb.
## Download do adb em celulares com Termux.
Para usar a ferramenta do adb diretamente no celular, antes temos que baixar o app Termux, que é um emulador de terminal linux, e já possui o adb em seu repositório. Você encontra o app na Google Play Store, mas novamente recomendo baixar pela F-Droid ou diretamente no Github do projeto.
F-Droid: https://f-droid.org/en/packages/com.termux/
Github: https://github.com/termux/termux-app/releases
## The debloat process
Before we start, it is important to make clear that you should not just go removing every piece of bloatware right away without a second thought; after all, some of it needs to be replaced first, some may be essential to you for a given activity or function, and some is simply irreplaceable.
Some examples of bloatware that must be replaced before removal are the launcher, since it is the system's graphical interface, and the keyboard, without which you can only type using an external keyboard. The launcher and keyboard can be replaced by any others; my personal recommendation is to pick ones that respect your privacy, such as Pie Launcher and Simple Launcher for the launcher, and OpenBoard or FlorisBoard for the keyboard, all open source and available on F-Droid.
Go through the bloatware list and identify which apps you like, need, or would rather not replace; you are in no way obligated to remove every removable piece of bloatware, so tweak your system however you please. NetGuard lists every app on the phone along with its package name, which makes it easy to filter out which ones not to remove.
A clear example of bloatware that is irreplaceable, and therefore cannot be removed, is com.android.mtp, a protocol whose job is to handle communication between the device and a computer over USB, yet for some reason it has network access and talks frequently to external servers. For cases like this, the best solution really is to block those apps' network access with NetGuard.
MTP attempting to communicate with external servers:
## Running the adb shell
### On the computer
Back up all your important files to some external storage and format your phone with a hard reset. After the format, and after enabling USB debugging, connect your device to the PC with a USB cable. Your device will most likely just start charging, so allow data transfer so that the computer can communicate with the phone normally.
On the PC, open the platform-tools folder in a terminal and run the following command:
```
./adb start-server
```
The output should be:
*daemon not running; starting now at tcp:5037
*daemon started successfully
And if nothing shows up at all, run:
```
./adb kill-server
```
And start it again.
With adb connected to the phone, run:
```
./adb shell
```
so you can run commands directly on the device. In my case, my phone is a Redmi Note 8 Pro, codename Begonia.
The output should therefore be:
begonia:/ $
If you get an error like:
adb: device unauthorized.
This adb server’s $ADB_VENDOR_KEYS is not set
Try ‘adb kill-server’ if that seems wrong.
Otherwise check for a confirmation dialog on your device.
Check the phone to see whether a prompt appeared asking you to authorize USB debugging; if so, authorize it and try again. If nothing shows up, run the kill-server command and repeat the process.
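As an extra sanity check (not part of the original steps, just a suggestion), you can list the devices adb sees; an authorized phone shows up as "device", an unauthorized one as "unauthorized":
```
./adb devices
# example output:
# List of devices attached
# a1b2c3d4    device
```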
### On the phone
After doing the same backup and hard reset described earlier, install Termux and, with it open, run the command:
```
pkg install android-tools
```
When the message “Do you want to continue? [Y/n]” appears, just press Enter again to accept and finish the installation.
Now go to the developer options and enable wireless debugging. Inside the wireless debugging options there is an option to pair the device with a code, which will show you a pairing code along with an IP address and port that will be used to connect from Termux.
To make the process easier, I recommend opening both the settings and Termux at the same time and splitting the screen between the two apps, as shown below:
To pair Termux with the device, you don't need to type the IP address shown; just replace it with “localhost”. The port and the pairing code, however, must be typed exactly as shown. Run:
```
adb pair localhost:port pairing_code
```
Following the image shown earlier, the command would be “adb pair localhost:41255 757495”.
With the device paired with Termux, you now just need to connect in order to run commands. To do that, run:
```
adb connect localhost:port
```
Note: the port you enter in this command is not the same one shown alongside the pairing code, but the one shown on the main wireless debugging screen.
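Putting the two steps together, using the pairing values from the example above and a made-up connection port:
```
# the pairing port and code come from the pairing dialog,
# the connection port from the main wireless debugging screen (values are illustrative)
adb pair localhost:41255 757495
adb connect localhost:37099
```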
Done! Termux and adb are now connected to the device; just run adb shell as usual:
```
adb shell
```
## Removal in practice
With the adb shell running, you are ready to remove the bloatware. In my case I will only show the removal of a single app (Google Maps), since the command is the same for any other, changing only the package name.
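If you want to double-check a package name from the shell itself (instead of looking it up in NetGuard), adb's package manager can list everything that is installed; for example, filtering for Google packages:
```
pm list packages google
```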
Inside NetGuard, checking the information for Google Maps:
We can see that even when it is not in use, and with the device's location turned off, the app keeps desperately trying to talk to external servers and report who knows what. No surprises there; the important part is that we can see that the Google Maps package name is com.google.android.apps.maps, and to remove it from the phone, just run:
```
pm uninstall --user 0 com.google.android.apps.maps
```
And that's it, bloatware removed! Now just repeat the process for the rest of the bloatware, changing only the package name.
To speed things up, you can prepare a list of the commands in a notes app beforehand, and when you paste it into the terminal, they will run one after the other.
Example list:
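Something like this (the package names below are only illustrative; build your own list from your device's bloatware list):
```
pm uninstall --user 0 com.google.android.apps.maps
pm uninstall --user 0 com.google.android.youtube
pm uninstall --user 0 com.google.android.apps.docs
```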
If the damsel happened to remove something by accident, it is also possible to restore the package with the command:
```
cmd package install-existing package.name
```
## Post-debloat
After cleaning up your system as much as possible, reboot the device. If it boots into recovery mode and cannot reboot normally, it means you removed some app that was “essential” to the system, and you will have to format the device and repeat the whole removal again, this time removing only a few bloatware apps at a time and rebooting until you find out which one cannot be removed. Yes, it's a lot of work… well, you're the one who wanted privacy.
If the device reboots normally after the removal, congratulations, now just use your phone however you like! Keep NetGuard running at all times so the bloatware you couldn't remove won't communicate with external servers, start using open-source apps from F-Droid, and install other apps through the Aurora Store instead of the Google Play Store.
References:
If you happen to be an Australopithecus and found this guide difficult, here is a video lesson (3:14:40) by Anderson from the Ciberdef channel going through the whole process: http://odysee.com/@zai:5/Como-remover-at%C3%A9-200-APLICATIVOS-que-colocam-a-sua-PRIVACIDADE-E-SEGURAN%C3%87A-em-risco.:4?lid=6d50f40314eee7e2f218536d9e5d300290931d23
Anderson's PDFs mentioned in the video lesson: credits to anon6837264
http://eternalcbrzpicytj4zyguygpmkjlkddxob7tptlr25cdipe5svyqoqd.onion/file/3863a834d29285d397b73a4af6fb1bbe67c888d72d30/t-05e63192d02ffd.pdf
Termux and adb installation process on the phone: https://youtu.be/APolZrPHSms
-
nostr:npub1nkmta4dmsa7pj25762qxa6yqxvrhzn7ug0gz5frp9g7p3jdscnhsu049fn added support for [classified listings (NIP-99)](https://github.com/sovbiz/Cypher-Nostr-Edition/commit/cd3bf585c77a85421de031db1d8ebd3911ba670d) about a month ago and recently announced this update that allows for creating/editing listings and blog posts via the dashboard.
In other words, listings created on the website are also able to be viewed and edited on other Nostr apps like Amethyst and Shopstr. Interoperability FTW.
I took some screenshots to give you an idea of how things work.
The [home page](https://cypher.space/) is clean with the ability to search for profiles by name, npub, or Nostr address (NIP-05).
![](https://m.stacker.news/61893)
Clicking login allows signing in with a browser extension.
![](https://m.stacker.news/61894)
The dashboard gives an overview of the number of notes posted (both short and long form) and products listed.
![](https://m.stacker.news/61895)
Existing blog posts (i.e. long form notes) are synced.
![](https://m.stacker.news/61897)
Same for product listings. There’s a nice interface to create new ones and preview them before publishing.
![](https://m.stacker.news/61898)
![](https://m.stacker.news/61899)
That’s all for now. As you can see, super slick stuff!
Bullish on Cypher.
So much so I had to support the project and buy a subdomain. 😎
https://bullish.cypher.space
originally posted at https://stacker.news/items/760592
-
## "Files" by Google new feature
"Files" by Google added a "feature"... "Smart Search", you can toggle it to OFF and it is highly recommended to do so.
- ![image](https://yakihonne.s3.ap-east-1.amazonaws.com/35f3a26cd1e775ad16dad8cc8e8e9d48259fc8cb46230bc9663c812792ddf231/files/1697708380122-YAKIHONNES3.jpg)
Toggle Smart Search OFF; otherwise, Google will search and index every picture, video, and document on your device, no exceptions: anything you have ever photographed and forgotten about, any document or article you have downloaded, etc.
## How could this affect you?
Google is actively combating child abuse, and to that end it has built a very aggressive algorithm into its "AI" that searches for material "IT THINKS" is related, so the following content could be flagged:
- [ ] Pictures of you and your children at the beach
- [ ] Pictures or videos which are innocent in nature but the "AI" "thinks" are not
- [ ] Articles you may have saved for research to write your next essay that contain links to flagged information or sites
***The results:***
- [ ] Your google account will be canceled
- [ ] You will be flagged as a criminal across the digital world
You think this is nonsense? Think again:
https://www.nytimes.com/2022/08/21/technology/google-surveillance-toddler-photo.html
## How to switch it off:
1. Open files by Google
2. Tap on Menu -> Settings
3. Turn OFF Smart Search
## But you can do more for your privacy and the security of your family
1. Stop using Google apps; if possible, get rid of Google's OS and use GrapheneOS
2. Go to Settings -> Apps
3. Search for Files by Google
4. Uninstall the app; if you can't, disable it
5. Keep doing that with most Google apps that are not a must, if you have not already switched to GrapheneOS
***Remember: Google keeps advocating for privacy, but as many others have pointed out repeatedly, they are the first ones lobbying to strip your privacy away through regulation and draconian laws. Their hypocrisy knows no limits.***
## Recommendation:
I assume you have F-Droid installed on your Android device, or Obtainium if you are more advanced. If so, consider "Simple File Manager Pro" by Tibor Kaputa; this dev has a suite of apps covering basic needs (files, contacts, gallery, phone, etc.), and the best feature, in my opinion, is that not one of his apps connects to the internet.
- ![image](https://yakihonne.s3.ap-east-1.amazonaws.com/35f3a26cd1e775ad16dad8cc8e8e9d48259fc8cb46230bc9663c812792ddf231/files/1697709265617-YAKIHONNES3.jpg)
> Note
> Like most people, we love the convenience of technology; it makes our lives easier. However, our safety and our family's safety should come first. With technology being misused and abused by corporations and cyber-criminals data mining and scanning for easy targets to attack for profit, we need to keep our guard up. Learning is key: resist using new tech if you do not understand the privacy trade-offs, no matter how appealing and convenient it looks.
***Please leave your comments with your favorite FOSS Files app!***
-
# The case against edits
Direct edits are a centralizing force on Nostr, a slippery slope that should not be accepted.
Edits are fine in other, more specialized event kinds, but the `kind:1` space shouldn't be compromised with such a push towards centralization, because [`kind:1` is the public square of Nostr, where all focus should be on decentralization and censorship-resistance](cd8ce2b7).
- _Why?_
Edits introduce too much complexity. If edits are widespread, all clients now have to download dozens of extra events at the same time while users are browsing a big feed of notes which are already coming from dozens of different relays using complicated outbox-model-based querying, then for each event they have to open yet another subscription to these relays -- or perform some other complicated batching of subscriptions which then requires more complexity on the event handling side and then when associating these edits with the original events. I can only imagine this will hurt apps performance, but it definitely raises the barrier to entry and thus necessarily decreases Nostr decentralization.
Some clients may be implemented in such a way that they download tons of events and then store them in a local database, from which they then construct the feed that users see. Such clients may make edits potentially easier to deal with -- but this is hardly an answer to the point above, since such clients are already more complex to implement in the first place.
- _What do you have against complex clients?_
The point is not to say that all clients should be simple, but that it should be simple to write a client -- or at least as simple as physically possible.
You may not be thinking about it, but if you believe in the promise of Nostr then we should expect to see Nostr feeds in many other contexts other than on a big super app in a phone -- we should see Nostr notes being referenced from and injected in unrelated webpages, unrelated apps, hardware devices, comment sections and so on. All these micro-clients will have to implement some complicated edit-fetching logic now?
- _But aren't we already fetching likes and zaps and other things, why not fetch edits too?_
Likes, zaps and other similar things are optional. It's perfectly fine to use Nostr without seeing likes and/or zaps -- and, believe me, it does happen quite a lot. The point is basically that likes or zaps don't affect the content of the main post at all, while edits do.
- _But edits are optional!_
No, they are not optional. If edits become widespread they necessarily become mandatory. Any client that doesn't implement edits will be displaying false information to its users and their experience will be completely broken.
- _That's fine, as people will just move to clients that support edits!_
Exactly, that is what I expect to happen too, and this is why I am saying edits are a centralizing force that we should be fighting against, not embracing.
If you understand that edits are a centralizing force, then you must automatically agree that they aren't a desirable feature, given that if you are reading this now, with Nostr being so small, there is a 100% chance you care about decentralization and you're not just some kind of lazy influencer that is only doing this for money.
- _All other social networks support editing!_
This is not true at all. Bluesky has 10x more users than Nostr and doesn't support edits. Instagram doesn't support editing pictures after they're posted, and doesn't support editing comments. Tiktok doesn't support editing videos or comments after they're posted. YouTube doesn't support editing videos after they're posted. Most famously, email, the most widely used and widespread "social app" out there, does not support edits of any kind. Twitter didn't support edits for the first 15 years of its life, and, although some people complained, it didn't hurt the platform at all -- arguably it benefitted it.
If edits are such a straightforward feature to add that won't hurt performance, that won't introduce complexity, and also that is such an essential feature users could never live without them, then why don't these centralized platforms have edits on everything already? There must be something there.
- _Eventually someone will implement edits anyway, so why bother to oppose edits now?_
Once Nostr becomes big enough, maybe it will be already shielded from such centralizing forces by its sheer volume of users and quantity of clients, maybe not, we will see. All I'm saying is that we shouldn't just push for bad things now just because of a potential future in which they might come.
- _The market will decide what is better._
The market has decided for Facebook, Instagram, Twitter and TikTok. If we were to follow what the market had decided we wouldn't be here, and you wouldn't be reading this post.
- _OK, you have convinced me, edits are not good for the protocol. But what do we do about the users who just want to fix their typos?_
There are many ways. The annotations spec, for example, provides a simple way to append things to a note without being a full-blown edit, and they fall back gracefully to normal replies in clients that don't implement the full annotations spec.
Eventually we could have annotations that are expressed in form of simple (human-readable?) diffs that can be applied directly to the post, but fall back, again, to comments.
Besides these, a very simple idea that hasn't been tried on Nostr yet is one that has been tried for email and seems to work very well: delaying a post after the "submit" button is clicked and giving the user the opportunity to cancel and edit it again before it is actually posted.
Ultimately, if edits are so necessary, then maybe we could come up with a way to implement edits that is truly optional and falls back cleanly for clients that don't support them directly and don't hurt the protocol very much. Let's think about it and not rush towards defeat.
-
Last week, an investigation by Reuters revealed that Chinese researchers have been using open-source AI tools to build nefarious-sounding models that may have some military application.
The [reporting](https://www.reuters.com/technology/artificial-intelligence/chinese-researchers-develop-ai-model-military-use-back-metas-llama-2024-11-01/) purports that adversaries in the Chinese Communist Party and its military wing are taking advantage of the liberal software licensing of American innovations in the AI space, which could someday have capabilities to presumably harm the United States.
> In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the People’s Liberation Army’s (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta’s Llama as a base for what it calls “ChatBIT”.
>
> The researchers used an earlier Llama 13B large language model (LLM) from Meta, incorporating their own parameters to construct a military-focused AI tool to gather and process intelligence, and offer accurate and reliable information for operational decision-making.
While I’m doubtful that today’s existing chatbot-like tools will be the ultimate battlefield for a new geopolitical war (queue up the computer-simulated war from the Star Trek episode “A Taste of Armageddon“), this recent exposé requires us to revisit why large language models are released as open-source code in the first place.
Added to that, should it matter that an adversary is having a poke around and may ultimately use them for some purpose we may not like, whether that be China, Russia, North Korea, or Iran?
The number of open-source AI LLMs continues to grow each day, with projects like Vicuna, LLaMA, BLOOMB, Falcon, and Mistral available for download. In fact, there are over one million open-source LLMs available as of writing this post. With some decent hardware, every global citizen can download these codebases and run them on their computer.
With regard to this specific story, we could assume it to be a selective leak by a competitor of Meta which created the LLaMA model, intended to harm its reputation among those with cybersecurity and national security credentials. There are potentially trillions of dollars on the line.
Or it could be the revelation of something more sinister happening in the military-sponsored labs of Chinese hackers who have already been caught attacking American infrastructure, data, and yes, your credit history?
As consumer advocates who believe in the necessity of liberal democracies to safeguard our liberties against authoritarianism, we should absolutely remain skeptical when it comes to the communist regime in Beijing. We’ve written as much many times.
At the same time, however, we should not subrogate our own critical thinking and principles because it suits a convenient narrative.
Consumers of all stripes deserve technological freedom, and innovators should be free to provide that to us. And open-source software has provided the very foundations for all of this.
Open-source matters When we discuss open-source software and code, what we’re really talking about is the ability for people other than the creators to use it.
The various licensing schemes – ranging from GNU General Public License (GPL) to the MIT License and various public domain classifications – determine whether other people can use the code, edit it to their liking, and run it on their machine. Some licenses even allow you to monetize the modifications you’ve made.
While many different types of software will be fully licensed and made proprietary, restricting or even penalizing those who attempt to use it on their own, many developers have created software intended to be released to the public. This allows multiple contributors to add to the codebase and to make changes to improve it for public benefit.
Open-source software matters because anyone, anywhere can download and run the code on their own. They can also modify it, edit it, and tailor it to their specific need. The code is intended to be shared and built upon not because of some altruistic belief, but rather to make it accessible for everyone and create a broad base. This is how we create standards for technologies that provide the ground floor for further tinkering to deliver value to consumers.
Open-source libraries create the building blocks that decrease the hassle and cost of building a new web platform, smartphone, or even a computer language. They distribute common code that can be built upon, assuring interoperability and setting standards for all of our devices and technologies to talk to each other.
I am myself a proponent of open-source software. The server I run in my home has dozens of dockerized applications sourced directly from open-source contributors on GitHub and DockerHub. When there are versions or adaptations that I don’t like, I can pick and choose which I prefer. I can even make comments or add edits if I’ve found a better way for them to run.
Whether you know it or not, many of you run the Linux operating system as the base for your Macbook or any other computer and use all kinds of web tools that have active repositories forked or modified by open-source contributors online. This code is auditable by everyone and can be scrutinized or reviewed by whoever wants to (even AI bots).
This is the same software that runs your airlines, powers the farms that deliver your food, and supports the entire global monetary system. The code of the first decentralized cryptocurrency Bitcoin is also open-source, which has allowed thousands of copycat protocols that have revolutionized how we view money.
You know what else is open-source and available for everyone to use, modify, and build upon?
PHP, Mozilla Firefox, LibreOffice, MySQL, Python, Git, Docker, and WordPress. All protocols and languages that power the web. Friend or foe alike, anyone can download these pieces of software and run them how they see fit.
Open-source code is speech, and it is knowledge.
We build upon it to make information and technology accessible. Attempts to curb open-source, therefore, amount to restricting speech and knowledge.
Open-source is for your friends, and enemies In the context of Artificial Intelligence, many different developers and companies have chosen to take their large language models and make them available via an open-source license.
At this very moment, you can click on over to Hugging Face, download an AI model, and build a chatbot or scripting machine suited to your needs. All for free (as long as you have the power and bandwidth).
Thousands of companies in the AI sector are doing this at this very moment, discovering ways of building on top of open-source models to develop new apps, tools, and services to offer to companies and individuals. It’s how many different applications are coming to life and thousands more jobs are being created.
We know this can be useful to friends, but what about enemies?
As the AI wars heat up between liberal democracies like the US, the UK, and (sluggishly) the European Union, we know that authoritarian adversaries like the CCP and Russia are building their own applications.
The fear that China will use open-source US models to create some kind of military application is a clear and present danger for many political and national security researchers, as well as politicians.
A bipartisan group of US House lawmakers want to put export controls on AI models, as well as block foreign access to US cloud servers that may be hosting AI software.
If this seems familiar, we should also remember that the US government once classified cryptography and encryption as “munitions” that could not be exported to other countries (see The Crypto Wars). Many of the arguments we hear today were invoked by some of the same people as back then.
Now, encryption protocols are the gold standard for many different banking and web services, messaging, and all kinds of electronic communication. We expect our friends to use it, and our foes as well. Because code is knowledge and speech, we know how to evaluate it and respond if we need to.
Regardless of who uses open-source AI, this is how we should view it today. These are merely tools that people will use for good or ill. It’s up to governments to determine how best to stop illiberal or nefarious uses that harm us, rather than try to outlaw or restrict building of free and open software in the first place.
Limiting open-source threatens our own advancement If we set out to restrict and limit our ability to create and share open-source code, no matter who uses it, that would be tantamount to imposing censorship. There must be another way.
If there is a “Hundred Year Marathon” between the United States and liberal democracies on one side and autocracies like the Chinese Communist Party on the other, this is not something that will be won or lost based on software licenses. We need as much competition as possible.
The Chinese military has been building up its capabilities with trillions of dollars’ worth of investments that span far beyond AI chatbots and skip logic protocols.
The theft of intellectual property at factories in Shenzhen, or in US courts by third-party litigation funding coming from China, is very real and will have serious economic consequences. It may even change the balance of power if our economies and countries turn to war footing.
But these are separate issues from the ability of free people to create and share open-source code which we can all benefit from. In fact, if we want to continue our way our life and continue to add to global productivity and growth, it’s demanded that we defend open-source.
If liberal democracies want to compete with our global adversaries, it will not be done by reducing the freedoms of citizens in our own countries.
Last week, an investigation by Reuters revealed that Chinese researchers have been using open-source AI tools to build nefarious-sounding models that may have some military application.
The reporting purports that adversaries in the Chinese Communist Party and its military wing are taking advantage of the liberal software licensing of American innovations in the AI space, which could someday have capabilities to presumably harm the United States.
> In a June paper reviewed by[ Reuters](https://www.reuters.com/technology/artificial-intelligence/chinese-researchers-develop-ai-model-military-use-back-metas-llama-2024-11-01/), six Chinese researchers from three institutions, including two under the People’s Liberation Army’s (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta’s Llama as a base for what it calls “ChatBIT”.
>
> The researchers used an earlier Llama 13B large language model (LLM) from Meta, incorporating their own parameters to construct a military-focused AI tool to gather and process intelligence, and offer accurate and reliable information for operational decision-making.
While I’m doubtful that today’s existing chatbot-like tools will be the ultimate battlefield for a new geopolitical war (cue up the computer-simulated war from the *Star Trek* episode “[A Taste of Armageddon](https://en.wikipedia.org/wiki/A_Taste_of_Armageddon)“), this recent exposé requires us to revisit why large language models are released as open-source code in the first place.
Added to that, should it matter that an adversary is having a poke around and may ultimately use them for some purpose we may not like, whether that be China, Russia, North Korea, or Iran?
The number of open-source AI LLMs continues to grow each day, with projects like Vicuna, LLaMA, BLOOMB, Falcon, and Mistral available for download. In fact, there are over [one million open-source LLMs](https://huggingface.co/models) available as of writing this post. With some decent hardware, every global citizen can download these codebases and run them on their computer.
With regard to this specific story, we could assume it to be a selective leak by a competitor of Meta which created the LLaMA model, intended to harm its reputation among those with cybersecurity and national security credentials. There are [potentially](https://bigthink.com/business/the-trillion-dollar-ai-race-to-create-digital-god/) trillions of dollars on the line.
Or it could be the revelation of something more sinister happening in the military-sponsored labs of Chinese hackers who have already been caught attacking American[ infrastructure](https://www.nbcnews.com/tech/security/chinese-hackers-cisa-cyber-5-years-us-infrastructure-attack-rcna137706),[ data](https://www.cnn.com/2024/10/05/politics/chinese-hackers-us-telecoms/index.html), and yes, [your credit history](https://thespectator.com/topic/chinese-communist-party-credit-history-equifax/)?
**As consumer advocates who believe in the necessity of liberal democracies to safeguard our liberties against authoritarianism, we should absolutely remain skeptical when it comes to the communist regime in Beijing. We’ve written as much[ many times](https://consumerchoicecenter.org/made-in-china-sold-in-china/).**
At the same time, however, we should not subrogate our own critical thinking and principles because it suits a convenient narrative.
Consumers of all stripes deserve technological freedom, and innovators should be free to provide that to us. And open-source software has provided the very foundations for all of this.
## **Open-source matters**
When we discuss open-source software and code, what we’re really talking about is the ability for people other than the creators to use it.
The various [licensing schemes](https://opensource.org/licenses) – ranging from GNU General Public License (GPL) to the MIT License and various public domain classifications – determine whether other people can use the code, edit it to their liking, and run it on their machine. Some licenses even allow you to monetize the modifications you’ve made.
While many different types of software will be fully licensed and made proprietary, restricting or even penalizing those who attempt to use it on their own, many developers have created software intended to be released to the public. This allows multiple contributors to add to the codebase and to make changes to improve it for public benefit.
Open-source software matters because anyone, anywhere can download and run the code on their own. They can also modify it, edit it, and tailor it to their specific need. The code is intended to be shared and built upon not because of some altruistic belief, but rather to make it accessible for everyone and create a broad base. This is how we create standards for technologies that provide the ground floor for further tinkering to deliver value to consumers.
Open-source libraries create the building blocks that decrease the hassle and cost of building a new web platform, smartphone, or even a computer language. They distribute common code that can be built upon, assuring interoperability and setting standards for all of our devices and technologies to talk to each other.
I am myself a proponent of open-source software. The server I run in my home has dozens of dockerized applications sourced directly from open-source contributors on GitHub and DockerHub. When there are versions or adaptations that I don’t like, I can pick and choose which I prefer. I can even make comments or add edits if I’ve found a better way for them to run.
Whether you know it or not, many of you run the Linux operating system as the base for your Macbook or any other computer and use all kinds of web tools that have active repositories forked or modified by open-source contributors online. This code is auditable by everyone and can be scrutinized or reviewed by whoever wants to (even AI bots).
This is the same software that runs your airlines, powers the farms that deliver your food, and supports the entire global monetary system. The code of the first decentralized cryptocurrency Bitcoin is also [open-source](https://github.com/bitcoin), which has allowed [thousands](https://bitcoinmagazine.com/business/bitcoin-is-money-for-enemies) of copycat protocols that have revolutionized how we view money.
You know what else is open-source and available for everyone to use, modify, and build upon?
PHP, Mozilla Firefox, LibreOffice, MySQL, Python, Git, Docker, and WordPress. All protocols and languages that power the web. Friend or foe alike, anyone can download these pieces of software and run them how they see fit.
Open-source code is speech, and it is knowledge.
We build upon it to make information and technology accessible. Attempts to curb open-source, therefore, amount to restricting speech and knowledge.
## **Open-source is for your friends, and enemies**
In the context of Artificial Intelligence, many different developers and companies have chosen to take their large language models and make them available via an open-source license.
At this very moment, you can click on over to[ Hugging Face](https://huggingface.co/), download an AI model, and build a chatbot or scripting machine suited to your needs. All for free (as long as you have the power and bandwidth).
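For instance, here is a minimal sketch of grabbing a model from the command line (assuming Python is installed; the model id is only an illustrative example):
```
pip install -U "huggingface_hub[cli]"
huggingface-cli download mistralai/Mistral-7B-Instruct-v0.2 --local-dir ./mistral-7b
```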
Thousands of companies in the AI sector are doing this at this very moment, discovering ways of building on top of open-source models to develop new apps, tools, and services to offer to companies and individuals. It’s how many different applications are coming to life and thousands more jobs are being created.
We know this can be useful to friends, but what about enemies?
As the AI wars heat up between liberal democracies like the US, the UK, and (sluggishly) the European Union, we know that authoritarian adversaries like the CCP and Russia are building their own applications.
The fear that China will use open-source US models to create some kind of military application is a clear and present danger for many political and national security researchers, as well as politicians.
A bipartisan group of US House lawmakers want to put [export controls](https://www.reuters.com/technology/us-lawmakers-unveil-bill-make-it-easier-restrict-exports-ai-models-2024-05-10/) on AI models, as well as block foreign access to US cloud servers that may be hosting AI software.
If this seems familiar, we should also remember that the US government once classified cryptography and encryption as “munitions” that could not be exported to other countries (see[ The Crypto Wars](https://en.wikipedia.org/wiki/Export_of_cryptography_from_the_United_States)). Many of the arguments we hear today were invoked by some of the same people as back then.
Now, encryption protocols are the gold standard for many different banking and web services, messaging, and all kinds of electronic communication. We expect our friends to use it, and our foes as well. Because code is knowledge and speech, we know how to evaluate it and respond if we need to.
Regardless of who uses open-source AI, this is how we should view it today. These are merely tools that people will use for good or ill. It’s up to governments to determine how best to stop illiberal or nefarious uses that harm us, rather than try to outlaw or restrict building of free and open software in the first place.
## **Limiting open-source threatens our own advancement**
If we set out to restrict and limit our ability to create and share open-source code, no matter who uses it, that would be tantamount to imposing censorship. There must be another way.
If there is a “[Hundred Year Marathon](https://www.amazon.com/Hundred-Year-Marathon-Strategy-Replace-Superpower/dp/1250081343)” between the United States and liberal democracies on one side and autocracies like the Chinese Communist Party on the other, this is not something that will be won or lost based on software licenses. We need as much competition as possible.
The Chinese military has been building up its capabilities with [trillions of dollars’](https://www.economist.com/china/2024/11/04/in-some-areas-of-military-strength-china-has-surpassed-america) worth of investments that span far beyond AI chatbots and skip logic protocols.
The [theft](https://www.technologyreview.com/2023/06/20/1075088/chinese-amazon-seller-counterfeit-lawsuit/) of intellectual property at factories in Shenzhen, or in US courts by [third-party litigation funding](https://nationalinterest.org/blog/techland/litigation-finance-exposes-our-judicial-system-foreign-exploitation-210207) coming from China, is very real and will have serious economic consequences. It may even change the balance of power if our economies and countries turn to war footing.
But these are separate issues from the ability of free people to create and share open-source code which we can all benefit from. In fact, if we want to preserve our way of life and continue to add to global productivity and growth, we must defend open-source.
If liberal democracies want to compete with our global adversaries, it will not be done by reducing the freedoms of citizens in our own countries.
*Originally published on the website of the [Consumer Choice Center](https://consumerchoicecenter.org/open-source-is-for-everyone-even-your-adversaries/).*
-
## Chef's notes
2 cups pureed pumpkin
2 cups white sugar
3 eggs, beaten
1/2 cup olive oil
1 tablespoon cinnamon
1 1/2 teaspoon baking powder
1 teaspoon baking soda
1/2 teaspoon salt
2 1/4 cups flour
1/4 cup chopped pecans
Preheat oven to 350 degrees Fahrenheit. Coat a bread pan with olive oil. Mix all ingredients except the nuts, leaving the flour for last and adding it in 2 parts. Stir until combined. Pour the batter into the bread pan and sprinkle the chopped nuts down the middle of the batter, lengthwise. Bake for an hour and ten minutes or until a knife inserted in the middle of the loaf comes out clean. Allow the bread to cool in the pan for five minutes, then use a knife to loosen the edges of the loaf and pop it out of the pan. Rest on a rack or plate until cool enough to slice. Spread the slices with Plugra butter and serve.
https://cookeatloveshare.com/pumpkin-bread/
## Details
- ⏲️ Prep time: 30 minutes
- 🍳 Cook time: 1 hour and ten minutes
- 🍽️ Servings: 6
## Ingredients
- See Chef's Notes
## Directions
1. See Chef's Notes
-
> ### Third-party API collection:
---
Disclaimer:
The OpenAI API keys recommended here are provided by third-party resellers, so we take no responsibility for the validity or security of these API keys; you bear the risk of purchasing and using them yourself.
| Provider | Notes | Proxy address | Link |
| --- | --- | --- | --- |
| AiHubMix | Uses the OpenAI enterprise endpoint; all models site-wide are priced at 86% of the official rate (GPT-4 included) | https://aihubmix.com/v1 | [Official site](https://aihubmix.com?aff=mPS7) |
| OpenAI-HK | OpenAI's official API billing charges by the token length of each request and its response. Each model has its own pricing, billed per 1,000 tokens consumed; 1,000 tokens is roughly 750 English words (about 400 Chinese characters) | https://api.openai-hk.com/ | [Official site](https://openai-hk.com/?i=45878) |
| CloseAI | CloseAI is the largest commercial-grade OpenAI proxy platform in China and the country's first professional OpenAI relay service. Aimed at enterprise-level commercial use, it provides businesses with a high-quality, stable relay of the official OpenAI API and serves as the dedicated partner platform for over a hundred companies and several research institutions | https://api.openai-proxy.org | [Official site](https://www.closeai-asia.com/) |
| OpenAI-SB | Requires Telegram to obtain an API key | https://api.openai-sb.com | [Official site](https://www.openai-sb.com/) |
`Continuously updated...`
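As a rough sketch of how these proxies are typically used: the request follows the standard OpenAI chat-completions format, with only the base URL swapped for the provider's proxy address (`YOUR_API_KEY` is a placeholder for a key purchased from that provider):
```
curl https://aihubmix.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello"}]}'
```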
---
### Promotion:
If you can't reach OpenAI, buy a VPN from `低调云`.
Official site: https://didiaocloud.xyz
Invite code: `w9AjVJit`
Prices start as low as 1 yuan.
-
One year ago I wrote the article [Why Nostr resonates](https://sebastix.nl/blog/why-nostr-resonates/) in Dutch and English after I visited the Bitcoin Amsterdam 2023 conference and the Nostrdam event. It got published at [bitcoinfocus.nl](https://bitcoinfocus.nl/2023/11/02/278-waarom-nostr-resoneert/) (translated into Dutch). The main reason I wrote that piece is that my gut feeling was telling me that Nostr is going to change many things on the web.
After the article was published, one of the first things I did was setting up this page on my website: [https://sebastix.nl/nostr-research-and-development](https://sebastix.nl/nostr-research-and-development). The page contains this section (which I updated on 31-10-2024):
![](https://nostrver.se/sites/default/files/2024-11/Swf2djYX.png)
One metric I would like to highlight is the number of repositories on Github. Compared to a year ago, there are already more than 1130 repositories now on Github tagged with Nostr. Let's compare this number to other social media protocols and decentralized platforms (24-10-2024):
* Fediverse: 522
* ATProto: 159
* Scuttlebot: 49
* Farcaster: 202
* Mastodon: 1407
* ActivityPub: 444
Nostr is growing. FYI, there are many Nostr repositories not hosted on Github, so the total number of Nostr repositories is higher. I know that many devs use their own Git servers to host them. We're even capable of setting up Nostr-native Git repositories (for example, see [https://gitworkshop.dev/repos](https://gitworkshop.dev/repos)). Eventually, Nostr will make Github (and other platforms) obsolete.
Let me continue summarizing my personal Nostr highlights of last year.
## Organising Nostr meetups
![](https://nostrver.se/sites/default/files/2024-10/24-03-19%2022-43-27%200698.png)
This is me playing around with the NostrDebug tool showing how you can query data from Nostr relays. Jurjen is standing behind me. He is one of the people I've met this year who I'm sure I will have a long-term friendship with.
## OpenSats grant for Nostr-PHP
![](https://nostrver.se/sites/default/files/2024-07/open_sats_cover.jpeg)
![](https://nostrver.se/sites/default/files/2024-10/Screen-Shot-2024-10-24-22-23-05.07.png)
In December 2023 I submitted my application for an OpenSats grant for the further development of the Nostr-PHP helper library. After some months I finally got the message that my application was approved... When I got the message I was really stoked and excited. It's a great form of appreciation for the work I had done so far, and with this grant I get the opportunity to take the work to a higher level. So please check out the work done so far:
* [https://nostr-php.dev](https://nostr-php.dev)
* [https://github.com/nostrver-se/nostr-php](https://github.com/nostrver-se/nostr-php)
## Meeting Dries
![](https://nostrver.se//sites/default/files/2024-07/24-06-12%2012-41-09%201055.jpg)
One of my goosebump moments came in 2022 when I saw that Dries Buytaert, the founder and tech lead of Drupal, posted '[Nostr, love at first sight](https://dri.es/nostr-love-at-first-sight)' on his blog. These are the very rare moments where two different worlds merge where I wouldn't expect it. Later on I noticed that Dries would come to the yearly Dutch Drupal event. For me this was a perfect opportunity to meet him in person and have some Nostr talks. I admire the work he is doing for Drupal and the community. I hope we can bridge Nostr to Drupal in some way. In general this applies to any FOSS project out there.
[Here](https://sebastix.nl/blog/photodump-and-highlights-drupaljam-2024/) is my recap of that Drupal event.
## Attending Nostriga
![](https://nostrver.se/sites/default/files/2024-08/IMG_1432%20groot.jpeg)
A conference where history is made and written. I felt it immediately at the first sessions I attended. I will never forget the days I had at Nostriga. I don't have the words to describe what it brought to me.
![](https://nostrver.se/sites/default/files/2024-10/IMG_1429.jpg)
I also pushed myself out of my comfort zone by giving a keynote called 'POSSE with Nostr - how we pivot away from API's with one of Nostr superpowers'. I'm not sure if this is something I would do again, but I've learned a lot from it.
You can find the presentation [here](https://nostriga.nostrver.se/). It is recorded, but I'm not sure if and when it gets published.
## Nostr billboard advertisement
![](https://nostrver.se/sites/default/files/2024-09/DSC02814_0.JPG)
This advertisement was shown on a billboard beside the [A58 highway in The Netherlands](https://www.google.nl/maps/@51.5544315,4.5607291,3a,75y,34.72h,93.02t/data=!3m6!1e1!3m4!1sdQv9nm3J9SdUQCD0caFR-g!2e0!7i16384!8i8192?coh=205409&entry=ttu) from September 2nd till September 16th 2024. You can find all the assets and more footage of the billboard ad here: [https://gitlab.com/sebastix-group/nostr/nostr-ads](https://gitlab.com/sebastix-group/nostr/nostr-ads). My goal was to set an example of how we could promote Nostr in more traditional ways and inspire others to do the same. In Brazil a fundraiser was completed to do something similar there: [https://geyser.fund/project/nostrifybrazil](https://geyser.fund/project/nostrifybrazil).
## Volunteering at Nostr booths growNostr
![Bitcoin Amsterdam 2024](https://nostrver.se/sites/default/files/2024-10/IMG_1712.jpeg)
This was such a great motivating experience. Attending as a volunteer at the Nostr booth during the Bitcoin Amsterdam 2024 conference. Please read my note with all the lessons I learned [here](https://nostrver.se/note/my-learned-nostr-lessons-nostr-booth-bitcoin-amsterdam-2024).
## The other stuff
* The Nostr related blog articles I wrote past year:
* [**Run a Nostr relay with your own policies**](https://sebastix.nl/blog/run-a-nostr-relay-with-your-own-policies/) (02-04-2024)
* [**Why social networks should be based on commons**](https://sebastix.nl/blog/why-social-networks-should-be-based-on-commons/) (03-01-2024)
* [**How could Drupal adopt Nostr?**](https://sebastix.nl/blog/how-could-drupal-adopt-nostr/) (30-12-2023)
* [**Nostr integration for CCHS.social**](https://sebastix.nl/blog/nostr-integration-for-cchs-social-drupal-cms/) (21-12-2023)
* [https://ccns.nostrver.se](https://ccns.nostrver.se)
CCNS stands for Community Curated Nostr Stuff. At the end of 2023 I started to build this project. I forked an existing Drupal project of mine (https://cchs.social) to create a link aggregation website inspired by stacker.news. At the beginning of 2024 I also joined the TopBuilder 2024 contest which was a productive period getting to know new people in the Bitcoin and Nostr space.
* [https://nuxstr.nostrver.se](https://nuxstr.nostrver.se)
PHP is not the only language I use to build stuff. As a fullstack web developer I also work with JavaScript. Many Nostr clients are made with JavaScript frameworks or other more client-side focused tools. Vuejs is currently the JavaScript framework I'm most comfortable with. With Vuejs I started tinkering with Nuxt combined with NDK, and so I created a starter template for Vue / Nuxt developers.
* [ZapLamp](nostr:npub1nfrsmpqln23ls7y3e4m29c22x3qaq9wmmr7zkfcttty2nk2kd6zs9re52s)
This is a neat DIY package from LNbits. Powered by an Arduino ESP32 dev board it was running a 24/7 livestream on zap.stream at my office. It flashes when you send a zap to the npub of the ZapLamp.
* [https://nosto.re](https://nosto.re)
Since the beginning when the Blossom spec was published by @hzrd49 and @StuartBowman I immediately took the opportunity to tinker with it. I'm also running a relay for transmitting Blossom Nostr events `wss://relay.nosto.re`.
* [Relays I maintain](https://nostrver.se/note/relays-i-maintain)
I really enjoy tinkering with different relay implementations. Relays are the fundamental base layer that makes Nostr work.
I'm still sharing my contributions on [https://nostrver.se/](https://nostrver.se/) where I publish the Nostr-related stuff I work on each week. This website is built with Drupal, where I use the Nostr Simple Publish and Nostr long-form content NIP-23 modules to crosspost the notes and long-form content to the Nostr network (like this piece of content you're reading).
![POSSE](https://nostrver.se/sites/default/files/2024-10/Screen-Shot-2024-10-30-23-23-18.png)
## The Nostr is the people
Just like the web, Nostr is people. The web is people: [https://www.youtube.com/watch?v=WCgvkslCzTo](https://www.youtube.com/watch?v=WCgvkslCzTo)
> the people on nostr are some of the smartest and coolest i’ve ever got to know. who cares if it doesn’t take over the world. It’s done more than i could ever ask for. - [@jb55](nostr:note1fsfqja9kkvzuhe5yckff3gkkeqe7upxqljg2g4nkjzp5u9y7t25qx43uch)
Here are some Nostriches who I'm happy to have met and who influenced my journey in Nostr in a positive way.
* Jurjen
* Bitpopart
* Arjen
* Jeroen
* Alex Gleason
* Arnold Lubach
* Nathan Day
* Constant
* fiatjaf
* Sync
## Coming year
Generally I will continue doing what I've done last year. Besides the time I spent on Nostr stuff, I'm also very busy with Drupal related work for my customers. I hope I can get the opportunity to work on a paid client project related to Nostr. It will be even better when I can combine my Drupal expertise with Nostr for projects paid by customers.
### Building a new Nostr application
When I look at my Nostr backlog where I just put everything in with ideas and notes, there are quite some interesting concepts there for building new Nostr applications. Filtering out, I think these three are the most exciting ones:
* nEcho, a micro app for optimizing your reach via Nostr (NIP-65)
* Nostrides.cc platform where you can share Nostr activity events (NIP-113)
* A child-friendly video web app with parent-curated content (NIP-71)
### Nostr & Drupal
When working out a new idea for a Nostr client, I try to combine my areas of expertise into one solution. That's why I also build and maintain some Nostr contrib modules for Drupal.
* [Nostr Simple Publish](https://www.drupal.org/project/nostr_simple_publish)
Drupal module to cross-post notes from Drupal to Nostr
* [Nostr long-form content NIP-23](https://www.drupal.org/project/nostr_content_nip23)
Drupal module to cross-post Markdown formatted content from Drupal to Nostr
* [Nostr internet identifier NIP-05](https://www.drupal.org/project/nostr_id_nip05)
Drupal module to setup Nostr internet identifier addresses with Drupal.
* [Nostr NDK](https://drupal.org/project/nostr_dev_kit)
Includes the Javascript library Nostr Dev Kit (NDK) in a Drupal project.
One of my (very) ambitious goals is to build a Drupal powered Nostr (website) package with the following main features:
* Able to login into Drupal with your Nostr keypair
* Cross-post content to the Nostr network
* Fetch your Nostr content from the Nostr network
* Serve as a content management system (CMS) for your Nostr events
* Serve as a framework to build a hybrid Nostr web application
* Run and maintain a Nostr relay with custom policies
* Usable as a feature rich progressive web app
* Use it as a remote signer
These are just some random ideas as my Nostr + Drupal backlog is way longer than this.
### Nostr-PHP
With all the newly added and continuously updated NIPs in the protocol, this helper library will never be finished. As the sole maintainer of this library, I would like to invite others to join as maintainers or simply contribute to it. PHP is big on the web, but there are not many PHP developers actively using Nostr yet. PHP as a programming language is also really pushing forward and keeping up with the latest innovations.
### Grow Nostr outside the Bitcoin community
We are working on a submission to host a Nostr stand at FOSDEM 2025. If approved, it will be the first time (as far as I know) that Nostr is present at a conference outside the context of Bitcoin. The audience at FOSDEM is mostly technically oriented, so I'm really curious what type of feedback we will receive.
Let's finish this article with some random Nostr photos from last year. Cheers!
![Nostriches](https://nostrver.se/sites/default/files/inline-images/IMG_1436.jpg)
![Explaining Nostr](https://nostrver.se/sites/default/files/2024-07/Screen-Shot-2024-07-12-15-47-58.52.png)
![](https://nostrver.se/sites/default/files/2024-10/IMG_0979%20groot.jpeg)
![ZapLamp](https://nostrver.se//sites/default/files/2024-10/IMG_0997%20groot.jpeg)
![With Nathan Day](https://nostrver.se/sites/default/files/2024-10/IMG_0942.PNG)
![Alex Gleason](https://nostrver.se/sites/default/files/2024-10/20240905_alex-gleason.jpeg)
![](https://nostrver.se//sites/default/files/2024-10/IMG_DB4022599FAA-1%20groot.jpeg)
-
**Amber**
[Amber](https://github.com/greenart7c3/Amber) is a Nostr event signer for Android that allows users to securely segregate their private key (nsec) within a single, dedicated application. Designed to function as a NIP-46 signing device, Amber ensures your smartphone can sign events without needing external servers or additional hardware, keeping your private key exposure to an absolute minimum. This approach aligns with the security rationale of NIP-46, which states that each additional system handling private keys increases potential vulnerability. With Amber, no longer do users need to enter their private key into various Nostr applications.
<img src="https://cdn.satellite.earth/b42b649a16b8f51b48f482e304135ad325ec89386b5614433334431985d4d60d.jpg">
Amber is supported by a growing list of apps, including [Amethyst](https://www.amethyst.social/), [0xChat](https://0xchat.com/#/), [Voyage](https://github.com/dluvian/voyage), [Fountain](https://fountain.fm/), and [Pokey](https://github.com/KoalaSat/pokey), as well as any web application that supports NIP-46 NSEC bunkers, such as [Nostr Nests](https://nostrnests.com), [Coracle](https://coracle.social), [Nostrudel](https://nostrudel.ninja), and more. With expanding support, Amber provides an easy solution for secure Nostr key management across numerous platforms.
<img src="https://cdn.satellite.earth/5b5d4fb9925fabb0005eafa291c47c33778840438438679dfad5662a00644c90.jpg">
Amber supports both native and web-based Nostr applications, aiming to eliminate the need for browser extensions or web servers. Key features include offline signing, multiple account support, NIP-46 compatibility, and a simple UI for granular permissions management. Amber is designed to support signing events in the background, enhancing flexibility when you select the "remember my choice" option and eliminating the need to constantly sign events for applications that you trust. You can download the app from its [GitHub](https://github.com/greenart7c3/Amber) page, via [Obtainium](https://github.com/ImranR98/Obtainium) or Zap.store.
To log in with Amber, simply tap the "Login with Amber" button or icon in a supported application, or you can paste the NSEC bunker connection string directly into the login box. For example, use a connection string like this: bunker://npub1tj2dmc4udvgafxxxxxxxrtgne8j8l6rgrnaykzc8sys9mzfcz@relay.nsecbunker.com.
<img src="https://cdn.satellite.earth/ca2156bfa084ee16dceea0739e671dd65c5f8d92d0688e6e59cc97faac199c3b.jpg">
---
**Citrine**
[Citrine](https://github.com/greenart7c3/Citrine) is a Nostr relay built specifically for Android, allowing Nostr clients on Android devices to seamlessly send and receive events through a relay running directly on their smartphone. This mobile relay setup offers Nostr users enhanced flexibility, enabling them to manage, share, and back up all their Nostr data locally on their device. Citrine’s design supports independence and data security by keeping data accessible and under user control.
<img src="https://cdn.satellite.earth/46bbc10ca2efb3ca430fcb07ec3fe6629efd7e065ac9740d6079e62296e39273.jpg">
With features tailored to give users greater command over their data, Citrine allows easy export and import of the database, restoration of contact lists in case of client malfunctions, and detailed relay management options like port configuration, custom icons, user management, and on-demand relay start/stop. Users can even activate TOR access, letting others connect securely to their Nostr relay directly on their phone. Future updates will include automatic broadcasting when the device reconnects to the internet, along with content resolver support to expand its functionality.
Once you have your Citrine relay fully configured, simply add it to the Private and Local relay sections in Amethyst's relay configuration.
<img src="https://cdn.satellite.earth/6ea01b68009b291770d5b11314ccb3d7ba05fe25cb783e6e1ea977bb21d55c09.jpg">
---
**Pokey**
[Pokey](https://github.com/KoalaSat/pokey) for Android is a brand new, real-time notification tool for Nostr. Pokey allows users to receive live updates for their Nostr events and enables other apps to access and interact with them. Designed for seamless integration with a user's Nostr relays, Pokey lets users stay informed of activity as it happens, with speed and the flexibility to manage which events trigger notifications on their mobile device.
<img src="https://cdn.satellite.earth/62ec76cc36254176e63f97f646a33e2c7abd32e14226351fa0dd8684177b50a2.jpg">
Pokey currently supports connections with Amber, offering granular notification settings so users can tailor alerts to their preferences. Planned features include broadcasting events to other apps, authenticating to relays, built-in Tor support, multi-account handling, and InBox relay management. These upcoming additions aim to make Pokey a fantastic tool for Nostr notifications across the ecosystem.
---
**Zap.store**
[Zap.store](https://github.com/zapstore/zapstore/) is a permissionless app store powered by Nostr and your trusted social graph. Built to offer a decentralized approach to app recommendations, zap.store enables you to check if friends like Alice follow, endorse, or verify an app’s SHA256 hash. This trust-based, social proof model brings app discovery closer to real-world recommendations from friends and family, bypassing centralized app curation. Unlike conventional app stores and other third party app store solutions like Obtainium, zap.store empowers users to see which apps their contacts actively interact with, providing a higher level of confidence and transparency.
<img src="https://cdn.satellite.earth/fd162229a404b317306916ae9f320a7280682431e933795f708d480e15affa23.jpg">
Currently available on Android, zap.store aims to expand to desktop, PWAs, and other platforms soon. You can get started by installing [Zap.store](https://github.com/zapstore/zapstore/) on your favorite Android device and installing all of the applications mentioned above.
---
Android's openness goes hand in hand with Nostr's openness. Enjoy exploring both expanding ecosystems.
-
This update marks a major milestone for the project. I know, with certainty, that MLS messaging over Nostr is going to work. That might sound a little crazy after so many months working on the project, and I was pretty confident, but until you’ve got running code, it’s all conjecture.
Late last week, I released a video of a working prototype of [White Noise](https://github.com/erskingardner/whitenoise) that shows the full flow; creating groups, inviting other users to join those groups, accepting invites, and sending messages back-and-forth. I’m thrilled that I’ve gotten this far but also appalled that it’s taken so long and disgusted at the state of the code in the app (I’ve been told I have unrelenting standards 😅).
If you missed the video last week...
nostr:note125cuk0zetc7sshw52v5zaq9apq3rq7e2x587tr2c96t7z7sjs59svwv0fj
## What's Next?
In this update, I want to cover a few things about how I'm planning to proceed and how I’m splitting code out of the app into libraries that will help other developers implement MLS messaging in their own Nostr clients.
First off, many of you know that I've been building White Noise as a Rust app using the [Tauri](https://tauri.app) framework. The [OpenMLS](https://github.com/openmls/openmls) implementation is also written in Rust (with bindings for many other languages). So, when you hear me talking about library code, think Rust crates for now.
The first library, called [openmls-nostr](https://github.com/erskingardner/openmls-nostr), is an extension/abstraction on top of the openmls implementation of the MLS spec that helps Nostr clients interact more easily with that implementation in a way that feels native to Nostr. Mostly this will be helping developers interact with MLS primitives and ensure that they’re creating, validating, and serializing these objects in the right way at the right times.
The second isn't a new library but rather a sizable contribution to the already excellent [rust-nostr](https://github.com/rust-nostr) library from nostr:npub1drvpzev3syqt0kjrls50050uzf25gehpz9vgdw08hvex7e0vgfeq0eseet. The methods that will go in rust-nostr are highly abstracted and based specifically on the requirements of NIP-104. Mostly this will be helping developers take those MLS primitives and publish or query them as Nostr events at the right times and to/from the right relays.
Most of this code was originally written directly in the White Noise library so this week I've started to pull code for both of those libraries out and move it to its new home. While I’ve been at it, I've been writing some tests and trying to document things.
An unfortunate offshoot of this is that the usable builds of White Noise are going to take a touch longer. I promise it’s still a very high priority but at this point I need to clean a few things up based on what I've learned thus far.
Another thing that is slowing down the release is that, behind the scenes of the dev work, I've been battling with Apple for nearly 2 months now to get a proper developer team set up so that we can publish the app via TestFlight for MacOS and iOS. I've also been recently learning the intricacies of Android publishing (oh my dear god there are so many devices, OS versions, etc.).
With that in mind, if you know anyone who can help get me up to speed on CI/CD, release pipelines, and multi-platform distribution please hit me up. I would love to learn more and hopefully shortcut some of the pain.
Thanks again so much for all the support over the last few months! It means a lot to me and is a huge part of what is keeping me going on this. 🙏
-
# `kind:1` maximalism and the future of other stuff and Nostr decentralization
These two problems exist on Nostr today, and they look unrelated at first:
1. People adding more stuff to `kind:1` notes, such as making them editable, or adding special quirky syntax that has to be parsed and rendered in complicated UIs;
2. The _discovery_ of "other stuff" content (i.e. long-form articles, podcasts, calendar events, livestreams etc) is hard, because most people only use microblogging clients, where that content often doesn't appear.
Point **2** above has 3 different solutions:
- **a.** Just publish everything as `kind:1` notes;
- **b.** Publish different things as different kinds, but make microblogging clients fetch all the event kinds from people you follow, then render them natively, or use NIP-31 or NIP-89 to point users to other clients that would render them better;
- **c.** Publish different things as different kinds, and reference them in `kind:1` notes that would act as announcements to these other events, also relying on NIP-31 and NIP-89 for displaying references and recommending other clients.
Solution **a** is obviously very bad, so I won't address it.
For a while I have believed solution **b** was the correct one, and many others seem to tacitly agree with it, given that some clients have been fetching more and more event kinds and going out of their way to render them in the same feed where only `kind:1` notes were originally expected to be.
I don't think clients doing that is necessarily bad, but I do think this has some centralizing effects on the protocol, as it pushes clients to become bigger and bigger, raising the barrier to entry into the `kind:1` realm. And also in the past I have talked about the fact that I disliked that some clients would display my long-form articles as if they were normal `kind:1` notes and just dump them into the feeds of whoever was following me: nostr:nevent1qqsdk90k9k30vtzwpj6grxys9mvsegu5kkwd4jmpyhlmtjnxet2rvggprpmhxue69uhhyetvv9ujumn0wdmksetjv5hxxmmdqy8hwumn8ghj7mn0wd68ytnddaksygpm7rrrljungc6q0tuh5hj7ue863q73qlheu4vywtzwhx42a7j9n5hae35c
These and other reasons have made me switch my preference to solution **c**, as it gives the most flexibility to the publisher: whoever wants to announce stuff so it can be _discovered_ can, and whoever doesn't, doesn't have to. And it gives microblogging clients the freedom to just render tweets, with a straightforward boundary between what they can render and what is just a link to an external app or webapp (of course they can always opt to render the referenced content in-app if they want).
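As a rough illustration of what such an announcement could look like, here is a `kind:1` event skeleton written as a plain Python dict: the note's content carries a NIP-21 `nostr:` URI pointing at a long-form article, and an `a` tag repeats the reference so clients can index it. The identifiers and exact tag layout are made up for the example, not prescribed by any NIP.

```python
# Hypothetical kind:1 announcement pointing at a long-form article (kind 30023).
# The naddr string and pubkey are placeholders, not real identifiers.
import json, time

announcement = {
    "kind": 1,
    "created_at": int(time.time()),
    "content": (
        "Just published a new long-form article: "
        "nostr:naddr1exampleexampleexample"  # NIP-21 URI a client can resolve or link out
    ),
    "tags": [
        # Many clients also tag the referenced address so it can be indexed.
        ["a", "30023:<author-pubkey>:my-article-identifier"],
    ],
    "pubkey": "<author-pubkey>",
}

print(json.dumps(announcement, indent=2))
```

A plain microblogging client can render this as an ordinary note and treat the reference as a link, while a long-form client (found via NIP-89, for example) can open the article itself.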
It also makes the case for microapps more evident. If all microblogging clients become superapps that can render recipe events perfectly, why would anyone want to use a dedicated recipes app? I guess there are still reasons, but blurring the line between content kinds in superapps would definitely remove some of them and eventually kill all the microapps.
---
That brings us back to point **1** above (the overcomplication of `kind:1` events): if solution **c** is what we're going with, that makes `kind:1` events very special in Nostr, and not just another kind among others. Microblogging clients become the central plaza of Nostr, which makes protecting their neutrality and decentralization much more important. Having a lot of clients with different userbases, doing things in slightly different ways, is essential for that decentralization.
It's ok if Nostr ends up having just 2 recipe-sharing clients, but it must have dozens of microblogging clients -- and maybe not even full-blown microblogging clients, but other apps that somehow deal with `kind:1` events in multiple ways. It's ok if implementing a client for public audio-rooms is very hard and complicated, but at the same time it should be very simple to write a client that can render a `kind:1` note referencing an audio-room and linking to that dedicated client.
I hope you got my point and agree, because this article has ended.
-
# March 2024
A trip to Cebu in the Philippines. My first time abroad.
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/719e78eef05b0e776df21676a49717eeb2592d3727db82bdab7a467f98a1287e.webp)
When I posted about it on Nostr, I got replies like these:
nostr:nevent1qqsff87kdxh6szf9pe3egtruwfz2uw09rzwr6zwpe7nxwtngmagrhhqc2qwq5
nostr:nevent1qqs9c8fcsw0mcrfuwuzceeq9jqg4exuncvhas5lhrvzpedeqhh30qkcstfluj
(I had planned it as an ordinary trip that had nothing to do with Bitcoin. It's not like I'm thinking about Bitcoin all the time…)
#### Come to think of it, are there many shops in the Philippines that accept Bitcoin?
### Paying with Bitcoin overseas sounds pretty cool!
## I want to do it!
<br>
-----------
<br>
# Let's try paying with Bitcoin! in Cebu
I used [BTCMap](https://btcmap.org/map#14/10.31403/123.91797) to look for places that accept Bitcoin.
The real hotspot seems to be Boracay, the so-called Bitcoin Island,
but Cebu had a fair number too!
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/887ca0607c2a8a79e2afec917bbda57211d9808526035bdfc7b1c67ebf91ecea.webp)
I just wanted to pay with Bitcoin for anything at all, so I headed for a nearby shop where it would be easy to buy something.
Off to the bubble tea shop!
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/35acbe3b9eeb96c9b4f32364d38efe4b9199f202dbb9c6385fadb254caf17341.webp)
There's a proper Bitcoin sticker on display!
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/e547890b0f691dbc03b237b99618fcf90171ba11b686b7ba64839059845de9a4.webp)
Using my broken English and Google Translate, I asked the staff whether I could pay with Bitcoin.
### Staff: "We don't accept Bitcoin payments."
(Whaaat, why… the sticker is right there…)
Well, for whatever reason, apparently it can't be done.
I wanted to ask the staff more questions, but I didn't have the English for it, so I couldn't bring myself to try.
In the end, since I had come all the way to the shop, I just bought the bubble tea with cash.
<br>
Bubble tea.
I was never interested back when it was all the rage and had never tried it, so this ended up being my first bubble tea.
It tasted like fiat currency.
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/81d71fd8abd965e547ab0440aa3e23aad1fa5cd5a5a40972318c110a5a90e8db.webp)
### Anywhere is fine, anything is fine,
### I just want to pay with Bitcoin overseas.
<br>
-----------
<br>
# Let me pay with Bitcoin! in Boracay
Boracay, the so-called Bitcoin Island, supposedly has loads of places that accept Bitcoin.
But apparently a lot of shops have given it up, too.
Still, if there were once 300 of them, surely a few must still take it!
nostr:nevent1qqsw0n6utldy6y970wcmc6tymk20fdjxt6055890nh8sfjzt64989cslrvd9l
Nothing to do but go!
<br>
## To Bitcoin Island
A Philippine domestic flight!
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/38268af6f26d07d02e84e3aff2b45f99927f08fb71cf1863997138fb81bcd92e.webp)
```
How to get there:
Mactan-Cebu International Airport
↓ plane
Godofredo P. Ramos Airport (Caticlan International Airport, Boracay Airport)
↓ bus, etc.
Caticlan ferry terminal
↓ boat
Boracay Island
Cost:
Flight (with checked baggage)
around 21,000 JPY round trip
Airport to hotel on Boracay (bus, boat, misc. fees)
around 3,300 JPY round trip
(used Southwest Tours, booked via klook)
This page has plenty of details:
https://smaryu.com/column/d/91761/
```
After landing, take the Southwest bus.
If you booked online in advance, go to window No. 5.
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/b857072d066cc7ca91700526e5307d08c5a46282f66136821aafd8bed050cd12.webp)
The port!
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/23a6b24b05c9dce9692a16b6ea3d57cb2c6ce8a21a97a52cb82b028a2c6dc2b0.webp)
The boat! (It's really fast.)
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/48c88b8b8804459272c113413334307d4e8ade7532f1ad02ee7d6955909e6e45.webp)
Arrived in Boracay!
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/bcf984d0fe5c3639a386dfa3e10c21f8409f1b7da38a8f3ec291ab931bbd83d0.webp)
### Getting around Boracay
In Cebu you can get around with Grab taxis, but Boracay doesn't have them.
Searching online, the recommended option is the tricycle, a three-wheeled taxi.
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/d9ea88c9e2a9a713fe4701dfb31e53f2cd88ed40461e20c5037d3e350aa0d2bc.webp)
(Tricycle: open-air, and the breeze feels great)
The downside of tricycles is that they overcharge tourists, so you have to haggle.
They open at around 300 PHP; depending on the destination, you can get it down to about 150 PHP.
The haggling is fun in its own way, but personally I find the bus more relaxing than the tricycles.
Hop On Hop Off bus:
https://www.hohoboracay.com/pass.php
Unlimited rides are 250 PHP a day, so it works out cheaper if you're going round trip or stopping somewhere along the way.
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/295c035a0ab2f48e5dc4b9493f00b9726b595437c76623a34e7fbd83fa736114.webp)
The bus doesn't take cash, so you either buy a card somewhere in advance or buy one on board.
I got on without knowing any of this and bought a card from the attendant on board, with cash.
A few buses loop around the small island, so one seems to come roughly every 20 to 30 minutes.
On the other hand, the nice thing about tricycles may be that you can flag one down and ride right away without waiting.
<br>
## Reality
Boracay on BTC Map:
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/176386a17696966fc886f3c5a36c18b42a2e5a8060ed164f64f24bd85e66abe7.webp)
So many places that supposedly accept BTC.
<br>
I head straight for a shop!
I find the "bitcoin accepted here" sticker!
I ask the staff whether I can pay with Bitcoin!
They say no!
<br>
I go to another shop.
I find the "bitcoin accepted here" sticker.
I ask the staff whether I can pay with Bitcoin.
They say no.
<br>
I went around to about five places.
None of them could do it!
<br>
Sad.
<br>
So I searched the web for "Bitcoin Island"
and found a video uploaded about a month before my trip, so I watched it.
Summary:
- A startup called pouch.ph rolled out the Bitcoin payment system to shops on Boracay.
- By branding it Bitcoin Island, they expected tourism to rise by 10-30%, meaning several hundred to a thousand Bitcoin users would visit.
- In reality, it was 3 to 5 people.
- As a result, even though 200 shops adopted Bitcoin payments, only a small fraction were ever used.
- Because Bitcoin payments were so rarely used, the shop staff forgot how to process them.
- The shops lost interest and deleted the pouch app.
https://youtu.be/uaqx6794ipc?si=Afq58BowY1ZrkwaQ
<iframe width="560" height="315" src="https://www.youtube.com/embed/uaqx6794ipc?si=S3kf0K49Z4NASxJt&start=254" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
I see…
Fair enough, I suppose.
<br>
## Pilgrimage
The video showed the place that used to be pouch's office.
This image is apparently from more than six months ago.
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/710c2b2fe778836e16c363942e903c9678acecc498ca7065321c312004be2e62.webp)
Now the office is closed and the Bitcoin sign has faded.
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/41defdccaea551120ea29f861d3d864e78fe999efbdf7057946696b30bf03856.webp)
That sounded fun, so I decided to go and see it.
So I went.
Hasn't the sign faded even more!?
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/b6fc7740d2915ff9476392b87a06f47594426836f22cfb8a93742b092f11c6cc.webp)
Commemorative photo.
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/599c68f3168b556e5a6cb59a0a64d407efb4cf1f5a1f8a817d6c26004c139a41.webp)
This was fun in its own way.
The location is around here:
https://maps.app.goo.gl/WhpEV35xjmUw367A8
It's a pretty nice spot right in the center of Boracay.
Everyone, come join the pilgrimage to the holy site of Bitcoin('s remains)!
<br>
## The last shop
A tip came in from Natto-san:
<blockquote class="twitter-tweet"><p lang="ja" dir="ltr">There doesn't seem to be much information online from this year, though… <a href="https://t.co/hiO2R28sfO">https://t.co/hiO2R28sfO</a><br><br>This one looks relatively recent…? <a href="https://t.co/CHLGZuUz04">https://t.co/CHLGZuUz04</a></p>— Natto (@madeofsoya) <a href="https://twitter.com/madeofsoya/status/1771219778906014156?ref_src=twsrc%5Etfw">March 22, 2024</a></blockquote>
Figuring this was my last shot, I went with zero expectations.
What is this place, an Asian restaurant?
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/abadb1acbc58677a0932be7387566d0b6fc4b6f1ae05327290dd6d97a77e6d5d.webp)
A "bitcoin accepted here" sticker that, by this point, I trusted exactly zero percent.
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/7730bb2fbd31d887470b0cfcb8ad3113dc43f228b5a7333b90006cc27e6d3b3d.webp)
Can I pay with Bitcoin?
Staff: "Yes, you can."
Huh? Really? I can pay with Bitcoin?
Staff: "You can."
I can!!!!
Apparently, somehow, it actually works.
<br>
I ordered something more or less at random,
and they brought out a printed QR code, which I scanned.
It would have been nice to pay smoothly at this point, but I panicked quite a bit.
I don't understand English, and they don't understand Bitcoin.
On top of that, I had only ever made a single Bitcoin payment before, back in Japan.
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/db195f682d3ea05a59cb0aac1046464524ba64cef672a8f4e24b013b641dffb3.webp)
It turned out to be a Lightning address.
I had to specify the amount to send myself.
The staff would only tell me the amount in Philippine pesos.
I had no idea how many sats to send.
I got really confused at this point.
But then I realized I just needed to change my wallet settings.
I normally keep it denominated in yen; I simply had to switch it to Philippine pesos.
Once I changed the setting, I entered the amount they quoted and sent the payment.
The payment completed in two or three seconds.
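For anyone wondering what the wallet does when you switch its display currency, the conversion itself is simple arithmetic; the exchange rate below is a made-up example, not the rate at the time.

```python
# Illustrative only: converting a peso price into sats at an assumed exchange rate.
price_php = 250                 # the amount the staff quotes, in PHP
btc_php_rate = 4_000_000        # assumed PHP per BTC (example value)
sats_per_btc = 100_000_000

sats = round(price_php / btc_php_rate * sats_per_btc)
print(f"{price_php} PHP ≈ {sats} sats")   # 250 PHP ≈ 6250 sats
```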
<br>
I did it!
### I paid with Bitcoin overseas!
The log:
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/4cf343b08d0a8e322496341e74369420ada2b1b12dba57490770d4118f3f859f.webp)
I bought something called a PORK CHAR SIU BUN.
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/c9bce1755a505a89a0d15015409c1621697a4fc69acdb6941933ca4f8b10be20.webp)
It was honestly delicious.
I was so rattled that the Bitcoin payment actually worked that I only ordered one item; I should have ordered more.
This is the place. Everyone, please go.
Bunbun Boracay
https://maps.app.goo.gl/DX8UWM8Y6sEtzYyK6
### And they all lived happily ever after.
<br>
-----------
<br>
# Below: ordinary tourist photos
## Cebu
Swam with a whale shark.
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/7f36237e8e77af68f9db2aefe4147358c4787daf5b1de47c0cdde8f144154cb6.webp)
Snorkeling at Sumilon Island.
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/26e4694354a1cf8a0c7a653ed12e65d128f5099ff7f023f06f8acdd2c020fc4d.webp)
Nervously walked through a bit of a downtown? slum? area in the back alleys behind the market.
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/97f4d91f7d886cf1bbed997174f77496db297838c73d1a2718da2be78805c76f.webp)
## Bohol
Some strange-looking hills.
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/2ef98d4c7a32c55a94d5e8ac41038cdf989cbeeca472727b207826d16a1f10e8.webp)
Tarsiers.
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/1c2fea1b530f1f3fe6eb7258900d3280b9eed779e192d744f77241a9c254e43f.webp)
Local kids put on a diving show for us.
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/b58e0dc39bd2000a66298020e2f36980eb0d73d89d7b23f6840e22d36a68f59c.webp)
## Boracay
The beach.
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/87186f195746b260aab222a5e3b0dde251b0afc0b6cd77dd35e09857cba47201.webp)
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/5a9c736d0a40ed11714bf389142274e5f2cc679427aedfbad74c2c8bc7116382.webp)
Sunset.
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/e2505e1d99baf628952c6f2078509b044c298c05300255fe65cab138aa3b6b52.webp)
Seaweed.
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/988d4ce088a785a7a747dfbcb0b5793a005039019cef81d678b1f00b41dfe4d0.webp)
Boracay has several beaches; White Beach, the southwestern one near most of the accommodation, had a lot of seaweed (this might depend on the season).
Puka Shell Beach on the north side had no seaweed at all, the water was clear, and it was fantastic.
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/2f0d0294ca94cd1da19cfcc905fb0ab96a77599a31d71715d570aac098e8045f.webp)
Puka Shell Beach.
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/dd285da3130702ddc899cdff1555c64085743b3bdc7ec9635e51ffbc222b9005.webp)
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/079b4ac62d38122bca59f5cc24a719facdc72f0019639e7bb5506386b19aaf10.webp)
![image](https://nostrcheck.me/media/ec42c765418b3db9c85abff3a88f4a3bbe57535eebbdc54522041fa5328c0600/8e36df3fd0468ecd7ab013e560a71680d5cdf7e952a8e03868265324a6ba8794.webp)
<br>
-----------
<br>
# The end!
-
[![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148821549-YAKIHONNES3.jpeg)](https://stock.adobe.com/stock-photo/id/1010191703)
**Hello everyone on Nostr**, including my **watchers** and **followers** from DeviantArt and other art platforms.
Since the beginning of 2024 I have been using AI to generate artwork of anime girl characters and offering exclusive content for those who especially enjoy my work.
I post all of my work on DeviantArt and have been gradually, steadily building up a follower base there. Everything has been growing at its own pace; personally, I see it as one of my online business portfolios.
**On September 16, 2024**, one of my followers sent me a private message saying they loved my work and wanted to buy a piece, but as an NFT, offering a very high price per piece. After that, the buyer and I continued the conversation over email.
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148088676-YAKIHONNES3.PNG)
### Here is a short summary of the negotiation
(From here on I will call the buyer a scammer, because the cards are already face up: they are a fraudster.)
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148348755-YAKIHONNES3.jpg)
- The first scammer picked the works they wanted to buy and offered a very high price, but only through a specific NFT marketplace website of their choosing, running on ERC20. I looked at that website and it felt off. Anyone who wants to list work there must first sign up with an email address before they can link a wallet such as MetaMask, and once a wallet is linked it cannot be changed. At the time I was using a wallet that was not linked to my hardware wallet; I tried switching wallets back and forth and it simply was not possible, and even after logging out the same wallet address stayed attached. That was one odd thing already. On top of that, the site's minting fee was **0.15 - 0.2 ETH** … converted into baht, that is outrageously expensive.
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148387032-YAKIHONNES3.jpg)
- The first scammer kept trying to coax and persuade me: come on, they would buy my work as soon as it was minted, just let them know when it was done and they would purchase it immediately; once it sold at a profit I would get the gas fee back plus a profit on top, so there was really nothing to lose, right? Luckily, I had no spare funds left to buy ETH at the time, so I countered as follows:
1. I proposed sending a low-resolution version of my work first, in exchange for them transferring the ETH for the minting fee; once I had the ETH, I would upscale the work and email it to them, a straight trade ... they refused.
2. I proposed that they buy through my Buy Me a Coffee shop instead, paying in USD ... they refused.
3. I proposed selling via a PPV Lightning invoice, which I have access to as a creator on Creatr ... they refused.
4. I told them to wait until my salary came in, and they said OK.
The following week, a second scammer contacted me using a very similar approach, but on a different website and offering an even higher price than the first. This second site was even worse than the first: you have to sign up with an email, you cannot link MetaMask at all, and after registering you are given an empty wallet into which you must transfer ETH to cover the **0.2 ETH** minting fee.
I told the second scammer they would have to wait, because I was in the middle of a deal with the first buyer and was waiting for money to buy ETH as working capital. They asked me to send them the first website so they could take a look, and not long afterwards warned me that the first site was a scam from which money could not be withdrawn. They even sent screenshots of chats with victims of the first site who could not withdraw their funds. Not stopping there, they also bad-mouthed OpenSea, claiming that customers there could sell their work but could not withdraw the money.
**"You can't withdraw from OpenSea" was exactly what set off my alarm bells, loudly.** On OpenSea, users connect their wallets directly to the marketplace; trades happen wallet to wallet, with money flowing directly in and out of each person's wallet. OpenSea only collects a platform fee and never holds customers' money. On top of that, gas fees this year are much cheaper than during the 2020 bull run cycle, currently around 0.0001 ETH (still more expensive than BTC, though).
So I took the matter to P'Bit for advice, and an admin talked with me instead. The admin said no one had ever come to them with this question before; my case was the first. But their opinion pointed the same way as my hypothesis: it was probably a scam. Around the same time I asked about it on a Thai NFT community page and got clear confirmation that it was a scam, and that quite a few people had been duped. Once I knew what I was dealing with, I decided to wage a little war of nerves on both scammers, to see whether they really were fraudsters.
So on September 30 I trolled both of them by minting the very works they had offered to buy, on OpenSea,
and then messaged them saying:
I've minted it for you, but I really didn't have enough money, sorry, so I minted it on OpenSea instead. My family is poor; OpenSea was as far as I could get. Hurry up and buy it, plenty of people are eyeing my work. And hey, I'm not even charging a royalty fee; resell it without sharing any profit with me.
And with that, the psychological war began. But they were cornered and had to swallow their own words.
The best part:
Them: Hey, we've been waiting. I told my teammates we'd have the piece on Monday, September 30. My teammates saw your work and it really is beautiful, so they put in a full 9.3 ETH (+ a screenshot showing the balance) just waiting for it.
Me: Oh really? ... Then show me the wallet address with that transaction.
Them: 2 ETH is 5000$, you know.
Me: So what? Show me the wallet address holding that 9.3 ETH. You said you had the money all ready, so show me when it was deposited ... just the address, mind you, don't get cheeky and send me a seed.
Them: (sends the same 9.3 ETH screenshot again)
Me: A screenshot means nothing; it's trivially easy to fake. Show me the transaction hash. Weren't you the one saying you had 9.3 ETH ready and were shaking with eagerness to buy my work? Either send the wallet address, or help me out by sending 0.15 ETH so I can mint the piece first, then come buy it for 2 ETH and I'll pay the 0.15 ETH back. So, are you buying or not?
Them: Why do you want the address?
Me: I'm done, this is tiresome. I'm not selling to you anymore.
Them: 2 ETH = 5000 USD, you know.
Me: So what?
I wrote this article to warn all my friends and colleagues, in case any of you are opening an online digital art business and happen to get "lucky" like I did.
-----------
### Why I am confident this is a scam, and what the scammers stand to gain
[![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148837871-YAKIHONNES3.jpeg)](https://stock.adobe.com/stock-photo/id/1010196295)
First, consider OpenSea, the NFT marketplace with the highest trading volume. It does not hold the funds of buyers or sellers; money flows directly in and out of the buyers' and sellers' wallets, and the site only collects a fee, which is much cheaper than it was back in 2020. So why would anyone go and list their work on some other NFT site whose fees are a hundred times higher ... what would be the point?
I believe the scammers cheat artists out of their money by playing on the greed and inexperience of the artists. The moment an artist transfers ETH into that site's wallet, or pays the minting fee, the money goes straight into the scammer's pocket, and there will certainly be more tricks to follow: withdrawals that fail, purchases that fail, demands to transfer more money to "unlock the smart contract", and so on. These bad actors play on people's greed, dangling absurdly high offers as bait ... and you can hardly blame the victims, because in the NFT world some images with no artistic merit whatsoever have sold for 100 - 150 ETH. An artist trying to build a name might well feel that someone offering 2 - 4 ETH per piece is more than enough (honestly, it is shockingly more than enough).
In the world of BTC, you do not have to trust anyone; you can send money to one another and settle the books without trust.
In the world of ETH, **"code is law"**: the smart contract is already written, so go and read it; it is not that hard to understand. Choosing instead to trust promises made by other people makes no sense.
I shared this story with my art communities and got a mix of good and bad reactions. Some people insisted firmly that this sort of thing could never touch them, because they are absolutely determined to keep their art out of the digital currency world. I respect their view, but wouldn't it be better to keep our eyes and ears open and stay current with technology, especially digital currencies and blockchain? Getting scammed here once can wipe you out far more easily than with fiat money.
I wanted to share this story, and I would appreciate it if you passed it along to people you know so they can stay on guard.
## Note
- Both cyber security illustrations are my own work, made by me and sold on Adobe Stock.
- My other account, "HikariHarmony" npub1exdtszhpw3ep643p9z8pahkw8zw00xa9pesf0u4txyyfqvthwapqwh48sw, is where I am gradually bringing my work from the outside world onto Nostr. I intend to create art there, so friends who enjoy my work won't have to go looking anywhere else.
My works:
- Anime girl fanarts : [HikariHarmony](https://linktr.ee/hikariharmonypatreon)
- [HikariHarmony on Nostr](https://shorturl.at/I8Nu4)
- General art : [KeshikiRakuen](https://linktr.ee/keshikirakuen)
- KeshikiRakuen may become my third Nostr account, if I can manage it.
-
[![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148821549-YAKIHONNES3.jpeg)](https://stock.adobe.com/stock-photo/id/1010191703)
**Hello everyone on Nostr** and all my **watchers** and **followers** from DeviantArt, as well as those from other art platforms
I have been creating and sharing AI-generated anime girl fanart since the beginning of 2024 and have been running member-exclusive content on Patreon.
I also publish showcases of my artwork on DeviantArt and have been organically building an audience over time. I consider it one of my online art businesses, and everything is growing slowly.
**On September 16**, I received a DM from someone expressing interest in purchasing my art in NFT format and offering a very high price for each piece. We later continued the conversation via email.
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148088676-YAKIHONNES3.PNG)
### Here’s a brief overview of what happened
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148348755-YAKIHONNES3.jpg)
- The first scammer selected the art they wanted to buy and offered a high price for each piece.
They provided a URL to an NFT marketplace site running on the Ethereum (ETH) mainnet or ERC20. The site appeared suspicious, requiring email sign-up and linking a MetaMask wallet. However, I couldn't change the wallet address later.
The minting gas fees were quite expensive, ranging from **0.15 to 0.2 ETH**
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148387032-YAKIHONNES3.jpg)
- The scammers tried to convince me that the high profits would easily cover the minting gas fees, so I had nothing to lose.
Luckily, I didn’t have spare funds to purchase ETH for the gas fees at the time, so I tried negotiating with them as follows:
1. I offered to send them a lower-quality version of my art via email in exchange for the minting gas fees, but they refused.
2. I offered them the option to pay in USD through my Buy Me a Coffee shop, but they refused.
3. I offered them the option to pay in Bitcoin via a Lightning Network invoice, but they refused.
4. I asked them to wait until I could secure the funds, and they agreed to wait.
The following week, a second scammer approached me with a similar offer, this time at an even higher price and through a different NFT marketplace website.
This second site also required email registration, and after navigating to the dashboard, it asked for a minting fee of **0.2 ETH**. However, the site provided a wallet address for me instead of connecting a MetaMask wallet.
I told the second scammer that I was waiting to make a profit from the first sale, and they asked me to show them the first marketplace. They then warned me that the first site was a scam and even sent screenshots of victims, including one claiming that OpenSea doesn't pay out.
**This raised a red flag**, and I began suspecting I might be getting scammed. On OpenSea, funds go directly to users' wallets after transactions, and OpenSea charges a much lower platform fee compared to the previous crypto bull run in 2020. Minting fees on OpenSea are also significantly cheaper, around 0.0001 ETH per transaction.
I also consulted with Thai NFT artist communities and the ex-chairman of the Thai Digital Asset Association. According to them, no one had reported similar issues, but they agreed it seemed like a scam.
After confirming my suspicions with my own research and consulting with the Thai crypto community, I decided to test the scammers’ intentions by doing the following
I minted the artwork they were interested in, set the price they offered, and listed it for sale on OpenSea. I then messaged them, letting them know the art was available and ready to purchase, with no royalty fees if they wanted to resell it.
They became upset and angry, insisting I mint the art on their chosen platform, claiming they had already funded their wallet to support me. When I asked for proof of their wallet address and transactions, they couldn't provide any evidence that they had enough funds.
Here's what I want to warn all artists about, in the DeviantArt community and on other platforms:
If you find yourself in a similar situation, be aware that scammers may be targeting you.
-----------
### My Perspective why I Believe This is a Scam and What the Scammers Gain
[![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148837871-YAKIHONNES3.jpeg)](https://stock.adobe.com/stock-photo/id/1010196295)
From my experience with BTC and crypto since 2017, here's why I believe this situation is a scam, and what the scammers aim to achieve
First, looking at OpenSea, the largest NFT marketplace on the ERC20 network, they do not hold users' funds. Instead, funds from transactions go directly to users’ wallets. OpenSea’s platform fees are also much lower now compared to the crypto bull run in 2020. This alone raises suspicion about the legitimacy of other marketplaces requiring significantly higher fees.
I believe the scammers' tactic is to lure artists into paying these exorbitant minting fees, which go directly into the scammers' wallets. They convince the artists by promising to purchase the art at a higher price, making it seem like there's no risk involved. In reality, the artist has already lost by paying the minting fee, and no purchase is ever made.
In the world of Bitcoin (BTC), the principles are "Trust no one" and "trustless finality of transactions." In other words, transactions are secure and final without needing trust in a third party.
In the world of Ethereum (ETH), the philosophy is "Code is law" where everything is governed by smart contracts deployed on the blockchain. These contracts are transparent, and even basic code can be read and understood. Promises made by people don’t override what the code says.
I also discuss this issue with art communities. Some people have strongly expressed to me that they want nothing to do with crypto as part of their art process. I completely respect that stance.
However, I believe it's wise to keep your eyes open, have some skin in the game, and not fall into scammers’ traps. Understanding the basics of crypto and NFTs can help protect you from these kinds of schemes.
If you found this article helpful, please share it with your fellow artists.
Until next time
Take care
## Note
- Both cyber security images are mine; I created them, and they were approved by Adobe Stock for sale.
- I'm working very hard to bring all my digital art onto Nostr and build my sats business here, under my other npub "HikariHarmony" npub1exdtszhpw3ep643p9z8pahkw8zw00xa9pesf0u4txyyfqvthwapqwh48sw
Link to my full gallery
- Anime girl fanarts : [HikariHarmony](https://linktr.ee/hikariharmonypatreon)
- [HikariHarmony on Nostr](https://shorturl.at/I8Nu4)
- General art : [KeshikiRakuen](https://linktr.ee/keshikirakuen)
-
What is Cwtch?
Cwtch (/kʊtʃ/ - a Welsh word that can be roughly translated as "a hug that creates a safe place") is a decentralized, privacy-preserving, multi-party messaging protocol that can be used to build metadata-resistant applications.
How do I pronounce Cwtch?
Like "kutch", to rhyme with "butch".
Decentralized and open: There is no "Cwtch service" or "Cwtch network". Cwtch participants can host their own safe spaces or lend their infrastructure to others looking for one. The Cwtch protocol is open, and anyone is free to build bots, services, and user interfaces and to integrate and interact with Cwtch.
Privacy-preserving: All communication in Cwtch is end-to-end encrypted and takes place over Tor v3 onion services.
Metadata-resistant: Cwtch is designed so that no information is exchanged or made available to anyone without their explicit consent, including messages in transit and protocol metadata.
A brief history of metadata-resistant chat
In recent years, public awareness of the need for and benefits of end-to-end encrypted solutions has grown, with applications such as Signal, WhatsApp, and Wire now providing users with secure communications.
However, these tools require various levels of metadata exposure in order to function, and much of that metadata can be used to learn details about how and why a person is using a tool to communicate.
One tool that sought to reduce metadata is Ricochet, first released in 2014. Ricochet used Tor v2 onion services to provide secure end-to-end encrypted communication and to protect the metadata of communications.
There were no centralized servers assisting in routing Ricochet conversations. No one other than the parties involved in a conversation could know that such a conversation was taking place.
Ricochet had limitations: there was no support for multiple devices, nor a mechanism for group communication or for a user to send messages while a contact was offline.
This made adopting Ricochet a difficult proposition, even for those in environments that would be best served by metadata resistance, who often did not know it existed.
Furthermore, any solution for decentralized, metadata-resistant communication faces fundamental problems with efficiency, privacy, and group security as defined by transcript consensus and consistency.
Modern alternatives to Ricochet include Briar, Zbay, and Ricochet Refresh; each tool optimizes for a different set of trade-offs. For example, Briar seeks to let people communicate even when the underlying network infrastructure is down, while also providing resistance to metadata surveillance.
The Cwtch project began in 2017 as an extension protocol for Ricochet, providing group conversations via untrusted servers, with the goal of enabling decentralized, metadata-resistant applications such as shared lists and bulletin boards.
An alpha version of Cwtch launched in February 2019, and since then the Cwtch team, run by the Open Privacy Research Society, has conducted research and development on Cwtch and on the underlying protocols, libraries, and problem spaces.
Risk model
Communication metadata is known to be exploited by various adversaries to undermine the security of systems, to track victims, and to carry out large-scale social network analysis to feed mass surveillance. Metadata-resistant tools are in their infancy, and research on the construction and user experience of such tools is lacking.
https://nostrcheck.me/media/public/nostrcheck.me_9475702740746681051707662826.webp
Cwtch was originally conceived as an extension of the metadata-resistant Ricochet protocol to support asynchronous, multi-peer group communications through the use of anonymous, disposable, untrusted infrastructure.
Since then, Cwtch has evolved into a protocol in its own right. This section describes the various known risks that Cwtch attempts to mitigate and is referenced heavily throughout the rest of the document when discussing the various subcomponents of the Cwtch architecture.
Threat model
It is important to identify and understand that metadata is omnipresent in communication protocols; it is in fact necessary for such protocols to operate efficiently and at scale. However, information that is useful for facilitating peers and servers is also highly relevant to adversaries who wish to exploit it.
For our problem definition, we assume that the content of a communication is encrypted in such a way that an adversary is practically unable to break it (see tapir and cwtch for details on the encryption we use), and we therefore focus on the context around the communication metadata.
We seek to protect the following communication contexts:
• Who is involved in a communication? It may be possible to identify people, or simply device or network identifiers. For example, "this communication involves Alice, a journalist, and Bob, a public official".
• Where are the participants of the conversation? For example, "during this communication, Alice was in France and Bob was in Canada".
• When did a conversation take place? The timing and duration of a communication can reveal a lot about its nature, e.g. "Bob, a public official, talked to Alice on the phone for an hour last night. This is the first time they have communicated."
• How was the conversation mediated? Whether a conversation took place over encrypted or unencrypted email can provide useful information, e.g. "Alice sent Bob an encrypted email yesterday, whereas they normally only send plaintext emails to each other".
• What is the conversation about? Even when the content of the communication is encrypted, it is sometimes possible to derive a likely context for a conversation without knowing exactly what was said, e.g. "someone called a pizza place at dinner time" or "someone called a known suicide hotline at 3 a.m."
Beyond individual conversations, we also seek to defend against context-correlation attacks, in which multiple conversations are analyzed to derive higher-level information:
• Relationships: Discovering social relationships between a pair of entities by analyzing the frequency and duration of their communications over a period of time. For example, Carol and Eve call each other every day for several hours at a time.
• Cliques: Discovering social relationships within a group of entities that interact with one another. For example, Alice, Bob, and Eve communicate with each other.
• Loosely connected groups and bridging individuals: Discovering groups that communicate with each other through intermediaries by analyzing chains of communication (e.g. every time Alice talks to Bob, she talks to Carol almost immediately afterwards; Bob and Carol never communicate).
• Pattern of life: Discovering which communications are cyclical and predictable. For example, Alice calls Eve every Monday evening for about an hour.
Active attacks
Impersonation attacks
Cwtch provides no global registry of display names, and as such people using Cwtch are more vulnerable to attacks based on misrepresentation, i.e. people pretending to be other people:
The basic flow of one of these attacks is as follows, although other flows also exist:
• Alice has a friend named Bob and another named Eve
• Eve discovers that Alice has a friend named Bob
• Eve creates thousands of new accounts to find one whose image/public key is similar to Bob's (it will not be identical, but it might fool someone for a few minutes)
• Eve names this new account "Eve New Account" and adds Alice as a friend.
• Eve then changes the name of "Eve New Account" to "Bob"
• Alice sends messages intended for "Bob" to Eve's fake Bob account
Since impersonation attacks are inherently a matter of trust and verification, the only absolute way to prevent them is for users to fully validate the public key. Obviously this is not ideal, and in many cases it simply won't happen.
As such, we aim to provide user-experience hints in the UI to guide people in deciding whether to trust accounts and/or in distinguishing accounts that may be trying to impersonate other users.
A note on physical attacks
Cwtch does not consider attacks that require physical access (or equivalent) to the user's machine to be practically defensible. However, in the interest of good security engineering, throughout this document we will still refer to attacks or conditions that require such privilege and indicate where any mitigations we have implemented will fail.
A Cwtch profile
Users can create one or more Cwtch profiles. Each profile generates a random, Tor-compatible ed25519 key pair.
In addition to the cryptographic material, a profile also contains a list of Contacts (other Cwtch profiles' public keys plus associated data about each profile, such as nickname and, optionally, message history) and a list of Groups (containing the group's cryptographic material plus other associated data, such as the group nickname and message history).
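A toy model of that structure in Python, using PyNaCl for the ed25519 key pair; the class and field names are illustrative only, not Cwtch's actual storage schema.

```python
# Toy model of a Cwtch profile: an ed25519 key pair plus contact and group data.
# Names and fields are illustrative, not the real Cwtch data model.
from dataclasses import dataclass, field
import nacl.signing

@dataclass
class Contact:
    public_key: bytes
    nickname: str
    history: list[str] = field(default_factory=list)  # optional message history

@dataclass
class Group:
    group_key: bytes        # symmetric Group Key (see the group-conversation section below)
    nickname: str
    history: list[str] = field(default_factory=list)

@dataclass
class Profile:
    signing_key: nacl.signing.SigningKey              # random ed25519 key pair
    contacts: list[Contact] = field(default_factory=list)
    groups: list[Group] = field(default_factory=list)

profile = Profile(signing_key=nacl.signing.SigningKey.generate())
```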
Two-party conversations: peer to peer
https://nostrcheck.me/media/public/nostrcheck.me_2186338207587396891707662879.webp
For two parties to take part in a peer-to-peer conversation, both must be online, but only one needs to be reachable through its onion service. For the sake of clarity, we often label one party the "inbound peer" (the one hosting the onion service) and the other party the "outbound peer" (the one connecting to the onion service).
After connecting, both parties run an authentication protocol that:
• Asserts that each party has access to the private key associated with its public identity.
• Generates an ephemeral session key used to encrypt all further communication during the session.
This exchange (documented in more detail in the authentication protocol) is offline deniable, i.e. any party can forge transcripts of this protocol exchange after the fact, and as such it is impossible, after the fact, to definitively prove that the exchange happened at all.
After the authentication protocol, the two parties can exchange messages freely.
Group conversations and peer-to-server communication
When a group conversation is started, a random key known as the Group Key is generated for the group. All group communications are encrypted using this key. In addition, the group creator chooses a Cwtch server to host the group. An invitation is generated, including the group identifier, the group server, and the Group Key, to be sent to prospective members.
To send a message to the group, a profile connects to the group's server and encrypts the message using the Group Key, also generating a signature over the Group ID, the group server, and the message. To receive group messages, a profile connects to the server and downloads the messages, attempting to decrypt them with the Group Key and verifying the signature.
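A minimal sketch of that send/receive flow, using PyNaCl for the symmetric encryption and the signature. The field names, identifiers, and message layout below are placeholders for illustration, not Cwtch's actual wire format.

```python
# Conceptual sketch only -- not the real Cwtch implementation or wire format.
# pip install pynacl
import nacl.secret
import nacl.signing
import nacl.utils

group_key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)  # shared Group Key from the invite
box = nacl.secret.SecretBox(group_key)
signer = nacl.signing.SigningKey.generate()                     # sender's signing key

group_id = b"example-group-id"            # hypothetical identifiers
group_server = b"exampleserver.onion"

def send(plaintext: bytes) -> dict:
    ciphertext = bytes(box.encrypt(plaintext))                   # encrypt with the Group Key
    signed = signer.sign(group_id + group_server + ciphertext)   # sign (Group ID, server, message)
    return {"ciphertext": ciphertext, "signature": signed.signature}

def receive(msg: dict, verify_key: nacl.signing.VerifyKey) -> bytes:
    verify_key.verify(group_id + group_server + msg["ciphertext"], msg["signature"])
    return box.decrypt(msg["ciphertext"])                        # only holders of the Group Key succeed

packet = send(b"hello group")
print(receive(packet, signer.verify_key))
```

The server only ever sees the opaque ciphertext and signature; members who hold the Group Key can decrypt and verify.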
Component ecosystem breakdown
Cwtch is composed of several smaller component libraries, each playing a specific role. Some of these libraries include:
- openprivacy/connectivity: ACN networking abstraction, currently supporting only Tor.
- cwtch.im/tapir: A library for building p2p applications on top of anonymous communication systems.
- cwtch.im/cwtch: The main library implementing the Cwtch protocol/system.
- cwtch.im/libcwtch-go: Provides C bindings for Cwtch for use in UI implementations.
Tapir: a detailed look
Designed to replace the old protobuf-based Ricochet channels, Tapir provides a framework for building anonymous applications.
It is divided into several layers:
• Identity - An ed25519 key pair, required to establish a Tor v3 onion service and used to maintain a consistent cryptographic identity for a peer.
• Connections - The raw networking protocol that connects two peers. So far, connections are defined only over Tor v3 onion services.
• Applications - The various logics that enable a particular flow of information over a connection. Examples include shared cryptographic transcripts, authentication, spam protection, and token-based services. Applications provide capabilities that other applications can reference to determine whether a given peer has the ability to use a given hosted application.
• Application stacks - A mechanism for composing more than one application; for example, authentication depends on a shared cryptographic transcript, and the main cwtch peer application builds on top of the authentication application.
Identity
An ed25519 key pair, required to establish a Tor v3 onion service and used to maintain a consistent cryptographic identity for a peer.
InitializeIdentity - from a known, persistent key pair: i, I
InitializeEphemeralIdentity - from a random key pair: ie, Ie
Transcript applications
Initializes a Merlin-based cryptographic transcript that can be used as the foundation for higher-level commitment-based protocols.
The transcript application will panic if an application attempts to replace an existing transcript with a new one (enforcing the rule that a session is based on one and only one transcript).
Merlin is a STROBE-based transcript construction for zero-knowledge proofs. It automates the Fiat-Shamir transform, so that non-interactive protocols can be implemented with Merlin as if they were interactive.
This is significantly easier and less error-prone than performing the transform manually, and it also provides natural support for:
• multi-round protocols with alternating commit and challenge phases;
• natural domain separation, ensuring that challenges are bound to the statements being proven;
• automatic message framing, avoiding ambiguous encoding of commitment data;
• and protocol composition, using a common transcript for multiple protocols.
Finally, Merlin also provides a transcript-based random number generator as defense in depth against bad-entropy attacks (such as nonce reuse or bias across many proofs). This RNG provides synthetic randomness derived from the entire public transcript, as well as from the prover's witness data and an auxiliary input from an external RNG.
Connectivity
Cwtch uses Tor onion services (v3) for all communication between nodes.
We provide the openprivacy/connectivity package to manage the Tor daemon and to set up and tear down onion services through Tor.
Profile encryption and storage
Profiles are stored locally on disk and encrypted using a key derived from a password known to the user (via pbkdf2).
Note that once a profile is encrypted and stored on disk, the only way to recover it is by recovering the password; as such, it is not possible to present a complete list of profiles a user may have access to until a password is entered.
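A minimal sketch of that derivation step using Python's standard library; the salt handling, iteration count, and key length below are illustrative assumptions, not Cwtch's actual parameters.

```python
import hashlib
import os

def derive_profile_key(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    """Derive a symmetric key from a password with PBKDF2-HMAC-SHA256.

    The iteration count and key length are example values, not Cwtch's real ones.
    """
    if salt is None:
        salt = os.urandom(16)  # stored alongside the encrypted profile
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000, dklen=32)
    return key, salt

# Without the password there is no way to re-derive the key, which is why the app
# cannot even list encrypted profiles before one is entered.
key, salt = derive_profile_key("correct horse battery staple")
```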
Unencrypted profiles and the default password
To handle "unencrypted" profiles (i.e. ones that do not require a password to open), we currently create the profile with a de facto hardcoded password.
This is not ideal; we would prefer to rely on key material provided by the operating system, so that the profile would be bound to a specific device, but those facilities are currently a patchwork. We also note that by creating an unencrypted profile, people using Cwtch are explicitly opting into the risk that someone with access to the file system can decrypt their profile.
Vulnerabilities related to images and data input
Malicious images
Cwtch faces challenges in rendering images, with Flutter using Skia, although the underlying code is not fully memory-safe.
We performed fuzz testing on the Cwtch components and found a crash bug caused by a malformed GIF file, leading to kernel crashes. To mitigate this, we adopted the policy of always setting a maximum cacheWidth and/or cacheHeight for image widgets.
We have also identified the risk of malicious images being rendered differently on different platforms, as evidenced by a bug in Apple's PNG parser.
Data input risks
A significant risk is the interception of content or metadata through an Input Method Editor (IME) on mobile devices. Even standard IME applications can expose data through cloud sync, online translation, or personal dictionaries.
We have implemented mitigations such as enableIMEPersonalizedLearning: false in Cwtch 1.2, but a complete solution requires OS-level action and remains an ongoing challenge for mobile security.
Cwtch server
The goal of the Cwtch protocol is to enable group communication through untrusted infrastructure.
Unlike relay-based schemes, where groups appoint a leader, a set of leaders, or a trusted third-party server to ensure that every group member can send and receive messages in a timely manner (even when members are offline), untrusted infrastructure aims to achieve these properties without that trust assumption.
The original Cwtch paper defined a set of properties that Cwtch servers were expected to provide:
• A Cwtch server may be used by multiple groups or just one.
• A Cwtch server, without the collaboration of a group member, should never learn the identity of a group's participants.
• A Cwtch server should never learn the content of any communication.
• A Cwtch server should never be able to distinguish messages as belonging to a particular group.
We note here that these properties are a superset of the design goals of Private Information Retrieval schemes.
Improvements in efficiency and security
Protocol efficiency
Currently, only one known protocol, naive PIR, satisfies the properties required to preserve privacy in Cwtch group communication. This approach has a direct impact on bandwidth efficiency, especially for users on mobile devices. In response, we are actively developing new protocols that allow privacy and efficiency guarantees to be traded off in different ways.
At the time of writing, servers allow the full download of all stored messages, as well as a request to download specific messages starting from a given message. When peers join a group on a new server, they initially download all of the server's messages and, after that, only new messages.
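The "naive PIR" idea is simply that the client fetches everything and does the filtering itself, so the server never learns which group a client cares about. The sketch below illustrates that locally, reusing the SecretBox/Group Key idea from the earlier example; the message layout is invented for illustration.

```python
# Sketch of a naive-PIR style fetch: download everything, filter locally.
import nacl.exceptions
import nacl.secret

def fetch_my_messages(all_server_messages: list[bytes], group_keys: list[bytes]) -> list[bytes]:
    mine = []
    for blob in all_server_messages:
        for key in group_keys:
            try:
                mine.append(nacl.secret.SecretBox(key).decrypt(blob))
                break  # decrypted: the message belongs to one of our groups
            except nacl.exceptions.CryptoError:
                continue  # not ours; to the server, all messages look the same anyway
    return mine
```

The obvious cost is bandwidth: every client pulls the whole message store, which is exactly the efficiency problem the newer protocols aim to negotiate.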
Mitigating metadata analysis
This approach permits a moderate amount of metadata analysis, because the server can send new messages to each unique suspected profile and use those unique message signatures to track sessions over time. This concern is mitigated by two factors:
1. Profiles can refresh their connections at any time, resulting in a new server session.
2. Profiles can be "resynced" from a server at any time, resulting in a fresh call to download all messages. This is commonly used to fetch a group's older messages.
While these measures place limits on what the server can infer, we cannot yet guarantee full metadata resistance. For future solutions to this problem, see Niwl.
Protection against malicious peers
Servers face the risk of peer-generated spam, which poses a significant threat to the effectiveness of the Cwtch system. Although we implemented a spam-protection mechanism in the Cwtch prototype, requiring peers to perform some server-specified proof of work, we recognize that this is not a robust solution in the presence of a determined adversary with significant resources.
Key bundles
Cwtch servers identify themselves through signed key bundles containing a list of the keys needed to guarantee security and metadata resistance in Cwtch group communication. These key bundles usually include three keys: a Tor v3 onion service public key for the Token Board, a Tor v3 onion service public key for the Token Service, and a Privacy Pass public key.
To verify key bundles, profiles that import them from a server use a trust-on-first-use (TOFU) approach, checking the attached signature and the presence of all key types. If the profile has imported the server's key bundle before, all keys are expected to be identical.
Configuração prévia do aplicativo para ativar o Relé do Cwtch.
No Android, a hospedagem de servidor não está habilitada, pois essa opção não está disponível devido às limitações dos dispositivos Android. Essa funcionalidade está reservada apenas para servidores hospedados em desktops.
No Android, a única forma direta de importar uma chave de servidor é através do grupo de teste Cwtch, garantindo assim acesso ao servidor Cwtch.
Primeiro passo é Habilitar a opção de grupo no Cwtch que está em fase de testes. Clique na opção no canto superior direito da tela de configuração e pressione o botão para acessar as configurações do Cwtch.
Você pode alterar o idioma para Português do Brasil.Depois, role para baixo e selecione a opção para ativar os experimentos. Em seguida, ative a opção para habilitar o chat em grupo e a pré-visualização de imagens e fotos de perfil, permitindo que você troque sua foto de perfil.
https://link.storjshare.io/raw/jvss6zxle26jdguwaegtjdixhfka/production/f0ca039733d48895001261ab25c5d2efbaf3bf26e55aad3cce406646f9af9d15.MP4
The next step is to create a profile.
Press the + action button in the lower-right corner and select "New profile", or tap the + add new profile button.
- Choose a display name.
- Choose whether to protect this profile. Profiles are saved locally with strong encryption:
Password: your account is protected from other people who might use this device.
No password: anyone with access to this device will be able to open this profile.
Fill in your password and type it again.
Profiles are stored locally on disk and encrypted using a key derived from a password known to the user (via pbkdf2).
Note that once a profile has been encrypted and stored on disk, the only way to recover it is to re-derive the key from the password; as such, it is not possible to present a complete list of profiles a user may have access to until they enter a password.
https://link.storjshare.io/raw/jxqbqmur2lcqe2eym5thgz4so2ya/production/8f9df1372ec7e659180609afa48be22b12109ae5e1eda9ef1dc05c1325652507.MP4
The next step is to add FuzzBot, the testing and development bot.
FuzzBot's contact address: 4y2hxlxqzautabituedksnh2ulcgm2coqbure6wvfpg4gi2ci25ta5ad.
Sending the "testgroup-invite" command to FuzzBot will get you an invite to the Cwtch Test Group. When you join the group, you are automatically connected to the Cwtch server. You can leave the group at any time, or stay to chat and ask questions about the app and other topics. Later on, you can set up your own Cwtch server, which is highly recommended.
https://link.storjshare.io/raw/jvji25zclkoqcouni5decle7if7a/production/ee3de3540a3e3dca6e6e26d303e12c2ef892a5d7769029275b8b95ffc7468780.MP4
Now you can use the app normally. A few things I noticed: if a connection to another person takes a while, both of you need to be online. If the connection still isn't established, just tap the Tor reset icon to re-establish the connection with the other person.
An introduction to Cwtch profiles.
With Cwtch you can create one or more profiles. Each profile generates a random ed25519 key pair compatible with the Tor network.
This is the identifier you can give to other people, which they can use to contact you via Cwtch.
Cwtch lets you create and manage multiple separate profiles. Each profile is associated with a different key pair that launches a different onion service.
On startup, Cwtch opens the Manage Profiles screen. On this screen you can:
- Create a new profile.
- Unlock existing encrypted profiles.
- Manage loaded profiles.
- Change a profile's display name.
- Change a profile's password.
- Delete a profile.
- Change a profile picture.
- Back up or export a profile.
On the profile management screen:
1. Select the pencil next to the profile you want to edit.
2. Scroll down to the bottom of the screen.
3. Select "Export profile".
4. Choose a location and a file name.
5. Confirm.
Once confirmed, Cwtch will place a copy of the profile at the chosen location. This file is encrypted to the same level as the profile.
This file can be imported into another Cwtch instance on any device.
Importing a profile.
1. Press the + action button in the lower-right corner and select "Import profile".
2. Select an exported Cwtch profile file to import.
3. Enter the password associated with the profile and confirm.
Once confirmed, Cwtch will attempt to decrypt the given file using a key derived from the supplied password. If successful, the profile appears on the Profile Management screen and is ready to use.
NOTE
Although a profile can be imported to multiple devices, currently only one version of a profile can be in use across all devices at the same time.
Attempting to use the same profile on multiple devices may result in availability problems and message failures.
What is the difference between a peer-to-peer connection and a Cwtch group?
Cwtch peer-to-peer connections allow two people to exchange messages directly. Under the hood, peer-to-peer connections use Tor v3 onion services to provide an encrypted, metadata-resistant connection. Because of this direct connection, both parties need to be online at the same time to exchange messages.
Cwtch groups allow multiple parties to join a single conversation using an untrusted server (which can be provided by a third party or self-hosted). Server operators cannot learn how many people are in a group or what is being discussed. If multiple groups are hosted on a single server, the server cannot tell which messages belong to which group without the collusion of a group member. Unlike peer-to-peer conversations, group conversations can be carried out asynchronously, so not everyone in a group needs to be online at the same time.
Why are Cwtch groups experimental?
Metadata-resistant group messaging is still an open problem. While the version we provide in the Cwtch Beta is designed to be secure and metadata-private, it is fairly inefficient and can be misused. As such, we advise caution when using it and only provide it as an opt-in feature.
How can I run my own Cwtch server?
The reference implementation of a Cwtch server is open source. Anyone can run a Cwtch server, and anyone with a copy of the server's public key bundle can host groups on that server without the operator having access to group-related metadata.
https://git.openprivacy.ca/cwtch.im/server
https://docs.openprivacy.ca/cwtch-security-handbook/server.html
How do I shut down Cwtch?
The app's front panel has a "Shutdown Cwtch" button (with an 'X'). Pressing it opens a confirmation dialog; on confirmation, Cwtch shuts down and all profiles are unloaded.
Can your donations make a difference to the Cwtch project? Cwtch is a project dedicated to building privacy-preserving applications, offering metadata-resistant group communication. The project also develops Cofre, encrypted web forms for secure mutual aid. Your contributions support important initiatives, such as the disclosure of medical data breaches in Vancouver and research into the security of electronic voting in Switzerland. By donating, you help close the loop, working with marginalized communities to identify and fix privacy gaps. The project also works on innovative solutions, such as splitting secrets with threshold cryptography to protect your privacy during border crossings. And then there is the infrastructure: all of it is open source and non-profit. Also check out Fuzzytags, a probabilistic cryptographic framework for metadata-resistant tagging. Your donation is crucial to continuing this work for online privacy and security. Contribute now at
https://openprivacy.ca/donate/
where you can make your donation in bitcoin and other currencies, and learn more about the projects at
https://openprivacy.ca/work/
Links about Cwtch
https://cwtch.im/
https://git.openprivacy.ca/cwtch.im/cwtch
https://docs.cwtch.im/docs/intro
https://docs.openprivacy.ca/cwtch-security-handbook/
Download #CwtchDev
cwtch.im/download/
https://play.google.com/store/apps/details?id=im.cwtch.flwtch
-
Hey folks, today we're diving into an exciting and emerging topic: personal artificial intelligence (PAI) and its connection to sovereignty, privacy, and ethics. With the rapid advancements in AI, there's a growing interest in the development of personal AI agents that can work on behalf of the user, acting autonomously and providing tailored services. However, as with any new technology, there are several critical factors that shape the future of PAI. Today, we'll explore three key pillars: privacy and ownership, bias and fairness, and explainability.
<iframe width="560" height="315" src="https://www.youtube.com/embed/fehgwnSUcqQ?si=nPK7UOFr19BT5ifm" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
### 1. Privacy and Ownership: Foundations of Personal AI
At the heart of personal AI, much like self-sovereign identity (SSI), is the concept of ownership. For personal AI to be truly effective and valuable, users must own not only their data but also the computational power that drives these systems. This autonomy is essential for creating systems that respect the user's privacy and operate independently of large corporations.
In this context, privacy is more than just a feature—it's a fundamental right. Users should feel safe discussing sensitive topics with their AI, knowing that their data won’t be repurposed or misused by big tech companies. This level of control and data ownership ensures that users remain the sole beneficiaries of their information and computational resources, making privacy one of the core pillars of PAI.
### 2. Bias and Fairness: The Ethical Dilemma of LLMs
Most of today’s AI systems, including personal AI, rely heavily on large language models (LLMs). These models are trained on vast datasets that represent snapshots of the internet, but this introduces a critical ethical challenge: bias. The datasets used for training LLMs can be full of biases, misinformation, and viewpoints that may not align with a user’s personal values.
This leads to one of the major issues in AI ethics for personal AI—how do we ensure fairness and minimize bias in these systems? The training data that LLMs use can introduce perspectives that are not only unrepresentative but potentially harmful or unfair. As users of personal AI, we need systems that are free from such biases and can be tailored to our individual needs and ethical frameworks.
Unfortunately, training models that are truly unbiased and fair requires vast computational resources and significant investment. While large tech companies have the financial means to develop and train these models, individual users or smaller organizations typically do not. This limitation means that users often have to rely on pre-trained models, which may not fully align with their personal ethics or preferences. While fine-tuning models with personalized datasets can help, it's not a perfect solution, and bias remains a significant challenge.
### 3. Explainability: The Need for Transparency
One of the most frustrating aspects of modern AI is the lack of explainability. Many LLMs operate as "black boxes," meaning that while they provide answers or make decisions, it's often unclear how they arrived at those conclusions. For personal AI to be effective and trustworthy, it must be transparent. Users need to understand how the AI processes information, what data it relies on, and the reasoning behind its conclusions.
Explainability becomes even more critical when AI is used for complex decision-making, especially in areas that impact other people. If an AI is making recommendations, judgments, or decisions, it’s crucial for users to be able to trace the reasoning process behind those actions. Without this transparency, users may end up relying on AI systems that provide flawed or biased outcomes, potentially causing harm.
This lack of transparency is a major hurdle for personal AI development. Current LLMs, as mentioned earlier, are often opaque, making it difficult for users to trust their outputs fully. The explainability of AI systems will need to be improved significantly to ensure that personal AI can be trusted for important tasks.
### Addressing the Ethical Landscape of Personal AI
As personal AI systems evolve, they will increasingly shape the ethical landscape of AI. We’ve already touched on the three core pillars—privacy and ownership, bias and fairness, and explainability. But there's more to consider, especially when looking at the broader implications of personal AI development.
Most current AI models, particularly those from big tech companies like Facebook, Google, or OpenAI, are closed systems. This means they are aligned with the goals and ethical frameworks of those companies, which may not always serve the best interests of individual users. Open models, such as Meta's LLaMA, offer more flexibility and control, allowing users to customize and refine the AI to better meet their personal needs. However, the challenge remains in training these models without significant financial and technical resources.
There’s also the temptation to use uncensored models that aren’t aligned with the values of large corporations, as they provide more freedom and flexibility. But in reality, models that are entirely unfiltered may introduce harmful or unethical content. It’s often better to work with aligned models that have had some of the more problematic biases removed, even if this limits some aspects of the system’s freedom.
The future of personal AI will undoubtedly involve a deeper exploration of these ethical questions. As AI becomes more integrated into our daily lives, the need for privacy, fairness, and transparency will only grow. And while we may not yet be able to train personal AI models from scratch, we can continue to shape and refine these systems through curated datasets and ongoing development.
### Conclusion
In conclusion, personal AI represents an exciting new frontier, but one that must be navigated with care. Privacy, ownership, bias, and explainability are all essential pillars that will define the future of these systems. As we continue to develop personal AI, we must remain vigilant about the ethical challenges they pose, ensuring that they serve the best interests of users while remaining transparent, fair, and aligned with individual values.
If you have any thoughts or questions on this topic, feel free to reach out—I’d love to continue the conversation!
-
In the modern world of AI, managing vast amounts of data while keeping it relevant and accessible is a significant challenge, mainly when dealing with large language models (LLMs) and vector databases. One approach that has gained prominence in recent years is integrating vector search with metadata, especially in retrieval-augmented generation (RAG) pipelines. Vector search and metadata enable faster and more accurate data retrieval. However, the process of pre- and post-search filtering results plays a crucial role in ensuring data relevance.
<iframe width="560" height="315" src="https://www.youtube.com/embed/BkNqu51et9U?si=lne0jWxdrZPxSgd1" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
## The Vector Search and Metadata Challenge
In a typical vector search, you create embeddings from chunks of text, such as a PDF document. These embeddings allow the system to search for similar items and retrieve them based on relevance. The challenge, however, arises when you need to combine vector search results with structured metadata. For example, you may have timestamped text-based content and want to retrieve the most relevant content within a specific date range. This is where metadata becomes critical in refining search results.
Unfortunately, most vector databases treat metadata as a secondary feature, isolating it from the primary vector search process. As a result, handling queries that combine vectors and metadata can become a challenge, particularly when the search needs to account for a dynamic range of filters, such as dates or other structured data.
## LibSQL and vector search metadata
LibSQL is a general-purpose, SQLite-based database that adds vector capabilities to regular relational data. Vectors are stored as blob columns of ordinary tables. This makes vector embeddings and metadata first-class citizens and enables deep integration between the two kinds of data.
```
create table if not exists conversation (
id varchar(36) primary key not null,
startDate real,
endDate real,
summary text,
vectorSummary F32_BLOB(512)
);
```
It solves the challenge of combining metadata and vector search, eliminating the impedance mismatch between vector data and regular structured data points living in the same storage.
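Before querying, rows go in like any other SQLite data. Below is a minimal sketch using the `@libsql/client` package; the file path, summary text, dates, and the 512-dimension embedding values are made up for illustration:

```
import { randomUUID } from "node:crypto";
import { createClient } from "@libsql/client";

// Illustrative values; in practice the summary and its 512-dimension embedding
// would come from your LLM / embedding model.
const summary = "Chat about vector search and metadata filtering";
const embedding = new Array(512).fill(0.01);
const start = new Date("2024-09-01T10:00:00Z");
const end = new Date("2024-09-01T10:30:00Z");

const db = createClient({ url: "file:conversations.db" });

// vector() converts the JSON text representation into the F32_BLOB column format.
await db.execute({
  sql: `insert into conversation (id, startDate, endDate, summary, vectorSummary)
        values (?, ?, ?, ?, vector(?))`,
  args: [randomUUID(), start.getTime(), end.getTime(), summary, JSON.stringify(embedding)],
});
```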
As the query below shows, you can filter on the vector column and the date columns in the same statement.
```
select c.id ,c.startDate, c.endDate, c.summary, vector_distance_cos(c.vectorSummary, vector(${vector})) distance
from conversation
where 1=1
${startDate ? `and c.startDate >= ${startDate.getTime()}` : ''}
${endDate ? `and c.endDate <= ${endDate.getTime()}` : ''}
${distance ? `and vector_distance_cos(c.vectorSummary, vector(${vector})) <= ${distance}` : ''}
order by distance
limit ${top};
```
**vector\_distance\_cos**, aliased here as `distance`, gives us a primitive vector search: it performs a full table scan and computes a distance for every row. We could optimize it with a CTE and limit the search and distance calculations to a much smaller subset of data.
This approach is computation-intensive and can break down on large amounts of data.
LibSQL offers a far more efficient vector search based on the FlashDiskANN vector index.
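The index used by the table function below has to be created first. A minimal sketch, assuming libsql's `libsql_vector_idx` marker function and an index name matching the queries in the rest of this article:

```
create index if not exists idx_conversation_vectorSummary
on conversation (libsql_vector_idx(vectorSummary));
```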
```
vector_top_k('idx_conversation_vectorSummary', ${vector} , ${top}) i
```
**vector\_top\_k** is a table function that returns the top matches from the newly created vector index. Note that only the vector can be passed as a function parameter; any other columns have to be handled outside the table function. So, to use a vector index together with other columns, we need to apply some strategies.
Now we face the classical problem of integrating vector search results with metadata queries.
## Post-Filtering: A Common Approach
The most widely adopted method in these pipelines is **post-filtering**. In this approach, the system first retrieves data based on vector similarity and then applies the metadata filters. For example, imagine you’re conducting a vector search to retrieve conversations relevant to a specific question, but you also want to ensure these conversations occurred in the past week.
![](https://miro.medium.com/v2/resize:fit:1400/1*WFR7oJIwYflxiUivSrm-5g.png)
Post-filtering allows the system to retrieve the most relevant vector-based results and subsequently filter out any that don’t meet the metadata criteria, such as date range. This method is efficient when vector similarity is the primary factor driving the search, and metadata is only applied as a secondary filter.
```
const sqlQuery = `
select c.id ,c.startDate, c.endDate, c.summary, vector_distance_cos(c.vectorSummary, vector(${vector})) distance
from vector_top_k('idx_conversation_vectorSummary', ${vector} , ${top}) i
inner join conversation c on i.id = c.rowid
where 1=1
${startDate ? `and c.startDate >= ${startDate.getTime()}` : ''}
${endDate ? `and c.endDate <= ${endDate.getTime()}` : ''}
${distance ? `and vector_distance_cos(c.vectorSummary, vector(${vector})) <= ${distance}` : ''}
order by distance
limit ${top};`;
```
However, there are some limitations. For example, the initial vector search may return too few rows or omit relevant data before the metadata filter is even applied. If the search window is narrow enough, this can lead to incomplete results.
One working strategy is to make the top value in vector\_top\_k much bigger. Be careful, though, as the function's default maximum number of results is around 200 rows.
## Pre-Filtering: A More Complex Approach
Pre-filtering is a more intricate approach but can be more effective in some instances. In pre-filtering, metadata is used as the primary filter before vector search takes place. This means that only data that meets the metadata criteria is passed into the vector search process, limiting the scope of the search right from the beginning.
While this approach can significantly reduce the amount of irrelevant data in the final results, it comes with its own challenges. For example, pre-filtering requires a deeper understanding of the data structure and may necessitate denormalizing the data or creating separate pre-filtered tables. This can be resource-intensive and, in some cases, impractical for dynamic metadata like date ranges.
![](https://miro.medium.com/v2/resize:fit:1400/1*TioR1ETZ2S_FxvW6rZx9nw.png)
In certain use cases, pre-filtering might outperform post-filtering. For instance, when the metadata (e.g., specific date ranges) is the most important filter, pre-filtering ensures the search is conducted only on the most relevant data.
## Pre-filtering with distance-based filtering
So we come back to an old concept: we pre-filter the data instead of using the vector index.
```
WITH FilteredDates AS (
SELECT
c.id,
c.startDate,
c.endDate,
c.summary,
c.vectorSummary
FROM
conversation c
WHERE
    1=1
    ${startDate ? `AND c.startDate >= ${startDate.getTime()}` : ''}
    ${endDate ? `AND c.endDate <= ${endDate.getTime()}` : ''}
),
DistanceCalculation AS (
SELECT
fd.id,
fd.startDate,
fd.endDate,
fd.summary,
fd.vectorSummary,
vector_distance_cos(fd.vectorSummary, vector(${vector})) AS distance
FROM
FilteredDates fd
)
SELECT
dc.id,
dc.startDate,
dc.endDate,
dc.summary,
dc.distance
FROM
DistanceCalculation dc
WHERE
1=1
${distance ? `AND dc.distance <= ${distance}` : ''}
ORDER BY
dc.distance
LIMIT ${top};
```
This makes sense when the filter produces a small subset of the data, so the distance calculation runs over far fewer rows.
![](https://miro.medium.com/v2/resize:fit:1400/1*NewbEonzeEgS7qPOZRgXwg.png)
The advantage of this approach is that you keep full control over the data and get all matching results, without the omissions that large approximate index searches can introduce.
## Choosing Between Pre and Post-Filtering
Both pre-filtering and post-filtering have their advantages and disadvantages. Post-filtering is easier to implement, especially when vector similarity is the primary search factor, but it can lead to incomplete results. Pre-filtering, on the other hand, can yield more accurate results but requires more complex data handling and optimization.
In practice, many systems combine both strategies, depending on the query. For example, they might start with a broad pre-filtering based on metadata (like date ranges) and then apply a more targeted vector search with post-filtering to refine the results further.
## **Conclusion**
Vector search with metadata filtering offers a powerful approach for handling large-scale data retrieval in LLMs and RAG pipelines. Whether you choose pre-filtering or post-filtering—or a combination of both—depends on your application's specific requirements. As vector databases continue to evolve, future innovations that combine these two approaches more seamlessly will help improve data relevance and retrieval efficiency further.
-
### **OBJECTIVES**
To establish a guideline for the management of Acute Community-Acquired Pneumonia (CAP) in our center, for both outpatient and hospitalized patients, with the aim of:
* Reducing morbidity and mortality associated with the condition.
* Improving the quality of medical care and optimizing hospital resources.
* Delaying the progression of antimicrobial resistance.
### **SCOPE**
All patients over 16 years of age diagnosed with Acute Community-Acquired Pneumonia who are being followed by our institution in an outpatient or inpatient setting.
### **RESPONSIBILITIES**
Physicians from the Medical Clinic, Medical Emergency, Coronary Unit, and Intensive Care Service. Nursing Coordination. Pharmacy Service. Infection Control Committee.
### **REFERENCES AND BIBLIOGRAPHY**
1. Community-Acquired Pneumonia in Adults. Recommendations for its management. Lopardo et al. MEDICINA (Buenos Aires) 2015; 75: 245-257. Argentine Society of Infectiology. ISSN 0025-7680
2. Diagnosis and Treatment of Adults with Community-acquired Pneumonia. An Official Clinical Practice Guideline of the American Thoracic Society and Infectious Diseases Society of America. 2019. American Journal of Respiratory and Critical Care Medicine Volume 200 Number 7 | October 1, 2019. DOI: 10.1164/rccm.201908-1581ST
3. ERS/ESICM/ESCMID/ALAT guidelines for the management of severe community-acquired pneumonia. Intensive Care Med (2023) 49:615–632 https://doi.org/10.1007/s00134-023-07033-8
4. Antimicrobial resistance. WHO. https://www.who.int/news-room/fact-sheets/detail/antimicrobial-resistance
5. Internal Medicine. Farreras-Rosman. Volume I. Elsevier. 2008 Edition.
6. Considerations for the Responsible Use of Antibiotics in COVID-19. Argentine Society of Infectiology. 2020. https://drive.google.com/file/d/1BmXD5x6rEpSqDIc8urccdqLcZKkP3U7X/view
7. Penicillin Allergy. Castells M. New England Journal of Medicine, 381(24), 2338–2351. doi:10.1056/NEJMra1807761
### **INTRODUCTION**
Pneumonia is one of the leading causes of morbidity and mortality worldwide, affecting patients of all ages and with various risk factors. Proper management in both outpatient and hospital settings is crucial for improving clinical outcomes and reducing associated complications.
This document aims to standardize and optimize the treatment of pneumonia based on the most current evidence and recommendations from leading scientific organizations. It seeks to be a practical tool for healthcare professionals, providing a clear and concise approach to the diagnosis, treatment, and follow-up of patients with pneumonia.
### **FOUNDATIONS. HOSPITAL SITUATION ANALYSIS:**
* Pneumonias represent a significant burden on the healthcare system due to their high prevalence and potential severity, underscoring the need for a standardized approach.
* A clinical guideline facilitates decision-making, ensuring that all healthcare professionals follow a uniform protocol that integrates best practices, thereby reducing variability in treatments. This allows for better resource utilization, optimizing antibiotic use and reducing the emergence of antimicrobial resistance.
* Antimicrobial resistance has been projected by the World Health Organization (WHO) and related organizations to become the leading cause of death and hospital expenditure by the year 2050.
* Pneumonias in our center, in their various presentations, have shown significant prevalence in hospitalizations according to measurements taken in 2024.
* In our center, antibiotics, as a whole, have been the main source of financial losses related to drugs during the billing cycle from June 2023 to July 2024.
### **EPIDEMIOLOGICAL SITUATION:**
Pneumonia has a global incidence of 1.26 cases per 1,000 inhabitants. Some centers have documented that this incidence rises in patients over 65 years of age, reaching 34 cases per 1,000 inhabitants. Outpatient mortality varies between 0.1% and 5%, but can reach up to 50% in hospitalized patients, especially those requiring an Intensive Care Unit stay.
The main risk factors for developing pneumonia are:
* Chronic Heart Disease.
* Chronic Respiratory Disease.
* Chronic Kidney Disease.
* Advanced-stage HIV infection.
* Immunosuppressed. Solid Organ Transplant. Hematopoietic Stem Cell Transplantation.
* Diabetes mellitus.
* Neoplasms.
* Smoking.
* Chronic use of Corticosteroids or Proton Pump Inhibitors.
* Multiple Myeloma and Hypogammaglobulinemia.
* Anatomical or Functional Asplenia.
The main causative agents of acute community-acquired pneumonia in our setting are:
* Respiratory Viruses (Influenza, SARS-CoV2, RSV).
* Streptococcus pneumoniae.
* Haemophilus influenzae.
* Staphylococcus aureus.
* Mycoplasma pneumoniae and Chlamydophila pneumoniae.
It should be noted that Streptococcus pneumoniae shows a good sensitivity pattern to penicillin and continues to be the most frequent causative microorganism. Haemophilus influenzae only shows beta-lactamase production in 10% to 23% of cases. Staphylococcus aureus in our setting has a low incidence of methicillin resistance, although this possibility should be considered in certain situations and severe clinical presentations. Given these considerations, beta-lactams remain the first-line treatment.
Regarding Pseudomonas aeruginosa isolates, they will only be relevant in patients with risk factors such as bronchiectasis, cystic fibrosis, prior treatment with corticosteroids, or broad-spectrum antibiotics.
Emerging pathogens of some relevance include the eventual emergence of cases caused by Leptospira interrogans, Legionella pneumophila, and Hantavirus. These cases should always be associated with a specific epidemiological link.
### **DIAGNOSIS**
The diagnosis of pneumonia is based on clinical and imaging criteria. For the diagnosis of Acute Community-Acquired Pneumonia, we will consider:
Symptoms and Clinical Signs (at least 1 of the following):
* Fever.
* Altered general condition.
* Cough.
* Sputum production.
* Chest pain.
* Dyspnea.
* Hemoptysis.
plus
Radiopacity on Chest X-ray (Alveolar consolidation with or without air bronchogram, interstitial pattern, bronchiectasis, cavitation, pleural effusion, new radiopacity, etc.). It is always recommended to request both frontal and lateral views.
Chest CT remains a method with greater sensitivity and specificity for evaluating lung parenchyma compared to conventional X-ray in infectious pathology. However, a simple chest X-ray is an adequate method for the initial evaluation of the condition and its complications, which is why a CT scan is not recommended as an initial method for evaluating pneumonia and should always be preceded by a conventional chest X-ray.
CT studies should be considered in the following situations:
* Respiratory failure.
* Evaluation or suspicion of differential diagnoses to Acute Community-Acquired Pneumonia.
* Evaluation or suspicion of complications of Acute Community-Acquired Pneumonia.
* Evaluation of radiological patterns that are not entirely clear on the chest X-ray.
### **CHOICE OF CARE SITE AND TREATMENT**
For the choice of care site and treatment of pneumonia, it is recommended to complement clinical criteria with validated mortality scores associated with risk factors and clinical status.
CURB-65 (1 point for each item):
* Confusion
* Elevated urea greater than 90 mg/dl
* Respiratory rate greater than 30/minute
* Systolic blood pressure < 90 mmHg or diastolic blood pressure < 60 mmHg
* Age equal to or greater than 65 years
Results:
* Score 0-1: Outpatient management.
* Score 2: Admission to the General Ward.
* Score 3-5: Admission to the Intensive Care Unit.
* Addendum: a pulse oximetry reading below 92% is recommended, as an expert recommendation complementing the score, as an independent criterion favoring inpatient management; an illustrative scoring sketch follows this list.
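As an illustration only (not a clinical tool), the scoring and disposition mapping described above can be expressed as follows; the cutoffs follow the text, and SpO2 < 92% is treated as an additional admission criterion:

```
// Illustrative sketch of the CURB-65 scoring described above (not a clinical tool).
type Curb65Input = {
  confusion: boolean;
  ureaMgDl: number;
  respiratoryRate: number;
  systolicBp: number;
  diastolicBp: number;
  age: number;
  spo2Percent?: number;
};

function curb65(p: Curb65Input): { score: number; disposition: string } {
  let score = 0;
  if (p.confusion) score++;
  if (p.ureaMgDl > 90) score++;
  if (p.respiratoryRate > 30) score++;
  if (p.systolicBp < 90 || p.diastolicBp < 60) score++;
  if (p.age >= 65) score++;

  let disposition: string;
  if (score <= 1) disposition = "Outpatient management";
  else if (score === 2) disposition = "Admission to General Ward";
  else disposition = "Admission to Intensive Care Unit";

  // Pulse oximetry below 92% is treated as an independent factor favoring admission.
  if (p.spo2Percent !== undefined && p.spo2Percent < 92 && score <= 1) {
    disposition = "Consider inpatient management (SpO2 < 92%)";
  }
  return { score, disposition };
}
```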
### **COMPLEMENTARY STUDIES AND CULTURE SAMPLING**
Once the diagnosis is made and the patient's risk stratification and choice of care site have been completed, the following complementary studies and culture samples are recommended to continue the patient's work-up during treatment.
Outpatients:
* Pulse Oximetry.
* Laboratory routine (complete blood count, glucose, urea, creatinine, liver function tests).
Inpatients in the general ward:
* Pulse Oximetry.
* Laboratory routine (complete blood count, glucose, urea, creatinine, liver function tests). Acid-base status if pulse oximetry is less than 92%.
* Sputum sample (Gram stain, culture, antibiogram).
* Blood cultures.
* In the presence of pleural effusion: Thoracentesis. Physical-chemical study for Light's Criteria. Direct and Culture of Pleural Fluid.
Inpatients in the intensive care unit:
* Pulse Oximetry.
* Laboratory routine (complete blood count, glucose, urea, creatinine, liver function tests) plus acid-base status.
* Sputum sample (Gram stain, culture, antibiogram). Tracheal aspirate, Mini-BAL, or BAL sampling for patients requiring ARM upon admission.
* Blood cultures.
* Urinary antigen for detection of Streptococcus pneumoniae, if available in microbiology.
* In the presence of pleural effusion: Thoracentesis. Physical-chemical study for Light's Criteria. Direct and Culture of Pleural Fluid.
Special considerations for Viral Pneumonias:
* We recommend performing a viral panel for Influenza A/B for any pneumonia presenting at least 1 risk factor mentioned during periods of viral circulation in the community.
* We recommend performing a viral panel for SARS-CoV2 for any pneumonia presenting at least 1 risk factor mentioned during periods of viral circulation in the community or having epidemiological criteria of a suspected COVID-19 case.
* The Infection Control Committee will timely inform based on the National Epidemiological Bulletin about the presence of circulating respiratory viruses in our setting.
Special considerations for Atypical Pneumonias and HIV Testing:
* We recommend serological testing for IgM/IgG for Chlamydia and Mycoplasma for any pneumonia presenting a subacute evolution at the time of clinical presentation or clinical-radiological dissociation in its presentation.
* In the suspicion of pneumonia caused by emerging pathogens (Legionella pneumophila, Leptospira interrogans, Hantavirus), consider the necessary epidemiological link as a prior epidemiological background before requesting specific diagnostic tests.
* HIV testing is recommended for all pneumonias, with special emphasis on those that do not present the conventional risk factors mentioned.
### **ANTIMICROBIAL TREATMENT AND DURATION OF TREATMENT:**
Directed antimicrobial treatment will be based on the present risk factors and the choice of care site and treatment.
**Outpatients <65 years without risk factors:**
First choice:
* Amoxicillin 875mg/12h orally for 5-7 days.
Scheme for history of allergy to Beta-Lactams:
* Clarithromycin 500mg/12h orally for 5 days or
* Azithromycin 500-1000mg/day for 5 days.
**Outpatients >65 years or with at least 1 risk factor:**
First choice:
* Amoxicillin-Clavulanate 1g/12h orally for 7 days.
Scheme for history of allergy to Beta-Lactams:
* Clarithromycin 500mg/12h orally for 5 days or
* Azithromycin 500-1000mg/day orally for 5 days.
**Inpatients in the General Ward, <65 years and without risk factors:**
First choice:
* Ampicillin-Sulbactam 1.5g/6h IV +/- Clarithromycin 500mg/12h orally/IV for 5-7 days.
Scheme for history of allergy to Beta-Lactams:
* Ceftriaxone 1g/day IV for 5-7 days.
**Inpatients in the General Ward, >65 years or with at least 1 risk factor:**
First choice:
* Ampicillin-Sulbactam 1.5g/6h IV for 7 days +/- Clarithromycin 500mg/12h orally/IV for 5 days.
Scheme for history of allergy to Beta-Lactams:
* Ceftriaxone 1g/day IV for 7 days.
**Inpatients in the Intensive Care Unit:**
First choice:
* Ampicillin-Sulbactam 1.5g/6h IV for 7 days +/- Clarithromycin 500mg/12h orally/IV for 5 days.
Scheme for history of allergy to Beta-Lactams:
* Ceftriaxone 1-2g/day IV for 7 days.
**Special Considerations for Inpatients:**
Scheme for risk factors for Pseudomonas aeruginosa*:
* Piperacillin/Tazobactam 4.5g/6h IV or Cefepime 2g/8h IV for 7 days +/- Clarithromycin 500mg/12h orally/IV for 5 days.
Scheme for risk factors for Methicillin-Resistant Staphylococcus aureus**:
* Add to conventional scheme: Vancomycin 15-20mg/kg/8-12h IV +/- Clindamycin 600mg/8h IV for 7-14 days.
**Aspiration Pneumonia:**
First choice:
* Ampicillin-Sulbactam 1.5g/6h IV for 5-7 days.
*Risk factors for Pseudomonas aeruginosa: Bronchiectasis, cystic fibrosis, prior treatment with corticosteroids or broad-spectrum antibiotics. Documented isolates in respiratory cultures of Pseudomonas aeruginosa.
**Risk factors for Methicillin-Resistant Staphylococcus aureus: Previously healthy young patients with severe, necrotizing, and rapidly progressive pneumonia, cavitary infiltrates, hemoptysis, prior influenza, intravenous drug users, rash, leukopenia, recent or concomitant skin and soft tissue infections.
The routine use of corticosteroids in pneumonia is not recommended.
### **CONSIDERATIONS ON ANTIMICROBIALS IN VIRAL AND ATYPICAL PNEUMONIAS:**
In the case of a concomitant antigen test or PCR for Influenza A/B or SARS-CoV2, the following treatment recommendations are made:
**Influenza Virus A/B:**
First choice:
* Oseltamivir 75mg every 12 hours orally for 5 days.
Other considerations:
* In cases of Respiratory Failure in ARM or Obesity: Oseltamivir 150mg every 12 hours orally for 5 days.
* Concomitant antimicrobial treatment is recommended as there is documented frequent association of Influenza Virus and Streptococcus pneumoniae.
**COVID-19:**
* First choice is conventional treatment with dexamethasone 8 mg IV for 10 days in the event of respiratory failure.
* Routine antimicrobial treatment is not recommended for COVID-19; therefore, upon a positive SARS-CoV2 test, it is recommended to discontinue antimicrobials.
Consider maintaining concomitant antimicrobial treatment only in suspected bacterial infection due to severe presentation:
* Focal alveolar consolidation +/- air bronchogram in imaging studies plus 1 of the following: sepsis, risk factors, and/or immunosuppression.
**Atypical Pneumonias with Seroconversion for Chlamydia or Mycoplasma:**
First choice:
* Clarithromycin 500mg every 12 hours IV/orally for 14 days.
* Azithromycin 500-1000mg/day IV/orally for 14 days.
* Doxycycline 100mg every 12 hours IV/orally for 14 days.
### **CONSIDERATIONS ON PENICILLIN AND OTHER BETA-LACTAM ALLERGIES:**
Patients who report penicillin allergy are often misclassified. It is documented that more than 95% of patients who report penicillin allergy can receive beta-lactams without any complications. Additionally, penicillin hypersensitivity diminishes over the years.
Allergy to one beta-lactam does not imply the impossibility of using the entire spectrum of beta-lactams, as there are only a few cases of cross-hypersensitivity.
Therefore, we recommend the safe use of beta-lactams except in cases of a reported or documented history of severe allergy to penicillin (anaphylaxis).
In doubtful cases or confirmed allergy events during hospitalization, a consultation with an Allergy Specialist is available to evaluate the case.
### **FOLLOW-UP IN OUTPATIENT TREATMENT MODALITY**
Patients being treated for pneumonia as outpatients can continue their treatment at home and should be advised to seek care again if alarm signs appear (fever that does not subside after 48 to 72 hours, dyspnea, hemoptysis, chest pain, etc.). It is nevertheless good practice to schedule a follow-up visit in the emergency department or clinic 48 to 72 hours after starting antibiotic therapy.
Routinely repeating a chest X-ray or CT scan to assess the evolution of pneumonia under outpatient treatment is not recommended; imaging should be repeated only if complications or an unfavorable course are suspected. A follow-up visit with the primary care physician at the end of treatment is suggested.
### **FOLLOW-UP IN INPATIENT TREATMENT MODALITY**
For hospitalized patients, we should consider transitioning from parenteral to oral medication when the following conditions are met:
* Completion of 48 hours of parenteral treatment.
* Presence of a 24-hour afebrile period, with hemodynamic stability and significant clinical improvement.
* Availability of the oral route.
Routinely repeating a chest X-ray or CT scan to assess the evolution of pneumonia under outpatient or inpatient treatment is not recommended; imaging should be repeated only if complications or an unfavorable course are suspected.
### **PREVENTION**
The prevention of pneumonia is based on timely immunization with pneumococcal vaccines, influenza vaccination, and COVID-19 vaccination according to the immunization recommendations and current schedule from the Ministry of Health.
### **ICD-11 CODING**
* CA40 - Pneumonia.
* CA40.0 - Bacterial Pneumonia.
* CA40.1 - Viral Bronchopneumonia.
* CA40.2 - Fungal Pneumonia.
* CA40.Z - Pneumonia, organism unspecified.
### Author
**Kamo Weasel - MD Infectious Diseases - MD Internal Medicine - #DocChain Community**
npub1jdvvva54m8nchh3t708pav99qk24x6rkx2sh0e7jthh0l8efzt7q9y7jlj
-
After months of development I am excited to officially announce the first version of DVMDash (v0.1). DVMDash is a monitoring and debugging tool for all Data Vending Machine (DVM) activity on Nostr. The website is live at [https://dvmdash.live](https://dvmdash.live) and the code is available on [Github](https://github.com/dtdannen/dvmdash).
Data Vending Machines ([NIP-90](https://github.com/nostr-protocol/nips/blob/master/90.md)) offload computationally expensive tasks from relays and clients in a decentralized, free-market manner. They are especially useful for AI tools, algorithmic processing of user’s feeds, and many other use cases.
The long term goal of DVMDash is to become 1) a place to easily see what’s happening in the DVM ecosystem with metrics and graphs, and 2) provide real-time tools to help developers monitor, debug, and improve their DVMs.
DVMDash aims to enable users to answer these types of questions at a glance:
* What’s the most popular DVM right now?
* How much money is being paid to image generation DVMs?
* Is any DVM down at the moment? When was the last time that DVM completed a task?
* Have any DVMs failed to deliver after accepting payment? Did they refund that payment?
* How long does it take this DVM to respond?
* For task X, what’s the average amount of time it takes for a DVM to complete the task?
* … and more
For developers working with DVMs there is now a visual, graph based tool that shows DVM-chain activity. DVMs have already started calling other DVMs to assist with work. Soon, we will have humans in the loop monitoring DVM activity, or completing tasks themselves. The activity trace of which DVM is being called as part of a sub-task from another DVM will become complicated, especially because these decisions will be made at run-time and are not known ahead of time. Building a tool to help users and developers understand where a DVM is in this activity trace, whether it’s gotten stuck or is just taking a long time, will be invaluable. *For now, the website only shows 1 step of a dvm chain from a user's request.*
One of the main designs for the site is that it is highly _clickable_, meaning whenever you see a DVM, Kind, User, or Event ID, you can click it and open that up in a new page to inspect it.
Another aspect of this website is that it should be fast. If you submit a DVM request, you should see it in DVMDash within seconds, as well as events from DVMs interacting with your request. I have attempted to obtain DVM events from relays as quickly as possible and compute metrics over them within seconds.
This project makes use of a nosql database and graph database, currently set to use mongo db and neo4j, for which there are free, community versions that can be run locally.
Finally, I’m grateful to nostr:npub10pensatlcfwktnvjjw2dtem38n6rvw8g6fv73h84cuacxn4c28eqyfn34f for supporting this project.
## Features in v0.1:
### Global Network Metrics:
This page shows the following metrics:
- **DVM Requests:** Number of unencrypted DVM requests (kind 5000-5999)
- **DVM Results:** Number of unencrypted DVM results (kind 6000-6999)
- **DVM Request Kinds Seen:** Number of unique kinds in the Kind range 5000-5999 (except for known non-DVM kinds 5666 and 5969)
- **DVM Result Kinds Seen:** Number of unique kinds in the Kind range 6000-6999 (except for known non-DVM kinds 6666 and 6969)
- **DVM Pub Keys Seen:** Number of unique pub keys that have written a kind 6000-6999 (except for known non-DVM kinds) or have published a kind 31990 event that specifies a ‘k’ tag value between 5000-5999
- **DVM Profiles (NIP-89) Seen**: Number of kind 31990 events that have a ‘k’ tag value for kind 5000-5999
- **Most Popular DVM**: The DVM that has produced the most result events (kind 6000-6999)
- **Most Popular Kind**: The Kind in range 5000-5999 that has the most requests by users.
- **24 hr DVM Requests**: Number of kind 5000-5999 events created in the last 24 hrs
- **24 hr DVM Results**: Number of kind 6000-6999 events created in the last 24 hours
- **1 week DVM Requests**: Number of kind 5000-5999 events created in the last week
- **1 week DVM Results**: Number of kind 6000-6999 events created in the last week
- **Unique Users of DVMs**: Number of unique pubkeys of kind 5000-5999 events
- **Total Sats Paid to DVMs**:
- This is an estimate (a rough sketch of the calculation follows this list).
- This value is likely a lower bound as it does not take into consideration subscriptions paid to DVMs
- This is calculated by counting the values of all invoices where:
- A DVM published a kind 7000 event requesting payment and containing an invoice
- The DVM later provided a DVM Result for the same job for which it requested payment.
- The assumption is that the invoice was paid, otherwise the DVM would not have done the work
- Note that because there are multiple ways to pay a DVM such as lightning invoices, ecash, and subscriptions, there is no guaranteed way to know whether a DVM has been paid. Additionally, there is no way to know that a DVM completed the job because some DVMs may not publish a final result event and instead send the user a DM or take some other kind of action.
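To make that heuristic concrete, here is a rough TypeScript sketch (not DVMDash's actual implementation); it assumes NIP-90-style kind 7000 feedback events whose `amount` tag is denominated in millisats and may carry the bolt11 invoice:

```
// Event shape follows NIP-01; only the fields used here are modeled.
type NostrEvent = {
  id: string;
  pubkey: string;
  kind: number;
  tags: string[][];
  created_at: number;
};

function estimateSatsPaid(events: NostrEvent[]): number {
  // Record which (DVM pubkey, job event id) pairs eventually got a result (kind 6000-6999).
  const resulted = new Set<string>();
  for (const ev of events) {
    if (ev.kind >= 6000 && ev.kind <= 6999) {
      for (const tag of ev.tags) {
        if (tag[0] === "e" && tag[1]) resulted.add(`${ev.pubkey}:${tag[1]}`);
      }
    }
  }

  let millisats = 0;
  for (const ev of events) {
    if (ev.kind !== 7000) continue; // feedback / payment-required events
    const jobTag = ev.tags.find((t) => t[0] === "e");
    const amountTag = ev.tags.find((t) => t[0] === "amount");
    if (!jobTag || !amountTag) continue;
    // Count the invoice only if the same DVM later delivered a result for this job,
    // on the assumption that it would not have done the work unpaid.
    if (resulted.has(`${ev.pubkey}:${jobTag[1]}`)) {
      millisats += Number(amountTag[1]) || 0;
    }
  }
  return Math.round(millisats / 1000); // convert millisats to sats
}
```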
### Recent Requests:
This page shows the most recent 3 events per kind, sorted by created date. You should always be able to find the last 3 events here of all DVM kinds.
### DVM Browser:
This page will either show a profile of a specific DVM, or when no DVM is given in the url, it will show a table of all DVMs with some high level stats. Users can click on a DVM in the table to load the DVM specific page.
### Kind Browser:
This page will either show data on a specific kind including all DVMs that have performed jobs of that kind, or when no kind is given, it will show a table summarizing activity across all Kinds.
### Debug:
This page shows the graph based visualization of all events, users, and DVMs involved in a single job as well as a table of all events in order from oldest to newest. When no event is given, this page shows the 200 most recent events where the user can click on an event in order to debug that job. The graph-based visualization allows the user to zoom in and out and move around the graph, as well as double click on any node in the graph (except invoices) to open up that event, user, or dvm in a new page.
### Playground:
This page is currently under development and may not work at the moment. If it does work, in the current state you can login with NIP-07 extension and broadcast a 5050 event with some text and then the page will show you events from DVMs. This page will be used to interact with DVMs live. A current good alternative to this feature, for some but not all kinds, is [https://vendata.io/](https://vendata.io/).
## Looking to the Future
I originally built DVMDash out of Fear-of-Missing-Out (FOMO); I wanted to make AI systems that were comprised of DVMs but my day job was taking up a lot of my time. I needed to know when someone was performing a new task or launching a new AI or Nostr tool!
I have a long list of DVMs and Agents I hope to build and I needed DVMDash to help me do it; I hope it helps you achieve your goals with Nostr, DVMs, and even AI. To this end, I wish for this tool to be useful to others, so if you would like a feature, please [submit a git issue here](https://github.com/dtdannen/dvmdash/issues/new) or _note_ me on Nostr!
### Immediate Next Steps:
- Refactoring code and removing code that is no longer used
- Improve documentation to run the project locally
- Adding a metric for number of encrypted requests
- Adding a metric for number of encrypted results
### Long Term Goals:
- Add more metrics based on community feedback
- Add plots showing metrics over time
- Add support for showing a multi-dvm chain in the graph based visualizer
- Add a real-time mode where the pages will auto update (currently the user must refresh the page)
- ... Add support for user requested features!
## Acknowledgements
There are some fantastic people working in the DVM space right now. Thank you to nostr:npub1drvpzev3syqt0kjrls50050uzf25gehpz9vgdw08hvex7e0vgfeq0eseet for making python bindings for nostr_sdk and for the recent asyncio upgrades! Thank you to nostr:npub1nxa4tywfz9nqp7z9zp7nr7d4nchhclsf58lcqt5y782rmf2hefjquaa6q8 for answering lots of questions about DVMs and for making the nostrdvm library. Thank you to nostr:npub1l2vyh47mk2p0qlsku7hg0vn29faehy9hy34ygaclpn66ukqp3afqutajft for making the original DVM NIP and [vendata.io](https://vendata.io/) which I use all the time for testing!
P.S. I rushed to get this out in time for Nostriga 2024; code refactoring will be coming :)
-
On social media and in the Nostr space in particular, there’s been a lot of debate about the idea of supporting deletion and editing of notes.
Some people think they’re vital features to have; others believe that more honest and healthy social media will come from getting rid of them. The discussion quickly turns to the feasibility of completely deleting something on a decentralized protocol, and we soon arrive at the “We can’t really delete anything from the internet, or from a decentralized network” argument. This crowds out discussion of how Delete and Edit can mimic elements of offline interactions, and how they can be used as social signals.
When it comes to issues of deletion and editing content, what matters more is if the creator can communicate their intentions around their content. Sure, on the internet, with decentralized protocols, there’s no way to be sure something’s deleted. It’s not like taking a piece of paper and burning it. Computers make copies of things all the time, computers don’t like deleting things. In particular, distributed systems tend to use a Kafka architecture with immutable logs, it’s just easier to keep everything around, as deleting and reindexing is hard. Even if the software could be made to delete something, there’s always screenshots, or even pictures of screens. We can’t provably make something disappear.
What we need to do in our software is clearly express intention. A delete is actually a kind of retraction. “I no longer want to associate myself with this content, please stop showing it to people as part of what I’ve published, stop highlighting it, stop sharing it.” Even if a relay or other server keeps a copy, and keeps sharing it, being able to clearly state “hello world, this thing I said, was a mistake, please get rid of it.” Just giving users the chance to say “I deleted this” is a way of showing intention. It’s also a way of signaling that feedback has been heard. Perhaps the post was factually incorrect or perhaps it was mean and the person wants to remove what they said. In an IRL conversation, for either of these scenarios there is some dialogue where the creator of the content is learning something and taking action based on what they’ve learned.
Without delete or edit, there is no option to signal to the rest of the community that you have learned something because of how the content is structured today. On most platforms a reply or response stating one’s learning will be lost often in a deluge of replies on the original post and subsequent posts are often not seen especially when the original goes viral. By providing tools like delete and edit we give people a chance to signal that they have heard the feedback and taken action.
The Nostr Protocol supports delete and expiring notes. It was one of the reasons we switched from secure scuttlebutt to build on Nostr. Our nos.social app offers delete and while we know that not all relays will honor this, we believe it’s important to provide social signaling tools as a means of making the internet more humane.
We believe that the power to learn from each other is more important than the need to police through moral outrage which is how the current platforms and even some Nostr clients work today.
It’s important that we don’t say Nostr doesn’t support delete. Not all apps need to support requesting a delete; some might want to call it a retraction. It is also important that users know there is no way to enforce a delete and that not all relays may honor their request.
Edit is similar, although not as widely supported as delete. It’s a creator making a clear statement that they’ve created a new version of their content. Maybe it’s a spelling error, or a new version of the content, or maybe they’re changing it altogether. Freedom online means freedom to retract a statement, freedom to update a statement, freedom to edit your own content. By building on these freedoms, we’ll make Nostr a space where people feel empowered and in control of their own media.
-
# Embracing AI: A Case for AI Accelerationism
In an era where artificial intelligence (AI) development is at the forefront of technological innovation, a counter-narrative championed by a group I refer to as the 'AI Decels' (those advocating for the deceleration of AI advancements) seems to be gaining significant traction. After tuning into a recent episode of the [Joe Rogan Podcast](https://fountain.fm/episode/0V35t9YBkOMVM4WRVLYp), I realized that the prevailing narrative around AI was heading in a dangerous direction. Rogan hosted Aza Raskin and Tristan Harris, technology safety advocates who released a talk called '[The AI Dilemma](https://www.youtube.com/watch?v=xoVJKj8lcNQ),' for a discussion. You may know them from the popular documentary '[The Social Dilemma](https://www.thesocialdilemma.com/)' on the dangers of social media. It became increasingly clear that the cautionary stance dominating this discourse might be tipping the scales too far, veering towards an over-regulated future that stifles innovation rather than fostering it.
![](8046488-1703007156335-9e4d055bcadad.jpg)
## Are we moving too fast?
While acknowledging AI's benefits, Aza and Tristan fear it could be dangerous if not guided by ethical standards and safeguards. They believe AI development is moving too quickly and that the right incentives for its growth are not in place. They are concerned about the possibility of "civilizational overwhelm," where advanced AI technology far outpaces 21st-century governance. They fear a scenario where society and its institutions cannot manage or adapt to the rapid changes and challenges introduced by AI.
They argue for regulating and slowing down AI development due to rapid, uncontrolled advancement driven by competition among companies like Google, OpenAI, and Microsoft. They claim this race can lead to unsafe releases of new technologies, with AI systems exhibiting unpredictable, emergent behaviors, posing significant societal risks. For instance, AI can inadvertently learn tasks like sentiment analysis or human emotion understanding, creating potential for misuse in areas like biological weapons or cybersecurity vulnerabilities.
Moreover, AI companies' profit-driven incentives often conflict with the public good, prioritizing market dominance over safety and ethics. This misalignment can lead to technologies that maximize engagement or profits at societal expense, similar to the negative impacts seen with social media. To address these issues, they suggest government regulation to realign AI companies' incentives with safety, ethical considerations, and public welfare. Implementing responsible development frameworks focused on long-term societal impacts is essential for mitigating potential harm.
## This isn't new
Though the premise of their concerns seems reasonable, it's a dangerous and all too common reaction to the emergence of new technologies. In the podcast, for example, they refer to the technological breakthrough of oil. Oil as energy was a technological marvel and changed the course of human civilization. The embrace of oil, now the cornerstone of industry in our age, revolutionized how societies operated, fueled economies, and connected the world in unprecedented ways. Yet recently, as ideas of its environmental and geopolitical ramifications propagated, the narrative around oil has shifted.
Tristan and Aza detail this shift and claim that, though the period was great for humanity, we didn't have another technology to turn to once the consequences became apparent. The problem with that argument is that we did innovate a better alternative: nuclear. At its technological breakthrough, however, it was met with severe suspicion, from safety concerns to ethical debates over its use. The resulting overregulation caused a decades-long stagnation in nuclear innovation, and even today we are still stuck with heavy reliance on coal and oil. The scare tactics and fear-mongering had consequences, and, interestingly, they don't see the parallels with their current deceleration stance on AI.
These examples underscore a critical insight: the initial anxiety surrounding new technologies is a natural response to the unknowns they introduce. Yet, history shows that too much anxiety can stifle the innovation needed to address the problems posed by current technologies. The cycle of discovery, fear, adaptation, and eventual acceptance reveals an essential truth—progress requires not just the courage to innovate but also the resilience to navigate the uncertainties these innovations bring.
Moreover, believing we can predict and plan for all AI-related unknowns reflects overconfidence in our understanding and foresight. History shows that technological progress, marked by unexpected outcomes and discoveries, defies such predictions. The evolution from the printing press to the internet underscores progress's unpredictability. Hence, facing AI's future requires caution, curiosity, and humility. Acknowledging our limitations and embracing continuous learning and adaptation will allow us to harness AI's potential responsibly, illustrating that embracing our uncertainties, rather than pretending to foresee them, is vital to innovation.
The journey of technological advancement is fraught with both promise and trepidation. Historically, each significant leap forward, from the dawn of the industrial age to the digital revolution, has been met with a mix of enthusiasm and apprehension. Aza Raskin and Tristan Harris's thesis in the 'AI Dilemma' embodies the latter.
## Who defines "safe?"
When slowing down technologies for safety or ethical reasons, the issue arises of who gets to define what "safe" or "ethical" means. This inquiry is not merely technical but deeply ideological, touching the very core of societal values and power dynamics. For example, the push for Diversity, Equity, and Inclusion (DEI) initiatives shows how specific ideological underpinnings can shape definitions of safety and decency.
Take the case of the initial release of Google's AI chatbot, Gemini, which chose the ideology of its creators over truth. Luckily, the answers were so ridiculous that the pushback was sudden and immediate. My worry, however, is if, in correcting this, they become experts in making the ideological capture much more subtle. Large bureaucratic institutions' top-down safety enforcement creates a fertile ground for ideological capture of safety standards.
![](Screenshot%202024-02-27%20at%207.26.46%E2%80%AFPM.png)
I claim that the issue is not the technology itself but the lens through which we view and regulate it. Suppose the gatekeepers of 'safety' are aligned with a singular ideology. In that case, AI development would skew to serve specific ends, sidelining diverse perspectives and potentially stifling innovative thought and progress.
In the podcast, Tristan and Aza suggest such manipulation as a solution. They propose using AI for consensus-building and creating "shared realities" to address societal challenges. In practice, this means that when individuals' viewpoints seem to be far apart, we can leverage AI to "bridge the gap." How they bridge the gap and what we would bridge it toward is left to the imagination, but to me, it is clear. Regulators will inevitably influence it from the top down, which, in my opinion, would be the opposite of progress.
In navigating this terrain, we must advocate for a pluralistic approach to defining safety, encompassing various perspectives and values achieved through market forces rather than a governing entity choosing winners. The more players that can play the game, the more wide-ranging perspectives will catalyze innovation to flourish.
## Ownership & Identity
Just because we should accelerate AI forward does not mean I do not have my concerns. When I think about what could be the most devastating for society, I don't believe we have to worry about a Matrix-level dystopia; I worry about freedom. As I explored in "[Whose data is it anyway?](https://cwilbzz.com/whose-data-is-it-anyway/)," my concern gravitates toward the issues of data ownership and the implications of relinquishing control over our digital identities. This relinquishment threatens our privacy and the integrity of the content we generate, leaving it susceptible to the inclinations and profit of a few dominant tech entities.
To counteract these concerns, a paradigm shift towards decentralized models of data ownership is imperative. Such standards would empower individuals with control over their digital footprints, ensuring that we develop AI systems with diverse, honest, and truthful perspectives rather than the massaged, narrow viewpoints of their creators. This shift safeguards individual privacy and promotes an ethical framework for AI development that upholds the principles of fairness and impartiality.
As we stand at the crossroads of technological innovation and ethical consideration, it is crucial to advocate for systems that place data ownership firmly in the hands of users. By doing so, we can ensure that the future of AI remains truthful, non-ideological, and aligned with the broader interests of society.
## But what about the Matrix?
I know I am in the minority on this, but I feel that the concerns of AGI (Artificial General Intelligence) are generally overblown. I am not scared of reaching the point of AGI, and I think the idea that AI will become so intelligent that we will lose control of it is unfounded and silly. Reaching AGI is not reaching consciousness; being worried about it spontaneously gaining consciousness is a misplaced fear. It is a tool created by humans for humans to enhance productivity and achieve specific outcomes.
At a technical level, large language models (LLMs) are trained on extensive datasets, learning patterns from language and data through a technique called "unsupervised learning" (meaning the data is untagged). They predict the next word in sentences, refining their predictions through feedback to improve coherence and relevance. When queried, LLMs generate responses based on learned patterns, simulating an understanding of language to provide contextually appropriate answers. They will only answer based on the datasets that were inputted and scanned.
AI will never be "alive," meaning that AI lacks inherent agency, consciousness, and the characteristics of life; it is not capable of independent thought or action. AI cannot act independently of human control. Concerns about AI gaining autonomy and posing a threat to humanity are based on a misunderstanding of the nature of AI and the fundamental differences between living beings and machines. Expecting AI to spontaneously develop a will or consciousness is more like expecting a hammer to start walking on its own than anything we could create through programming. Right now, there is only one way to create consciousness, and I'm skeptical that is ever something we will be able to harness and create as humans. Irrespective of its complexity (and yes, our tools will continue to become ever more complex), machines, specifically AI, cannot transcend their nature as non-living, inanimate objects programmed and controlled by humans.
![](6u1bgq490h8c1.jpeg)
The advancement of AI should be seen as enhancing human capabilities, not as a path toward creating autonomous entities with their own wills. So, while AI will continue to evolve, improve, and become more powerful, I believe it will remain under human direction and control without the existential threats often sensationalized in discussions about AI's future.
With this framing, we should not view the race toward AGI as something to avoid. This will only make the tools we use more powerful, making us more productive. With all this being said, AGI is still much farther away than many believe.
Today's AI excels in specific, narrow tasks, known as narrow or weak AI. These systems operate within tightly defined parameters, achieving remarkable efficiency and accuracy that can sometimes surpass human performance in those specific tasks. Yet, this is far from the versatile and adaptable functionality that AGI represents.
Moreover, the exponential growth of computational power observed in the past decades does not directly translate to an equivalent acceleration in achieving AGI. AI's impressive feats are often the result of massive data inputs and computing resources tailored to specific tasks. These successes do not inherently bring us closer to understanding or replicating the general problem-solving capabilities of the human mind, which again would only make the tools more potent in _our_ hands.
While AI will undeniably introduce challenges and change the aspects of conflict and power dynamics, these challenges will primarily stem from humans wielding this powerful tool rather than the technology itself. AI is a mirror reflecting our own biases, values, and intentions. The crux of future AI-related issues lies not in the technology's inherent capabilities but in how it is used by those wielding it. This reality is at odds with the idea that we should slow down development as our biggest threat will come from those who are not friendly to us.
## AI Begets AI
While the unknowns of AI development and its pitfalls indeed stir apprehension, it's essential to recognize the power of market forces and human ingenuity in leveraging AI to address these challenges. History is replete with examples of new technologies raising concerns, only for those very technologies to provide solutions to the problems they initially seemed to exacerbate. Imagine how silly and unfair it would look to fight a war against a country that never embraced oil and was still getting most of its energy from burning wood.
![](Screenshot%202024-06-12%20at%205.13.16%E2%80%AFPM.png)
The evolution of AI is no exception to this pattern. As we venture into uncharted territories, the potential issues that arise with AI—be it ethical concerns, use by malicious actors, biases in decision-making, or privacy intrusions—are not merely obstacles but opportunities for innovation. It is within the realm of possibility, and indeed, probability, that AI will play a crucial role in solving the problems it creates. The idea that there would be no incentive to address and solve these problems is to underestimate the fundamental drivers of technological progress.
Market forces, fueled by the demand for better, safer, and more efficient solutions, are powerful catalysts for positive change. When a problem is worth fixing, it invariably attracts the attention of innovators, researchers, and entrepreneurs eager to solve it. This dynamic has driven progress throughout history, and AI is poised to benefit from this problem-solving cycle.
Thus, rather than viewing AI's unknowns as sources of fear, we should see them as sparks of opportunity. By tackling the challenges posed by AI, we will harness its full potential to benefit humanity. By fostering an ecosystem that encourages exploration, innovation, and problem-solving, we can ensure that AI serves as a force for good, solving problems as profound as those it might create. This is the optimism we must hold onto—a belief in our collective ability to shape AI into a tool that addresses its own challenges and elevates our capacity to solve some of society's most pressing issues.
## An AI Future
The reality is that the question isn't whether AI will lead to unforeseen challenges (it undoubtedly will, as has every major technological leap in history). The real issue is whether we let fear dictate our path and confine us to a standstill, or whether we embrace AI's potential to address current and future challenges.
The approach to solving potential AI-related problems with stringent regulations and a slowdown in innovation is akin to cutting off the nose to spite the face. It's a strategy that risks stagnating the U.S. in a global race where other nations will undoubtedly continue their AI advancements. This perspective dangerously ignores that AI, much like the printing press of the past, has the power to democratize information, empower individuals, and dismantle outdated power structures.
The way forward is not less AI but more of it, more innovation, optimism, and curiosity for the remarkable technological breakthroughs that will come. We must recognize that the solution to AI-induced challenges lies not in retreating but in advancing our capabilities to innovate and adapt.
AI represents a frontier of limitless possibilities. If wielded with foresight and responsibility, it's a tool that can help solve some of the most pressing issues we face today. There are certainly challenges ahead, but I trust that with problems come solutions. Let's keep the AI Decels from steering us away from this path with their doomsday predictions. Instead, let's embrace AI with the cautious optimism it deserves, forging a future where technology and humanity advance to heights we can't imagine.
-
I have recently launched Wikifreedia, which is a different take on how Wikipedia-style systems can work.
Yes, it's built on nostr, but that's not the most interesting part.
The fascinating aspect is that there is no "official" entry on any topic. Anyone can create or edit any entry and build their own take about what they care about.
Think the entry about Mao is missing something? Go ahead and edit it, you don't need to ask for permission from anyone.
Stuart Bowman put it best on a #SovEng hike:
> The path to truth is in the integration of opposites.
Since launching Wikifreedia less than a week ago, quite a few people have asked me if it would be possible to import ALL of Wikipedia into it.
Yes. Yes it would.
I initially started looking into it to make it happen as I am often quick to jump into action.
But, after thinking about it, *I am not convinced importing all of Wikipedia is the way to go*.
The magical thing about building an encyclopedia with no canonical entry on any topic is that each individual can bring to light the part of a topic they are most interested in; there can be dozens or hundreds, or perhaps more, entries that focus on the edges of a topic.
Wikipedia, on the other hand, in its quixotic approach to truth, has focused on the impossible path of seeking neutrality.
Humans can't be neutral, we have biases.
Show me an unbiased human and I'll show you a lifeless human.
*Biases are good*. Having an opinion is good. Seeking neutrality is seeking to devoid our views and opinions of humanity.
Importing Wikipedia would mean importing a massive amount of colorless trivia, a few interesting tidbits, but, more important than anything, a vast amount of watered-down useless information.
All edges of the truth having been neutered by a democratic process that searches for a single truth via consensus.
# "What's the worst that could happen?"
Sure, importing wikipedia would simply be *one* more entry on each topic.
Yes.
But culture has incredibly strong momentum.
And if the culture that develops in this type of media is that of exclusively watered-down comfortable truths, then some magic could be lost.
If people who are passionate or have a unique perspective about a topic feel like the "right approach" is to use the wikipedia-based article then I would see this as an extremely negative action.
### An alternative
An idea we discussed on the #SovEng hike was: what if the wikipedia entry were processed by different "AI agents" with different perspectives?
Perhaps instead of blindly importing the "Napoleon" article, an LLM trained to behave like an 1850s Russian peasant could be asked to write a wiki entry about Napoleon. And then an agent trained to behave like Margaret Thatcher could write one.
Etc, etc.
Embrace the chaos. Embrace the bias.
-
In an era where data seems to be as valuable as currency, the prevailing trend in AI starkly contrasts with the concept of personal data ownership. The explosion of AI and the ensuing race have made it easy to overlook where the data is coming from. The current model, dominated by big tech players, involves collecting vast amounts of user data and selling it to AI companies for training LLMs. Reddit recently penned a $60 million deal, Google guards and mines YouTube, and more are going in this direction. But is that their data to sell? Yes, it's on their platforms, but without the users to generate it, what would they monetize? To me, this practice raises significant ethical questions, as it assumes that user data is a commodity that companies can exploit at will.
The heart of the issue lies in the ownership of data. Why, in today's digital age, do we not retain ownership of our data? Why can't our data follow us, under our control, to wherever we want to go? These questions echo the broader sentiment that while some in the tech industry — such as the blockchain-first crypto bros — recognize the importance of data ownership, their "blockchain for everything solutions," to me, fall significantly short in execution.
Reddit further complicates this with its current move to IPO, which, on the heels of the large data deal, might reinforce the mistaken belief that user-generated data is a corporate asset. Others, no doubt, will follow suit. This underscores the urgent need for a paradigm shift towards recognizing and respecting user data as personal property.
In my perfect world, the digital landscape would undergo a revolutionary transformation centered around the empowerment and sovereignty of individual data ownership. Platforms like Twitter, Reddit, Yelp, YouTube, and Stack Overflow, integral to our digital lives, would operate on a fundamentally different premise: user-owned data.
In this envisioned future, data ownership would not just be a concept but a practice, with public and private keys ensuring the authenticity and privacy of individual identities. This model would eliminate the private data silos that currently dominate, where companies profit from selling user data without consent. Instead, data would traverse a decentralized protocol akin to the internet, prioritizing user control and transparency.
The cornerstone of this world would be a meritocratic digital ecosystem. Success for companies would hinge on their ability to leverage user-owned data to deliver unparalleled value rather than their capacity to gatekeep and monetize information. If a company breaks my trust, I can move to a competitor, and my data, connections, and followers will come with me. This shift would herald an era where consent, privacy, and utility define the digital experience, ensuring that the benefits of technology are equitably distributed and aligned with the users' interests and rights.
The conversation needs to shift fundamentally. We must challenge this trajectory and advocate for a future where data ownership and privacy are not just ideals but realities. If we continue on our current path without prioritizing individual data rights, the future of digital privacy and autonomy is bleak. Big tech's dominance allows them to treat user data as a commodity, potentially selling and exploiting it without consent. This imbalance has already led to users being cut off from their digital identities and connections when platforms terminate accounts, underscoring the need for a digital ecosystem that empowers user control over data. Without changing direction, we risk a future where our content — and our freedoms by consequence — are controlled by a few powerful entities, threatening our rights and the democratic essence of the digital realm. We must advocate for a shift towards data ownership by individuals to preserve our digital freedoms and democracy.
-
I just hit the big red PUBLISH button on NDK 2.4.
(well, actually, I did it from the command line, the big red button is imaginary)
## Codename: Safely Embrace the Chaos
Nostr is a mostly friendly environment, with not too many developers and no ill intent among them.
But this will not always be the case. Compatibility issues, mistakes, and all kinds of chaos are to be expected.
This version of NDK introduces validation at the library level, so that clients built using NDK can rely on some guarantees that, for the kinds it handles, the events they are consuming comply with what is defined in their respective NIPs.
An example?
NIP-88, recurring subscriptions, defines that Tier events should have amount tags specifying what subscribers are expected to pay and at what cadence.
Prior to NDK 2.4 some malformed events would render like this:
https://i.nostr.build/Y4JG.png
But with NDK 2.4 the malformed parts of the events don't reach the client code.
https://i.nostr.build/wAgJ.png
This is just a beginning, but event-validation is now moving into the library!
-
# Small problems the State creates for society that are not always remembered
- **vale-transporte** (employer-paid transit vouchers): transferring the cost of the employee's commute to a third party encourages him to live far from where he works, since living nearby is usually more expensive and the savings on transportation are nonexistent.
- **doctor's notes**: the right to miss work with a doctor's note creates a demand for that note in every situation, replacing free agreement between employer and employee and overloading doctors and health clinics with unnecessary visits from salaried workers with a cold.
- **prisons**: with poorly managed money, bureaucracy and terrible allocation of resources -- problems that competing private companies (or even ones without any competition) would know how to solve much better -- the State runs out of prisons, with the few existing ones crammed far beyond their maximum capacity, and so, following the bizarre chain of responsibility that blames the judge who convicted a criminal for his death in jail, judges stop sentencing criminals to prison and release them onto the streets.
- **courts**: filing lawsuits is free, and this makes the activity of lawyers who dedicate themselves to creating legal problems where none were needed proliferate, clogging the courts and preventing them from doing what they most ought to do.
- **courts**: since the courts only obey the law and ignore personal agreements, written or not, people don't make agreements; they always resort to state justice and clog it with matters that would be much better resolved between neighbors.
- **civil laws**: the laws created by legislators ignore society's customs and are an incentive for people to neither respect nor create social norms -- which would be faster, cheaper and more satisfying ways of solving problems.
- **traffic laws**: the more traffic laws there are, the more enforcement duties are delegated to police officers, who as a result stop fighting crime (after all, they don't really want to risk their lives fighting crime, and enforcement is an excellent excuse to dodge that responsibility).
- **student financing**: it is a kind of subsidy to private colleges that causes more and more degree programs to be created, each containing less and less useful knowledge or technique and becoming more and more useless.
- **heritage-listing laws**: they are an incentive for the owner of any "historical" area or building to destroy every trace of history in it before the authorities find out, which might not happen if he could, for example, use, display and profit from the history of that place without running the risk of actually losing his property.
- **urban zoning**: it makes cities more spread out, creating a gigantic need for cars, buses and other means of transport for people to move between residential zones and work zones.
- **urban zoning**: it makes people lose hours in traffic every day, which is not only a waste but also an assault on their health, which would be much better served by a daily walk between home and work.
- **urban zoning**: it makes streets and homes less safe by creating enormous zones, both residential and industrial, where there is no movement of people at all.
- **compulsory schooling + national curriculum**: it makes all children dumber.
- **child labor laws**: they take away from children the opportunity to learn useful trades and earn some money to help their families.
- **public procurement**: since there are no market criteria to decide who is the best service provider, committees of people are created to decide things. this encourages the providers competing for the contract to try to buy the members of those committees. besides the corruption itself, this creates real problems: __(i)__ the choice of services ends up being the worst possible, since the winning provider is clearly more dedicated to buying committees than to doing a good job (this problem affects so many areas, from road construction to the quality of school lunches, that it is impossible to list them here); __(ii)__ in the long run the corrupting process ends up eliminating the companies that actually delivered, leaving only the corrupt ones to compete, and quality tends to get progressively worse.
- **cartels**: the State generally creates, and then becomes hostage to, various interest groups. the case of taxi drivers against Uber is the fashionable one today (and the one that shows how States behave the same way all over the world).
- **fines**: when some individual or company commits financial fraud, or causes some unintentional material damage, the victims are the people who suffered the damage or lost money, but the State always has laws providing for fines against those responsible. State justice is always very strict and fast in applying these fines, but lax and vague when it comes to compensating the victims. What generally happens is that the State levies an enormous fine on the party responsible for the harm, stripping it of the resources it had available to compensate the victims, and then withdraws from the case, leaving the victims helpless.
- **expropriation**: the State can take any property from any person in exchange for compensation that is necessarily lower than the value of the property to its current owner (otherwise he would have sold it voluntarily).
- **unemployment insurance**: if there is, for example, a minimum period of 1 year before someone is entitled to unemployment benefits, this encourages him to plan on staying only 1 year in each job (a year that will be followed by a period of paid unemployment), killing every possibility of learning, of gaining experience at that specific company or of rising in the hierarchy.
- **social security**: public pensions have every miscalculation in the world, and it doesn't even matter much that they are a horrible way to save money, because they come with bizarre longevity guarantees provided by the State, besides being compulsory. This serves to plant in the general imagination the idea of __retirement__, a magical time when every day will be a weekend. The idea of retirement influences people not to worry about having a job that makes sense, but rather about having any job at all, as long as it lets them retire.
- **impossible regulation**: thousands of things are prohibited, and there are regulations about the most minute aspects of every enterprise, building or space. if all these regulations were enforced there would be no way to produce anything and everyone would die. therefore, they are not enforced. however, the State, or an individual agent invested with state power, can, if it so wishes, demand all of them from a citizen it regards as an enemy. anyone can live their whole life without complying with even 10% of state regulations, but they will also live all that time in fear of becoming a target of their enforcement, in a state of psychological terror.
- **perversion of criteria**: for many things about which society would spontaneously arrive at a "reasonable" value or behavior, the State dictates rules. these rules are often not even mandatory, they are more like "suggestions" or limits, such as the minimum wage or the 44-hour work week. society, however, starts to use these values as if they were the norm. job offers that deviate from the 44-hours-a-week rule, for example, are rare.
- **inflation**: raising prices is hard and embarrassing for companies, and asking for a raise is hard and embarrassing for the employee. inflation forces people to do both, but the adjustment is not automatic, as some economists may think (while some others are quite pleased that the process is slow and difficult).
- **inflation**: inflation destroys people's ability to judge prices between competitors using their own memory.
- **inflation**: inflation destroys companies' profit/loss calculations and enormously harms the business decisions that would be based on them.
- **inflation**: inflation redistributes wealth from the poorest and those furthest from the financial system to the richest, the banks and the megacorporations.
- **inflation**: inflation encourages indebtedness and consumerism.
- **garbage:** by providing garbage collection and storage "free for everyone", the State encourages the creation of garbage. if they had to pay to have their garbage collected, people (and consequently companies) would try harder to make things using less plastic, less packaging, fewer bags.
- **laws against financial crimes:** by creating legislation to make access to the financial system harder for criminals, the difficulty and the cost of accessing that same system for honest people grow absurdly, leaving a huge share of people unable to use it, to everyone's detriment -- and in the end the big criminals still manage to get around everything.
-
# Bitcoin as a human social system
After all, what is Bitcoin? I'm not going to answer that question by explaining what a "blockchain" is or anything of the sort, as everyone does so very badly. [The best explanation in Portuguese I've ever seen is here](nostr:naddr1qqrky6t5vdhkjmspz9mhxue69uhkv6tpw34xze3wvdhk6q3q80cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsxpqqqp65wp3k3fu), but even so no explanation will ever be definitive.
The explanation of the protocol alone -- of what a `bitcoind` program running on a computer does and how it communicates with others on other computers, and of the incentives in play to guarantee with reasonable probability that a consensus will be reached about who owns which part of which transaction -- while not overly complicated, will require the beginner to go over it many times before he can feel comfortable saying he understands a little.
And this _technical_ part, despite having been the fundamental insight that generated the miraculous event called Bitcoin, is not the most important part today. If it were, several of those other coins would be competitors of Bitcoin, but they are not, and never can be, because they are nowhere near having the other elements that make up Bitcoin. These are:
1. The structure
Bitcoin is a system composed of independent parts.
There are programmers who work on the protocol and on applications, and day after day new programmers arrive and others leave, and they work sometimes together, sometimes without being aware of one another, sometimes on their own, sometimes paid by interested companies.
There are the users who perform full validation, that is, who run some Bitcoin program and contribute to the diffusion of blocks and transactions, rejecting malicious users and preventing attacks by ill-intentioned miners.
There are the savers, the accumulators, the owners of bitcoins, who know the possibilities the world holds for Bitcoin, who await the day the Bitcoin standard becomes a worldwide reality, and who for that very reason assign to their bitcoins values much higher than current market prices, holding on to them.
"Cryptocurrency" speculators are not part of this system, nor are companies that [accept payment](https://bitpay.com/) in bitcoins only to immediately sell everything for state money, and even less so the [people who use bitcoins](https://www.investimentobitcoin.com/) and [the Bitcoin brand itself](https://www.xdex.com.br/) to run their scams and the like.
2. The culture
I mentioned that there are companies that pay programmers to work on the open-source code of BitcoinCore or of other programs related to the Bitcoin network -- or even on applications not necessarily tied to the fundamental layer of the protocol. None of these interested companies, however, controls Bitcoin, and that is the main element of Bitcoin's culture.
Bitcoin's purpose has always been to be an open network, with no bosses, no politics involved, no need to ask anyone's permission to participate. The fact that Satoshi Nakamoto himself voluntarily disappeared from the discussions was fundamental for Bitcoin not to be seen as a system dependent on him, or for him to be regarded as its boss. In other "cryptocurrencies" none of this happened. Ethereum's supreme chief is still out there giving orders and inventing new elements for the protocol that are automatically accepted by the whole community; the same goes for Zcash, EOS, Ripple, Litecoin and even Bitcoin Cash. Worse still: Satoshi Nakamoto left without any money, he never touched the thousands of bitcoins he generated in the first blocks -- while the leaders of the aforementioned garbage charged their first users a fortune for the right to use them, or are still collecting dividends to this day.
All of this and more -- the anti-state mentality of the community's most prominent members and their enthusiasm for open p2p systems, for example -- creates an atmosphere in which freedom is valued and anything suspected of being an attempt to centralize the coin is noticed and execrated.
3. The history
The notion that Bitcoin cannot be controlled by anyone went through [two tests](https://www.forbes.com/sites/ktorpey/2019/04/23/this-key-part-of-bitcoins-history-is-what-separates-it-from-competitors/#49869b41ae5e) in 2017 and came out of them greatly strengthened: first the split between Bitcoin (BTC) and Bitcoin Cash (BCH), a work of social engineering that had middling success in stealing part of the brand and the users of the true Bitcoin; then the attempt at a complete takeover of Bitcoin, promoted by more or less the same interested parties and called SegWit2x, which failed completely, but not before getting in everyone's way and spreading lies in every direction. These two failures proved that Bitcoin, even as a disorganized community without clear leaders, is immune to [capture by interested groups](https://en.wikipedia.org/wiki/Regulatory_capture), which is yet another miracle -- or, as they say, a [Schelling point](https://en.wikipedia.org/wiki/Focal_point_(game_theory)).
That crucial period in Bitcoin's history made it clear that _hard forks_ are essentially incompatible with the nature of the protocol, so that in the future there is no chance a suggestion such as printing more bitcoins than what was programmed will ever be taken seriously (but, of course, there is always the possibility of the whole culture being lost, of people forgetting this history and of Bitcoin being co-opted; hence the importance of self-education and of spreading these principles).
-
# The Grêmio TV case
while he was making his way along the upper deck of that arena, which was thought to be entirely filled with supporters of the famous Grêmio team of Porto Alegre, the most discreet journalist found himself, as if by the work of some necromancer - of the many that exist and are at every moment doing evil deeds and placing themselves in the path of those who seek, if not to do good above all things, at least not to do evil in the pursuit of their own interests -, cursed and ground up in words by a horde of little rascals who had appeared five or six steps away from him and who sang and waved their arms in movements that cannot be described as anything but barbaric, and thus they sang:
Grêmio TV
worse than SBT!
-
# My stupid introduction to Haskell
While I was writing my first small program in Haskell (really simple, but a functional webapp) in December 2017, I only vaguely knew the style of things and had some basic notions about functions, pure functions and so on (I had read about a third of [LYAH](http://learnyouahaskell.com/chapters)).
An enormous number of questions began to appear in my head while I read tutorials and documentation. Here I present some of the questions and the insights I got that solved them. Technically, they may be wrong, but they helped me advance in the matter, so I'm writing them down while I still can -- if I keep working with Haskell I'll probably get to know more, my new insights will replace the previous ones, and the new ones won't be useful for total beginners anymore.
---
Here we go:
- **Why do modules have odd names?**
- modules are the things you import, like `Data.Time.Clock` or `Web.Scotty`.
- packages are the things you install, like 'time' or 'scotty'
- packages can contain any number of modules they like
- a module is just a collection of functions
- a package is just a collection of modules
- a package is just a name you choose to associate your collection of modules with when you're publishing it to Hackage or whatever
- the module names you choose when you're writing a package can be anything, and these are the names people will have to `import` when they want to use your functions
- if you're from Javascript, Python or anything similar, you'll expect to be importing/writing the name of the package directly in your code, but in Haskell you'll actually be writing the name of the module, which may have nothing to do with the name of the package
- people choose things that make sense, like for `aeson` instead of `import Aeson` you'll be doing `import Data.Aeson`, `import Data.Aeson.Types` etc. why the `Data`? because they thought it would be nice. dealing with JSON is a form of dealing with data, so be it.
- you just have to check the package documentation to see which modules it exposes.
- **What is `data User = User { name :: Text }`?**
- a data type definition. means you'll have a function `User` that will take a Text parameter and output a `User` record or something like that.
- you can also have `data Animal = Giraffe { color :: Text } | Human { name :: Text }`, so you'll have two functions, `Giraffe` and `Human`, each can take a different set of parameters, but they will both yield an `Animal`.
- then, in the functions that take an `Animal` parameter you must pattern-match to see if the animal is a giraffe or a human (there's a small compilable sketch of this after this list).
- **What is a monad?**
- a monad is a context, an environment.
- when you're in the context of a monad you can write imperative code.
- you do that when you use the keyword `do`.
- in the context of a monad, all values are prefixed by the monad type,
- thus, in the `IO` monad all `Text` is `IO Text` and so on.
- some monads have a relationship with others, so values from that monad can be turned into values from another monad and passed between context easily.
- for example, [scotty](https://www.stackage.org/haddock/lts-9.18/scotty-0.11.0/Web-Scotty.html)'s `ActionM` and `IO`. `ActionM` is just a subtype of `IO` or something like that.
- when you write imperative code inside a monad you can do assignments like `varname <- func x y`
- in these situations some unwrapping is done by the `<-`: the thing on the right side is a monadic value, and `<-` takes the plain value out of it. so if `func x y` is an `IO Text`, `varname` will be a plain `Text` (if we're in the IO monad).
- so it will not work (and it can be confusing) if you try to chain functions like `varname <- transform $ func x y` when `transform` is a pure function, but you can do
- `varname <- func x y`
- `let othervarname = transform varname`
- or you can do other fancy things you'll get familiar with later, like `varname <- fmap transform $ func x y` (there's a small runnable sketch of this after this list)
- why? I don't know.
- **How do I deal with Maybe, Either or other crazy stuff?**
"ok, I understand what is a Maybe: it is a value that could be something or nothing. but how do I use that in my program?"
- you don't! you turn it into something else. for example, you use [fromMaybe](http://hackage.haskell.org/package/base-4.10.1.0/docs/Data-Maybe.html#v:fromMaybe), a function that takes a default value and a `Maybe`, and that's it. if your `Maybe` is `Just x` you get `x`, if it is `Nothing` you get the default value.
- using only that function you can already do whatever there is to be done with Maybes.
- you can also manipulate the values inside the `Maybe`, for example:
- if you have a `Maybe Person` and `Person` has a `name` which is `Text`, you can apply a function that turns `Maybe Person` into `Maybe Text` AND ONLY THEN you apply the default value (which would be something like `"unnamed"`) and take the name from inside the `Maybe` (this exact flow is sketched after this list).
- basically these things (`Maybe`, `Either`, `IO` also!) are just tags. they tag the value, and you can do things with the values inside them, or you can remove the values.
- besides the example above with Maybes and the `fromMaybe` function, you can also remove the values by using `case` -- for example:
- `case x of`
- `Left error -> error`
- `Right success -> success`
- `case y of`
- `Nothing -> "nothing!"`
- `Just value -> value`
- (in some cases I believe you can't remove the values, but in those cases you also won't need to)
- for example, for values tagged with `IO`, you can't remove the `IO` and turn those values into pure values, but you don't need that: you can just take the value from the outside world, so it's an `IO Text`, apply functions that modify that value inside `IO`, then output the result to the user -- this is enough to make a complete program, any complete program.
- **JSON and interfaces (or instances?)**
- using [Aeson](https://hackage.haskell.org/package/aeson-1.2.3.0/docs/Data-Aeson.html) is easy, you just have to implement the `ToJSON` and `FromJSON` interfaces.
- "interface" is not the correct name, but I don't care.
- `ToJSON`, for example, requires a function named `toJSON`, so you do
- `instance ToJSON YourType where`
- `toJSON (YourType your type values) = object []` ... etc.
- I believe lots of things require interface implementation like this and it can be confusing, but once you know the mystery of implementing functions for interfaces everything is solved.
- `FromJSON` is a little less intuitive at the beginning, and I don't know if I did it correctly, but it is working here. Anyway, if you're trying to do that, I can only tell you to follow the types, copy examples from other places on the internet and not care about the meaning of symbols (both instances are sketched after this list).
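To make the `Animal` example above concrete, here's a minimal sketch that compiles on its own (the `describe` function and the strings are just made-up illustrations):
```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Text (Text)

-- two constructors, both of which yield an Animal
data Animal
  = Giraffe { color :: Text }
  | Human   { name :: Text }

-- a function that takes an Animal has to pattern-match
-- to find out which constructor built it
describe :: Animal -> Text
describe (Giraffe c) = "a " <> c <> " giraffe"
describe (Human n)   = "a human called " <> n
```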
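For the `<-` and `fmap` business, here's a tiny runnable `do` block in the `IO` monad (the variable names are made up; `Data.Text` and `Data.Text.IO` are real modules from the `text` package):
```haskell
import qualified Data.Text as T
import qualified Data.Text.IO as T

main :: IO ()
main = do
  line <- T.getLine                     -- T.getLine is an IO Text, line is a plain Text
  let shouted = T.toUpper line          -- pure transformation, so no <- needed
  shouted' <- fmap T.toUpper T.getLine  -- or fmap the pure function over the IO action
  T.putStrLn shouted
  T.putStrLn shouted'
```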
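The `Maybe Person` flow mentioned above, written out (again, the names are made up):
```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Maybe (fromMaybe)
import Data.Text (Text)

data Person = Person { personName :: Text }

-- fmap turns the Maybe Person into a Maybe Text,
-- then fromMaybe removes the tag using a default value
greet :: Maybe Person -> Text
greet maybePerson = fromMaybe "unnamed" (fmap personName maybePerson)

-- the same thing, unwrapping explicitly with case
greet' :: Maybe Person -> Text
greet' maybePerson =
  case maybePerson of
    Nothing -> "unnamed"
    Just p  -> personName p
```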
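And the two Aeson instances for a made-up `User` type; `object`, `.=`, `withObject` and `.:` are the actual helpers Aeson gives you:
```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson
import Data.Text (Text)

data User = User { userName :: Text, userAge :: Int }

-- ToJSON asks for a toJSON function
instance ToJSON User where
  toJSON (User n a) = object ["name" .= n, "age" .= a]

-- FromJSON asks for parseJSON; withObject plus .: is the usual recipe
instance FromJSON User where
  parseJSON = withObject "User" $ \o ->
    User <$> o .: "name" <*> o .: "age"

-- encode (User "bob" 30) gives a JSON ByteString,
-- decode goes the other way and returns a Maybe User
```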
## See also
* [Haskell Monoids](nostr:naddr1qqyrzcmrvcmrqwtpqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cktq992)
-
# Classless Templates
There are way too many hours being wasted in making themes for blogs. And then a new blog framework comes along and it requires new themes. Old themes can't be used because they relied on different ways of rendering the website. Everything is a mess.
Classless was an attempt at solving it. It probably didn't work because I wasn't the best person to make themes and showcase the thing.
Basically everybody would agree on a simple HTML template that could fit blogs and simple websites very easily. Then other people would make pure-CSS themes expecting that template to be in place.
No classes were needed, only a fixed structure of `header`, `main`, `article`, etc.
With **flexbox** and **grid**, CSS was enough to make this happen.
The templates that were available were all ported by me from other templates I saw on the web, and there was a simple one I created for my old website.
- <https://github.com/fiatjaf/classless>
- <https://classless.alhur.es/>
- <https://classless.alhur.es/themes/>
-
# The "Drivechain will replace altcoins" argument
The argument that [Drivechain](nostr:naddr1qq9xgunfwejkx6rpd9hqzythwden5te0ve5kzar2v9nzucm0d5pzqwlsccluhy6xxsr6l9a9uhhxf75g85g8a709tprjcn4e42h053vaqvzqqqr4gumtjfnp) will replace shitcoins is _not_ that people will [sell their shitcoins](https://twitter.com/craigwarmke/status/1680371502246514688) or that the existing shitcoins will instantly vanish. The argument is about a change _at the margin_ that eventually ends up killing the shitcoins or reducing them to their original insignificance.
**What does "at the margin" mean?** For example, when the price of coconuts drops a little in relation to bananas, does that mean that everybody will stop buying bananas and will buy only coconuts now? No. Does it mean there will be [zero](https://twitter.com/GoingParabolic/status/1680319173744836609) increase in the amount of coconuts sold? Also no. What happens is that there is a small number of people who would have preferred to buy coconuts if only they were a little cheaper, but end up buying bananas instead. When the price of coconuts drops, these people buy coconuts and stop buying bananas.
The argument is that the same thing will happen when Drivechain is activated: there are some people today (yes, believe me) that would have preferred to work within the Bitcoin ecosystem but end up working on shitcoins. In a world with Drivechain these people would be working on the Bitcoin ecosystem, for the benefit of Bitcoin and the Bitcoiners.
Why would they prefer Bitcoin? Because Bitcoin has a bigger network-effect. When these people come, they increase Bitcoin's network-effect even more, and if they don't go to the shitcoins they reduce the shitcoins' network-effect. Those changes in network-effect contribute to bringing others who were a little further from the margin, and the thing compounds until the shitcoins are worthless.
Who are these people at the margin? I don't know, but they certainly exist. I would guess the Stark people are one famous example, but there are many others. In the past, examples included Roger Ver, Zooko Wilcox, Riccardo Spagni and Vitalik Buterin. And before you start screaming that these people are shitcoiners (which they are) imagine how much bigger Bitcoin could have been today if they and their entire communities (yes, I know, of awful people) were using and working for Bitcoin today. Remember that phrase about Bitcoin being for enemies?
### But everything that has been invented in the altcoin world is awful, we don't need any of that!
You and I should not be the ones judging what is good and what is not for others, but both you and I and others will benefit if these things can be done in a way that increases Bitcoin's network-effect and pays fees to Bitcoin miners.
Also, there is a much stronger point you may not have considered: if you believe all altcoiners are scammers, that means we have only seen the things that were invented by scammers, since all honest people who had good ideas decided not to implement them, as the only way to do it would be to create a scammy shitcoin. One example is [Bitcoin Hivemind](nostr:naddr1qqyxs6tkv4kkjmnyqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cd3vm3c).
If it is possible to do these ideas without creating shitcoins we may start to see new things that are actually good.
-
# A list of things artificial intelligence is not doing
If AI is so good why can't it:
- write good glue code that wraps a documented HTTP API?
- make good translations using available books and respective published translations?
- extract meaningful and relevant numbers from news articles?
- write mathematical models that fit perfectly to available data better than any human?
- play videogames without cheating (i.e. simulating human vision, attention and click speed)?
- turn pure HTML pages into pretty designs by generating CSS?
- predict the weather?
- calculate building foundations?
- determine stock values of companies from publicly available numbers?
- smartly and automatically test software to uncover bugs before releases?
- predict sports matches from the ball and the players' movement on the screen?
- continuously improve niche/local search indexes based on user input and reaction to results?
- control traffic lights?
- predict sports matches from news articles and teams' and players' history?
This was posted first on [Twitter](https://twitter.com/fiatjaf/status/1477942802805837827).
-
# An imbecile algorithm of evolution
Suppose you want to write the word BANANA starting from OOOOOO and using only random changes to the letters. The changes happen by multiplying the original word into several others, each with a different change.
In the first period, BOOOOO and OOOOZO appear. And then the environment decides that all words that don't start with a B are eliminated. Only BOOOOO remains and the algorithm continues.
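Just to make that mechanism concrete, here is a minimal sketch of the algorithm in Haskell (assuming the `random` package; all the names are made up). Notice that the target word has to be written into the selection rule itself:
```haskell
import System.Random (randomRIO)

target :: String
target = "BANANA"

-- how many letters already match the target
score :: String -> Int
score w = length (filter id (zipWith (==) w target))

-- change one random position into one random letter
mutate :: String -> IO String
mutate w = do
  i <- randomRIO (0, length w - 1)
  c <- randomRIO ('A', 'Z')
  pure (take i w ++ [c] ++ drop (i + 1) w)

-- the "environment": it discards any mutation that scores worse than the parent
evolve :: String -> IO String
evolve w
  | w == target = pure w
  | otherwise = do
      candidate <- mutate w
      evolve (if score candidate >= score w then candidate else w)

main :: IO ()
main = evolve "OOOOOO" >>= putStrLn  -- eventually prints BANANA
```
It reaches BANANA quickly, but only because the environment already "knows" BANANA, which is exactly the problem discussed below.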
It is easy to conceive of the evolution of species happening this way, as long as you are always the one controlling the part where the environment decides who survives.
However, there are only two options:
1. If the environment decides things randomly, the chance of arriving at the correct word by this method is so small that it can be considered zero.
2. If the environment decides things in a thought-out way, we fall into _intelligent design_.
I believe this is a decent statement of the ["no free lunch"](https://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization) argument as applied to the critique of Darwinism by William Dembski.
The Darwinist response consists of saying that there is no such thing as BANANA as a final goal. The words can keep changing randomly, and whatever remains, remains; we cannot say that a goal was reached or missed. And then the proponents of intelligent design will say that the result we arrived at cannot have been the fruit of a random process, that BANANA is qualitatively different from AYZOSO, and there are various ways of "proving" that it is, using mathematical models and so on.
I am left with the impression, though, that this matter can only be settled as a yes or a no through a discussion of the premises, and there comes a point at which no more mathematical proofs are possible, only subjectivity.
Then I remember my humble solution to the problem of the dog that presses the keys of a keyboard at random and writes the complete works of Shakespeare: even if it does so, none of it will have any meaning without an intelligence of the human type there to read it and realize that it is not a mess, but a text that makes sense to it. The miracle happens not at the moment the dog stumbles over the keyboard, but at the moment the man looks at the screen.
Whether the evolution algorithm arrived at the word BANANA or at UXJHTR makes no difference to it, but it makes a difference to us, who have a human intelligence and are watching. The man would also think there is _something_ behind that event of the dog typing the works of Shakespeare, and how could anyone in their right mind think otherwise?
-
# Per Bylund's insight
The firm doesn't exist because, like [Coase said](https://en.wikipedia.org/?curid=83030), it is inefficient to operate in a fully open market and production processes need some bubbles of central planning.
Instead, what happens is that a firm is created because an entrepreneur is doing a new thing (and here I imagine that doing an old thing in a new context also counts as doing a new thing, but I didn't read his book), and for that new thing there is no market, there are no specialized workers offering the services needed, nor other businesses offering the higher-order goods that the entrepreneur wants, so he must do it all by himself.
So the entrepreneur goes and hires workers and buys materials more generic than he wanted and commands these to build what he wants exactly. It is less efficient than if he could buy the precise services and goods he wanted and combine those to yield the product he envisaged, but it accomplishes the goal.
Later, when that specific market evolves, it's natural that specialized workers and producers of the specific factors begin to appear, and the market gets decentralized.
-
# `OP_CHECKTEMPLATEVERIFY` and the "covenants" drama
There are many ideas for "covenants" (I don't think this concept helps in the specific case of examining proposals, but fine). Some people think "we" (it's not obvious who is included in this group) should somehow examine them and come up with the perfect synthesis.
It is not clear what form this magic gathering of ideas will take and who (or which ideas) will be allowed to speak, but suppose it happens and there is intense research and conversations and people (ideas) really enjoy themselves in the process.
What are we left with at the end? Someone has to actually commit the time, put in the effort and come up with a concrete proposal to be implemented on Bitcoin, and whatever the result is it will have trade-offs. Some great features will not make it into this proposal, others will make it in a worsened form, and some will be contemplated very nicely; there will be some extra costs related to maintenance or code complexity that will have to be accepted. Someone, a concrete person, will decide upon these things using their own personal preferences and biases, and many people will not be pleased with their choices.
That has already happened. Jeremy Rubin has already conjured all the covenant ideas in a magic gathering that lasted more than 3 years and came up with a synthesis that has the best trade-offs he could find. CTV is the result of that operation.
---
The fate of CTV in popular opinion, illustrated by the thoughtless responses it has evoked, such as "can we do better?" and "we need more review and research and more consideration of other ideas for covenants", is a preview of what would probably happen if these suggestions were followed again and someone spent the next 3 years considering ideas, talking to other researchers and coming up with a new synthesis. Again, that person would be faced with "can we do better?" responses from people who were not happy enough with the choices.
And unless some famous Bitcoin Core or retired Bitcoin Core developers were personally attracted by this synthesis then they would take some time to review and give their blessing to this new synthesis.
To summarize the argument of this article: the actual issue in the current CTV drama is that there exist hidden criteria for proposals to be accepted by the general community into Bitcoin, and no one has these criteria clear in their minds. It is not as simple and straightforward as "do research", nor is it as humanly impossible as "get consensus"; it has a much bigger social element to it, but I also do not know the exact form of these hidden criteria.
This is said not to blame anyone -- except the ignorant people who are not aware of the existence of these things and just keep repeating completely false and unhelpful advice for Jeremy Rubin and are not self-conscious enough to ever realize what they're doing.
-
# IPFS problems: Conceit
IPFS is trying to do many things. The IPFS leaders are revolutionaries who think they're smarter than the rest of the entire industry.
The fact that they've first proposed a protocol for peer-to-peer distribution of immutable, content-addressed objects, then later tried to fix [that same problem](nostr:naddr1qqyrqen9xf3nvdpeqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cmdjnnj) using their own half-baked solution (IPNS) is one example.
Other examples are their odd appeal to decentralization in a very non-specific way, their excessive [flirtation with Ethereum](nostr:naddr1qqyxxdpev5cnsvpkqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cta4a2e) and their never-to-be-finished can-never-work-as-advertised _Filecoin_ project.
They could have focused on just making the infrastructure for distribution of objects through hashes (not saying this would actually be a good idea, but it had some potential) over a peer-to-peer network, but in trying to reinvent the entire internet they screwed everything up.
-
# Replacing the web with something saner
This is a simplification, but let's say that basically there are just 3 kinds of websites:
1. Websites with content: text, images, videos;
2. Websites that run full apps that do a ton of interactive stuff;
3. Websites with some interactive content that uses JavaScript, or "mini-apps";
In a saner world we would have 3 different ways of serving and using these. 1 would be "the web" (and it was for a while, although I'm not claiming here that the past is always better and wanting to get back to the glorious old days).
1 would stay as "the web", just static sites, styled with CSS, no JavaScript whatsoever, but designers could still thrive and make them look pretty. Or it could also be something like [Gemini](https://gemini.circumlunar.space/). Maybe the two protocols could coexist.
2 would be downloadable native apps, much easier to write and maintain for developers (considering that multi-platform and cross-compilation is easy today and getting easier), faster, more polished experience for users, more powerful, integrates better with the computer.
(Remember that since no one would be striving to make the same app run both on browsers and natively no one would have any need for Electron or other inefficient bloated solutions, just pure native UI, like the Telegram app, have you seen that? It's fast.)
But 2 is mostly for apps that people use every day, something like Google Docs, email (although email is also broken technology), Netflix, Twitter, Trello and so on, and all those hundreds of niche SaaS that people pay monthly fees to use, each tailored to a different industry (although most of the functions they all implement are the same everywhere). What do we do with dynamic open websites like StackOverflow, for example, where one needs not only to read, but also to search and interact in multiple ways? What about that website that asks you a bunch of questions and then discovers the name of the person you're thinking about? What about that mini-app that calculates the hash of your provided content or shrinks your video, or that one that hosts your image without asking any questions?
All these and tons of others would fall into category 3, that of instantly loaded apps that you don't have to install, and yet they run in a sandbox.
The key for making category 3 worth investing time into is coming up with some solid grounds, simple enough that anyone can implement in multiple different ways, but not giving the app too many choices.
Telegram or Discord bots are super powerful platforms that can accommodate most kinds of apps. They can't beat a native app made specifically for one purpose, but they allow anyone to provide instantly usable apps with very low overhead, and since the experience is so simple, intuitive and fast, users tend to like it and sometimes even pay for their services. There could exist a protocol that brings apps like that to the open world of (I won't say "web") domains and the websockets protocol -- with multiple different clients, each making their own decisions on how to display the content sent by the servers that are powering these apps.
Another idea is that of [Alan Kay](https://www.quora.com/Should-web-browsers-have-stuck-to-being-document-viewers/answer/Alan-Kay-11): to design a nice little OS/virtual machine that can load these apps and run them. Kinda like browsers are today, but providing a better-thought-out, native-like experience and framework, while still sandboxed. And I add: abstracting away details about design, content disposition and so on.
---
These 3 kinds of programs could coexist peacefully. Category 2 programs are just standalone programs: they can do anything and each will be its own thing. Categories 1 and 3, however, are still similar to browsers of today in the sense that you need clients to interact with servers and show the user what they asked for. But by simplifying everything and separating the scopes properly these clients would be easy to write, efficient and small, the environment would be open and the internet would be saved.
## See also
- [On the state of programs and browsers](nostr:naddr1qqyxgdfe8qexvd34qyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cd7nk4m)
-
# nostr - Notes and Other Stuff Transmitted by Relays
The simplest open protocol that is able to create a censorship-resistant global "social" network once and for all.
It doesn't rely on any trusted central server, hence it is resilient; it is based on cryptographic keys and signatures, so it is tamperproof; it does not rely on P2P techniques, therefore it works.
## Very short summary of how it works, if you don't plan to read anything else:
Everybody runs a client. It can be a native client, a web client, etc. To publish something, you write a post, sign it with your key and send it to multiple relays (servers hosted by someone else, or yourself). To get updates from other people, you ask multiple relays if they know anything about these other people. Anyone can run a relay. A relay is very simple and dumb. It does nothing besides accepting posts from some people and forwarding them to others. Relays don't have to be trusted. Signatures are verified on the client side.
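To make the "write a post, sign it with your key and send it to multiple relays" part concrete, here is a minimal sketch in Go. It assumes a JSON event carrying a pubkey, a timestamp, tags, content, an id that hashes those fields and a signature over that id; the signing step is left as a placeholder, since real clients use Schnorr signatures over secp256k1 and that would require an external library.
```
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"time"
)

// Event is a rough sketch of a Nostr note: structured data signed by a key.
type Event struct {
	ID        string     `json:"id"`
	PubKey    string     `json:"pubkey"`
	CreatedAt int64      `json:"created_at"`
	Kind      int        `json:"kind"`
	Tags      [][]string `json:"tags"`
	Content   string     `json:"content"`
	Sig       string     `json:"sig"`
}

// computeID hashes a canonical serialization of the event fields.
func computeID(evt Event) string {
	serialized, _ := json.Marshal([]interface{}{
		0, evt.PubKey, evt.CreatedAt, evt.Kind, evt.Tags, evt.Content,
	})
	hash := sha256.Sum256(serialized)
	return hex.EncodeToString(hash[:])
}

func main() {
	evt := Event{
		PubKey:    "hex-public-key-of-the-author", // placeholder
		CreatedAt: time.Now().Unix(),
		Kind:      1, // a plain text note
		Tags:      [][]string{},
		Content:   "hello from a client",
	}
	evt.ID = computeID(evt)
	evt.Sig = "schnorr-signature-over-the-id" // placeholder for the real signature

	// The same JSON blob is then pushed, over a websocket, to every relay the
	// client wants to publish to; relays only store it and forward it to others.
	payload, _ := json.Marshal([]interface{}{"EVENT", evt})
	fmt.Println(string(payload))
}
```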
## This is needed because other solutions are broken:
### The problem with Twitter
- Twitter has ads;
- Twitter uses bizarre techniques to keep you addicted;
- Twitter doesn't show an actual historical feed from people you follow;
- Twitter bans people;
- Twitter shadowbans people;
- Twitter has a lot of spam.
### The problem with Mastodon and similar programs
- User identities are attached to domain names controlled by third-parties;
- Server owners can ban you, just like Twitter; Server owners can also block other servers;
- Migration between servers is an afterthought and can only be accomplished if servers cooperate. It doesn't work in an adversarial environment (all followers are lost);
- There are no clear incentives to run servers, therefore they tend to be run by enthusiasts and people who want to have their name attached to a cool domain. Then, users are subject to the despotism of a single person, which is often worse than that of a big company like Twitter, and they can't migrate out;
- Since servers tend to be run amateurishly, they are often abandoned after a while — which is effectively the same as banning everybody;
- It doesn't make sense to have a ton of servers if updates from every server will have to be painfully pushed (and saved!) to a ton of other servers. This point is exacerbated by the fact that servers tend to exist in huge numbers, therefore more data has to be passed to more places more often;
- For the specific example of video sharing, ActivityPub enthusiasts realized it would be completely impossible to transmit video from server to server the way text notes are transmitted, so they decided to keep the video hosted only on the single instance where it was posted, which is similar to the Nostr approach.
### The problem with SSB (Secure Scuttlebutt)
- It doesn't have many problems. I think it's great. In fact, I was going to use it as a basis for this, but
- its protocol is too complicated because it wasn't designed to be an open protocol at all. It was just written in JavaScript, probably in a quick way to solve a specific problem, and grew from that, therefore it has weird and unnecessary quirks like signing a JSON string which must strictly follow the rules of [_ECMA-262 6th Edition_](https://www.ecma-international.org/ecma-262/6.0/#sec-json.stringify);
- It insists on having a chain of updates from a single user, which feels unnecessary to me and adds bloat and rigidity to the thing -- each server/user needs to store the entire chain of posts to be sure the new one is valid. Why? (Maybe they have a good reason);
- It is not as simple as Nostr, as it was primarily made for P2P syncing, with "pubs" being an afterthought;
- Still, it may be worth considering using SSB instead of this custom protocol and just adapting it to the client-relay server model, because reusing a standard is always better than trying to get people into a new one.
### The problem with other solutions that require everybody to run their own server
- They require everybody to run their own server;
- Sometimes people can still be censored in these because domain names can be censored.
## How does Nostr work?
- There are two components: __clients__ and __relays__. Each user runs a client. Anyone can run a relay.
- Every user is identified by a public key. Every post is signed. Every client validates these signatures.
- Clients fetch data from relays of their choice and publish data to other relays of their choice. A relay doesn't talk to another relay, only directly to users.
- For example, to "follow" someone a user just instructs their client to query the relays it knows for posts from that public key.
- On startup, a client queries data from all relays it knows for all users it follows (for example, all updates from the last day), then displays that data to the user chronologically.
- A "post" can contain any kind of structured data, but the most used ones are going to find their way into the standard so all clients and relays can handle them seamlessly.
## How does it solve the problems the networks above can't?
- **Users getting banned and servers being closed**
- A relay can block a user from publishing anything there, but that has no effect on them as they can still publish to other relays. Since users are identified by a public key, they don't lose their identities and their follower base when they get banned.
- Instead of requiring users to manually type new relay addresses (although this should also be supported), whenever someone you're following posts a server recommendation, the client should automatically add that to the list of relays it will query.
- If someone is using a relay to publish their data but wants to migrate to another one, they can publish a server recommendation to that previous relay and go;
- If someone gets banned from many relays such that they can't get their server recommendations broadcast, they may still let some close friends know through other means which relay they are publishing to now. Then, these close friends can publish server recommendations to that new server, and slowly the banned user's old follower base will begin finding their posts again from the new relay.
- All of the above is valid too for when a relay ceases its operations.
- **Censorship-resistance**
- Each user can publish their updates to any number of relays.
- A relay can charge a fee (the negotiation of that fee is outside of the protocol for now) from users to publish there, which ensures censorship-resistance (there will always be some Russian server willing to take your money in exchange for serving your posts).
- **Spam**
- If spam is a concern for a relay, it can require payment for publication or some other form of authentication, such as an email address or phone number, and associate these internally with a pubkey that then gets to publish to that relay -- or use other anti-spam techniques, like hashcash or captchas. If a relay is being used as a spam vector, it can easily be unlisted by clients, which can continue to fetch updates from other relays (a minimal sketch of such a paid-relay policy appears after this list).
- **Data storage**
- For the network to stay healthy, there is no need for hundreds of active relays. In fact, it can work just fine with just a handful, given the fact that new relays can be created and spread through the network easily in case the existing relays start misbehaving. Therefore, the amount of data storage required, in general, is much smaller than that required by Mastodon or similar software.
- Or considering a different outcome: one in which there exist hundreds of niche relays run by amateurs, each relaying updates from a small group of users. The architecture scales just as well: data is sent from users to a single server, and from that server directly to the users who will consume that. It doesn't have to be stored by anyone else. In this situation, it is not a big burden for any single server to process updates from others, and having amateur servers is not a problem.
- **Video and other heavy content**
- It's easy for a relay to reject large content, or to charge for accepting and hosting large content. When information and incentives are clear, it's easy for the market forces to solve the problem.
- **Techniques to trick the user**
- Each client can decide how to best show posts to users, so there is always the option of just consuming what you want in the manner you want — from using an AI to decide the order of the updates you'll see to just reading them in chronological order.
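As an illustration of how simple such anti-spam policies can be on the relay side, here is a hypothetical sketch (the names are made up, this is not any specific relay implementation): the relay keeps a set of pubkeys that have paid or otherwise authenticated, and only accepts events from those.
```
package main

import (
	"errors"
	"fmt"
	"sync"
)

// PaidRelay is a hypothetical anti-spam policy: only pubkeys that have paid
// (or passed some other form of authentication) may publish.
type PaidRelay struct {
	mu      sync.Mutex
	allowed map[string]bool // pubkey -> has paid / is authenticated
}

func NewPaidRelay() *PaidRelay {
	return &PaidRelay{allowed: make(map[string]bool)}
}

// MarkPaid is called after an out-of-band payment or signup is confirmed.
func (r *PaidRelay) MarkPaid(pubkey string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.allowed[pubkey] = true
}

// Accept decides whether an incoming event should be stored and relayed.
func (r *PaidRelay) Accept(pubkey string) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	if !r.allowed[pubkey] {
		return errors.New("blocked: pay or authenticate before publishing")
	}
	return nil
}

func main() {
	relay := NewPaidRelay()
	relay.MarkPaid("a1b2c3") // a pubkey that completed a payment

	fmt.Println(relay.Accept("a1b2c3")) // <nil>: accepted
	fmt.Println(relay.Accept("d4e5f6")) // blocked: pay or authenticate before publishing
}
```
Clients, on their side, need nothing more than the list of relay URLs they are willing to query, which is what makes unlisting a spammy relay trivial.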
## FAQ
- **This is very simple. Why hasn't anyone done it before?**
I don't know, but I imagine it has to do with the fact that people making social networks are either companies wanting to make money or P2P activists who want to make a thing completely without servers. They both fail to see the specific mix of both worlds that Nostr uses.
- **How do I find people to follow?**
First, you must know them and get their public key somehow, either by asking or by seeing it referenced somewhere. Once you're inside a Nostr social network you'll be able to see them interacting with other people and then you can also start following and interacting with these others.
- **How do I find relays? What happens if I'm not connected to the same relays someone else is?**
You won't be able to communicate with that person. But there are hints on events that can be used so that your client software (or you, manually) knows how to connect to the other person's relay and interact with them. There are other ideas on how to solve this too in the future but we can't ever promise perfect reachability, no protocol can.
- **Can I know how many people are following me?**
No, but you can get some estimates if relays cooperate in an extra-protocol way.
- **What incentive is there for people to run relays?**
The question is misleading. It assumes that relays are free dumb pipes that exist so that people can move data around through them. If that were the case, yes, the incentives would not exist. The same could in fact be said of DHT nodes in all other p2p network stacks: what incentive is there for people to run DHT nodes?
- **Nostr enables you to move between server relays or use multiple relays but if these relays are just on AWS or Azure what’s the difference?**
There are literally thousands of VPS providers scattered all around the globe today; it's not only AWS or Azure. AWS and Azure are exactly the providers used by single centralized services that need a lot of scale, and even those don't use just these two. For smaller relay servers any VPS will do the job very well.
-
# "House" dos economistas e o Estado
Falta um gênio pra produzir um seriado tipo House só que com economistas. O House do seriado seria um austríaco é o "everybody lies" seria uma premissa segundo a qual o Estado é sempre a causa de todos os problemas.
Situações bem cabeludas poderiam ser apresentadas de maneira que parecesse muito que a causa era ganância ou o mau-caratismo dos agentes, mas na investigação quase sempre se descobriria que a causa era o Estado.
Parece ridículo, mas se eu descrevesse House assim aqui também pareceria. A execução é que importa.
-
# notes on "Economic Action Beyond the Extent of the Market", Per Bylund
Source: <https://www.youtube.com/watch?v=7St6pCipCB0>
Markets work by dividing labour, but that's not as easy as it seems in Adam Smith's example of a pin factory, because
1. a pin factory is not a market, so there is some guidance and orientation, some sort of central planning, inside it that a market doesn't have;
2. it is not clear how exactly the production process will be divided; it is not as obvious as "you cut the thread, I plug the head".
Dividing the labour may produce efficiency, but it also makes each independent worker in the process more fragile, as they become dependent on the others.
This is partially solved by having a lot of different workers, so you do not depend on only one.
If you have many, however, they must agree on where one part of the production process starts and where it ends, otherwise one's outputs will not necessarily coincide with another's inputs, and everything is more-or-less broken.
That means some level of standardization is needed. And indeed the market has constant incentives to standardization.
The statist economist discourse about standardization is that economic development can only flourish when the government comes up with a law that creates some sort of standardization, but in fact the market creates standardization all the time. Some examples of standardization include:
* programming languages, operating systems, internet protocols, CPU architectures;
* plates, forks, knives, glasses, tables, chairs, beds, mattresses, bathrooms;
* building with concrete, brick and mortar;
* money;
* musical instruments;
* light bulbs;
* CD, DVD, VHS formats and others alike;
* services that go into every production process, like lunch services, restaurants, bakeries, cleaning services, security services, secretaries, attendants, porters;
* multipurpose steel bars;
* practically any tool that normal people use and that requires a little experience to get going, like a drilling machine or a sanding machine; etc.
Of course it is not that you find standardization everywhere. Especially when the market is smaller or newer, standardization may not have arrived yet.
There remains the truth, however, that division of labour has the potential of doing good.
More than that: every time there are more than one worker doing the same job in the same place of a division of labour chain, there's incentive to create a new subdivision of labour.
From the fact that in our society there is more than one person doing the same job as another, we must conclude that the insight about an efficient way to divide the labour between these workers (and probably its actual implementation) hasn't yet happened for all kinds of jobs.
But to come up with division of labour outside of a factory, some market actors must come up with a way of dividing the labour -- actually determining where one labour will stop and another start (which almost always needs some adjustments, and in fact extra labour, to tie the ends together) -- and these actors must also bear the uncertainty and fragility that division of labour brings when there are not a lot of different workers and standardization and all that.
In fact, when an entrepreneur comes to the market with a radically new service, a service that does not fit in the current standard of division of labour, he must explain to his potential buyers what the service is, how the buyer can benefit from it, and what they will have to do to adapt their current production process to work with that new service. That has happened not long ago with
* services that take food orders from the internet and relay these to the restaurants;
* hostels for cheap accommodation for young travellers;
* Uber, Airbnb, services that take orders and bring homemade food from homes to consumers, and similar services;
* all kinds of software-as-a-service;
* electronic monitoring service for power generators;
* mining planning and mining planning software; and many other industry-specific services.
## See also
* [Profits, not wages, as the originary factor](nostr:naddr1qqyrge3hxa3rqce4qyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c7x67pu)
* [Per Bylund's insight](nostr:naddr1qqyxvdtzxscxzcenqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cuq3unj)
-
# litepub
A Go library that abstracts all the burdensome ActivityPub things and provides just the right amount of helpers necessary to integrate an existing website into the "fediverse" (what an odious name). Made for the [gravity]() integration.
- <https://godoc.org/github.com/fiatjaf/litepub>
-
# Ripple and the problem of the decentralized commit
This is about [Ryan Fugger's Ripple](nostr:naddr1qqyxgenyxe3rzvf4qyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c8pp8zu).
The summary is: unless everybody is good and well-connected at all times a transaction can always be left in a half-committed state, which creates confusion, erodes trust and benefits no one.
If you're unconvinced consider the following protocol flow:
1. A finds a route (A--B--C--D) between her and D somehow;
2. A "prepares" a payment to B, tells B to do the same with C and so on (to prepare means to give B a conditional IOU that will be valid as long as the full payment completes);
3. When the chain of prepared messages reaches D, D somehow "commits" the payment.
4. After the commit, A now _really does_ owe B and so on, and D _really_ knows it has been effectively paid by A (in the form of debt from C) so it can ship goods to A.
The most obvious (but wrong) way of structuring this would be for the entire payment chain to be dependent on the reveal of some secret. For example, the "prepare" messages could contain something like "I will pay you as long as you know `p` such that `sha256(p) == h`".
The payment flow then starts with D presenting A with an invoice that contains `h`, so D knows `p`, but no one else knows. A can then send the "prepare" message to B and B do the same until it reaches D.
When it reaches D, D can be sure that C will pay him, because he knows `p` such that `sha256(p) == h`. He then reveals `p` to C, C then reveals it to B, and B to A. When A gets it she has proof that D has received the payment, therefore she is happy to settle it later with B and can prove to an external arbitrator that she has indeed paid D in case D doesn't deliver the goods.
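Here is a minimal sketch of the conditional IOU used in this naïve flow, just to make the condition explicit (the struct and function names are made up for illustration):
```
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// PreparedIOU is the conditional promise passed along the route:
// "I will pay you `Amount` as long as you know `p` such that sha256(p) == Hash".
type PreparedIOU struct {
	From   string
	To     string
	Amount int64
	Hash   string // hex-encoded sha256 of the secret chosen by D
}

// Redeemable checks whether a revealed preimage satisfies the IOU's condition.
func (iou PreparedIOU) Redeemable(preimage []byte) bool {
	digest := sha256.Sum256(preimage)
	return hex.EncodeToString(digest[:]) == iou.Hash
}

func main() {
	secret := []byte("p, chosen by D when creating the invoice")
	hash := sha256.Sum256(secret)

	// A prepares towards B, B towards C, C towards D, all locked to the same hash.
	iou := PreparedIOU{From: "A", To: "B", Amount: 100, Hash: hex.EncodeToString(hash[:])}

	// When D finally reveals the secret, each hop can verify it locally...
	fmt.Println(iou.Redeemable(secret)) // true
	// ...but nothing here proves *when* it was revealed, which, as the rest of
	// this article argues, is the whole problem.
	fmt.Println(iou.Redeemable([]byte("wrong"))) // false
}
```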
## Issues with the naïve flow above
### What if D never reveals `p` to C?
Then no one knows what happened. And then 10 years later D arrives at C's house (remember they are friends, or have a trust relationship of some sort) and demands his payment, showing `p` to her on a piece of paper. Or worse: he goes directly to court and shows C's message that says "I will pay you as long as you know `p` such that `sha256(p) == h`" (but with an actual number instead of "h"), along with the corresponding `p`. Now the judge has to decide in favor of D.
Now C was supposed to do the same with B, but C is not playing this game anymore; she has lost all contact with B after they did their final settlement many years ago; no one was expecting this.
This clearly can't work. There must be a timeout for these payments.
### What if we have a timeout?
Now what if we say the payment expires in one hour? D cannot hold the payment hostage and reveal `p` after 10 years. It must either reveal it before the timeout or the conditional IOU will be void. Solves everything!
Except no, now we reach the darkest void of the protocol, the flaw that sucks its life into the abyss: subjectivity and ambiguity.
The big issue is that we don't have an independent judge to assert, for example, that D has indeed "revealed" `p` to C in time. C must acknowledge that voluntarily. C could do it using messages over the internet, but these messages are not reliable. C is not reliable. Clocks are not synchronized. Also, if we now require C to confirm it has received `p` from D, then the "prepare" message means nothing, as for D just knowing `p` is no longer enough to claim before an arbitrator that C owes him -- because, again, D also must prove he has shown `p` to C before the timeout, therefore he needs a new signed acknowledgement from C, or from some other party.
Let's see a few examples.
### Subjectivity and perverse incentives
D could send `p` to C, and C could acknowledge it, but then when C goes to B and sends it, B may not acknowledge it and claim it's past the timeout. Now C loses money.
Maybe C should not acknowledge it has received anything from D before checking first with B? But B will have to check with A too! And that subverts the entire flow of the thing. And now A has a "proof of payment" (knowledge of `p`) without even having to acknowledge anything! In this case knowing `p` or not becomes meaningless, as everybody knows `p` without acknowledging it to anyone else.
But even if A is honest and sends an "acknowledge" message to B, now B can just sit quiet and enjoy the credit it has just earned from A without ever acknowledging anything to C. It's perverse incentives at every step.
### Ambiguity
But isn't this a protocol based on trust?, you ask. Isn't C already trusting that B will behave honestly? Therefore if B is dishonest C just has to accept the loss and break the chain of trust with B.
No, because C will not know what happened. B can say "I could have sent you an acknowledgement, but I was waiting for A, and A didn't send anything", and C won't ever know if that was true. Or B could say "what? You didn't send me `p` at all", and that could be true. B could have been offline when C sent it, there could have been a broken connection or many other things, and B continues: "I was waiting for you to present me with `p`, but you didn't, therefore the payment timed out, you can't come here with `p` now, because now A won't accept it from me anymore". That could be true or it could be false, who knows?
Therefore it is impossible for trust relationships and reputations to be maintained in such a system without "good fences".[^ln-solution][^ln-issue]
[^ln-solution]: The [Lightning Network has a solution](nostr:naddr1qqyx2vekxg6rsvejqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823ccs2twc) for the problem of the decentralized commit.
[^ln-issue]: Ironically this same ambiguity problem [is being faced by the Lightning Network community](https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-October/002826.html) when trying to create a reputation/payment system to prevent routing abuses. It seems simple when you first think about it: "let each node manage its own trust", but in fact it is somewhat impossible.
-
# Mises' interest rate theory
Inspired by [Bob Murphy's thesis](http://consultingbyrpm.com/uploads/Dissertation.pdf) against the "pure time preference theory" (see also this [series of podcasts](https://www.bobmurphyshow.com/episodes/ep-31-capital-and-interest-in-the-austrian-tradition-part-3-of-3/)) -- or blatantly copying it -- here are some thoughts on Mises' most wrong take:
* Mises asserts that the market rate of interest is _not_ the originary rate of interest, because the market rate involves entrepreneurial decisions, risk, uncertainty etc. No one lends money in the market with a 100% guarantee that it will be paid back, and so on. But if that is true, where can we see that originary interest? We're supposed to account for its existence and be sure that it is logically there in every trade between present and future, because it's a _category of action_. But then it seems odd to me that it has anything to do with the actual interest.
* Mises criticizes the notion of "profit" from classical economists because it mashed together gains deriving from speculation, risk, other stuff and originary interest -- but that's only because he assumes originary interest as a given (because it's a category of action and so on). If he didn't he could have just not cited originary interest in the list of things that give rise to "profit" and all would be fine.
* Mixing the two points above, it seems very odd to think that we should look for interest as a component of profit. It seems indeed a very classical-economist take. It would be still compatible with Mises' worldview -- indeed more compatible -- if we looked for profit as a component of interest: when someone lends 100 and is paid back 110, that is profit. Plain and simple. Why he did that and why the other person paid isn't for the economist to analyse, nor is it for him to dissect the extra 10 into 9 of interest and 1 of risk remuneration or anything like that. If the borrower hadn't paid, would it have been a loss of 100 or a loss of 109?
* In other moments, Mises talks about the originary rate of interest being the same for all things: apples and bicycles and anything else. But wasn't each person supposed to have their own valuation of each good -- including goods in the present and in the future? Is Mises going to say that it's impossible for someone to value an orange in the future more than a bicycle in the future in comparison with these same goods in the present? (The very "more" in the previous sentence shows us that Mises was engaging in cardinal value calculations when coming up with this theory -- and I hadn't noticed it until after I finished typing the phrase.) In other words: what if someone prefers orange, bicycle, bicycle in the future, orange in the future? That doesn't seem to fit. What is the rate of interest?
* Also, on the point above, what if someone has different rates of interest for goods in different timeframes? For example, someone may prefer a bicycle now a little more than a bicycle tomorrow, but very very much more than a bicycle in two days. That also breaks the notion of "originary interest" as a universal rate.
* Now maybe I misunderstood everything, maybe Mises was talking about originary interest as a rate defined by the market. And he clearly says that: that if the rate of interest is bigger in some market, entrepreneurs will invest capital in that one until it equalizes with the rates in other markets. But all that fits better with the plain notion of profit than with this poorly-crafted notion of originary interest. If you're up to defining and (Mises forbid?) measuring the neutral rate of interest you'll have to arbitrarily choose some businesses to be part of the "market" while excluding others.
* By the way, wasn't originary interest a category of action? How can a category of action be defined and ultimately _fixed_ by entrepreneurial action in a market?
-
# Thafne won Soletrando 2008.
The words Thafne had to spell: "ocioso", "hermético", "glossário", "argênteo", "morfossintaxe", "infra-hepático", "hagiológio". Meanwhile Eder got: "intramuscular", "destilação", "inabitável", "subcutâneo", "homogeneidade", "predecessor", "displicência", "subconsciência", "psicroestesia" (this according to [the Folha website][0], which is certainly missing some of Thafne's words). Seriously, "argênteo"? It is not wrong to say that Globo tried to promote the poor public-school boy from the backlands against the little rich girl from Curitiba.
The most spectacular thing about it is that it backfired and the whole of Brazil rooted for Thafne, as a simple Google search shows. Here are some examples:
* [O problema de Thafne][1] carries comments trying to blame the government of the State of Minas Gerais for Eder's forced victory.
* [this video, showing the program's mistakes and Thafne's triumphant, though partial, victory][2] carries the brilliant description "two-bit Globo wanted to complicate the girl's life!!!!!!!!!!!!!!!!!!!!!!"
* [this video, with the same content][3], but titled "Thafne versus Luciano Huck, the showdown of the century", has, on top of that, several comments from outspoken Thafne supporters:
  * "Wow, that's just dumb, because the doctor said it twice and luciano didn't pay attention, so thafine gave luciano two smackdowns... Next time pay attention to the pronunciation, luciano"
  * "he didn't mispronounce it because he's dumb, it was to manipulate the result"
  * "Gabriel o Bostador went real quiet. What an asshole"
  * "Too bad she lost :("
  * "true... she's the one who won, the other guy just got the title :S"
  * "The girl wiped the floor with that guy, who besides being an idiot is DUMB."
  * and many, many others.
* [Globo Erra e Luciano Huck dá Vexame][4], a brief article describing some of the points in which Eder was favored.
* [this Orkut community][5], just the largest among the several that were created.
The movement in support of Thafne is one of the few examples of the nation's total unity behind a cause.
[0]: <http://www1.folha.uol.com.br/ilustrada/2008/05/407470-aluno-de-escola-publica-de-minas-vence-soletrando-huck-da-vexame.shtml>
[1]: <https://misenews.wordpress.com/2012/06/22/o-problema-de-thafne/>
[2]: <https://www.youtube.com/watch?v=lNW_QAiptsY>
[3]: <https://www.youtube.com/watch?v=Va8XxXgnY-c>
[4]: <https://www.putsgrilo.com.br/televisao/soletrando-globo-erra-e-luciano-huck-da-vexame/>
[5]: <http://orkut.google.com/c54999457.html>
-
# Democracy as a failed open-network protocol
In the context of protocols for peer-to-peer open computer networks -- those in which new actors can freely enter and immediately start participating in the protocol, without any central entity, especially without any central human mind judging things from the top -- it's common for decisions about the protocol to be made taking into consideration all the possible ways a rogue peer can disrupt the entire network, abuse it, or make the experience terrible for others. The protocol design must account for all incentives in play and how they will affect each participant, always having in mind that each participant may be acting in a purely egoistical, self-interested manner, not caring at all about the health of the network (even though most participants won't be like that). So a protocol, to be successful, must have incentives aligned such that self-interested actors cannot profit by hurting others and will gain most by cooperating (whatever that means in the envisaged context), or there must be a way for other peers to detect attacks and other kinds of harm or attempted harm and neutralize them.
Since computers are very fast, protocols can be designed to be executed many times per day by peers involved, and since the internet is a very open place to which people of various natures are connected, many open-network protocols with varied goals have been tried in large scale and most of them failed and were shut down (or kept existing, but offering a bad experience and in a much more limited scope than they were expected to be). Often the failure of a protocol leads to knowledge about its shortcomings being more-or-less widespread and agreed upon, and these lead to the development of a better protocol the next time something with similar goals is tried.
Ideally **democracies** are supposed to be an open-entry network in the same sense as these computer networks, and although that is a noble goal, it's one full of shortcomings. Democracies are supposed to be the governing protocol of States that have the power to do basically anything with the lives of millions of citizens.
One simple inference we may take from the history of computer peer-to-peer protocols is that the ones that work better are those that are simple and small in scope (**Bitcoin**, for example, is very simple; **BitTorrent** is also very simple and very limited in what it tries to do and the number of participants that get involved in each run of the protocol).
Democracies, as we said above, are the opposite of that. Besides being in a very hard position to succeed as an open protocol, democracies also suffer from the fact that each run takes a long time, so it's hard to see where they are failing each time.
The fundamental incentives of democracy, i.e. the rules of the protocol, posed by the separation of powers and checks-and-balances, are basically the same in every place and in every epoch since the 13th century, and even today most people who dedicate their lives to the subject still don't see how [they're completely flawed](nostr:naddr1qqyrzc3nvenxxdpkqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cjc4mf0).
The system of checks and balances was thought from the armchair of a couple of political theorists who had never done anything like that in their lives, didn't have any experience dealing with very adversarial environments like the internet -- and probably couldn't even imagine that the future users of their network were going to be creatures completely different than themselves and their fellow philosophers and aristocrats who all shared the same worldview (and how fast that future would come!).
## Also
* [The illusion of checks and balances](nostr:naddr1qqyrzc3nvenxxdpkqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cjc4mf0)
-
# Bolo
It seems that from 1987 to around 2000 there was a big community of people who played this game called ["Bolo"][wikipedia]. It was a game in which people controlled a tank and killed others while trying to capture bases in team matches. Always 2 teams, from 2 to 16 total players, games could last from 10 minutes to 12 hours. I'm still trying to understand all this.
The game looks silly from some [videos][videos] you can find today, but apparently it was very deep in strategy because people developed [strategy][strategy-guide] [guides][pillbox-guide] and [wrote][letter-1] [extensively][letter-2] about it and [Netscape even supported `bolo:` URLs][clickable-links] out of the box.
> The two most important elements on the map are pillboxes and bases. Pillboxes are originally neutral, meaning that they shoot at every tank that happens to get in its range. They shoot fast and with deadly accuracy. You can shoot the pillbox with your tank, and you can see how damaged it is by looking at it. Once the pillbox is subdued, you may run over it, which will pick it up. You may place the pillbox where you want to put it (where it is clear), if you've enough trees to build it back up.
> Trees are harvested by sending your man outside your tank to forest the trees. Your man (also called a builder) can also lay mines, build roads, and build walls. Once you have placed a pillbox, it will not shoot at you, but only your enemies. Therefore, pillboxes are often used to protect your bases.
That quote was taken from this ["augmented FAQ"][augmented-faq] written by some user. Apparently there were many FAQs for this game. A FAQ is, after all, just a simple, clear, direct-to-the-point way of writing about anything, previously known as a [summa][summa][^summa-k]; it doesn't have to be related to any actually frequently asked question.
More unexpected Bolo writings include [an etiquette guide][etiquette], an [anthropology study][anthropology] and [some wonderings on the reverse pill war tactic][reverse-pill].
[videos]: https://www.youtube.com/results?search_query=winbolo
[wikipedia]: https://en.wikipedia.org/wiki/Bolo_(1987_video_game)
[clickable-links]: http://web.archive.org/web/19980214020327/http://boloweb.stanford.edu/BoloWeb.html
[faq]: https://web.archive.org/web/20070518233700/http://bishop.mc.duke.edu/bolo/guides/stuartfaq.html
[letter-1]: http://web.archive.org/web/19961214060949/http://rost.abo.fi/~gpappas/Bolo/News/mikael-bl.html
[letter-2]: http://web.archive.org/web/19961214060959/http://rost.abo.fi/~gpappas/Bolo/News/mikael-kev.html
[strategy-guide]: http://web.archive.org/web/19961214052754/http://rost.abo.fi/~gpappas/Bolo/guide.html
[etiquette]: http://web.archive.org/web/19961214052805/http://rost.abo.fi/~gpappas/Bolo/etiquette.html
[pillbox-guide]: http://web.archive.org/web/19961214052818/http://rost.abo.fi/~gpappas/Bolo/PBguide/pbguide1.html
[anthropology]: http://web.archive.org/web/19961214052830/http://rost.abo.fi/~gpappas/Bolo/anthropology.html
[reverse-pill]: http://web.archive.org/web/19990428023004/http://powered.cs.yale.edu:8000/%7Ebayliss/bolo.html
[augmented-faq]: http://web.archive.org/web/19970118071637/http://rost.abo.fi/~gpappas/Bolo/faq.html
[summa]: https://en.wikipedia.org/wiki/Summa
[^summa-k]: It's not the same thing, but I couldn't help but notice the similarity.
-
# Lagoa Santa: how to get there -- starting from the Belo Horizonte bus station
Getting off your bus at the Belo Horizonte bus station at a little past 4 in the morning, you will find yourself facing a cowboy drinking beer in his typical garb at a bar right there in the arrivals area. Go up the staircase on the right, which leads to the station's parking lot. Turn left and walk for roughly 400 meters, crossing an area where suspicious people -- though probably asleep on their feet -- watch you, and then a little square occupied by a clan of beggars. When you spot a huge obelisk in the middle of an intersection of two avenues, turn left and walk another 400 meters. You will see an enormous, old and beautiful station with a square in front of it, adorned with beautiful water fountains. Run from there and head to a stretch of street to the right of that square. An old stage from past carnivals will be standing more or less in the middle of the charming little cobblestone street: that is where you will catch your next bus.
To enter the station you need a card with rechargeable credits. A prudent traveller always leaves some credits on their card in order to avoid queues and other availability problems when arriving tired from a trip, in a hurry or at odd hours. That kind of person will realize they have been thoroughly duped upon noticing that the credits on their card, topped up on their last visit to Belo Horizonte three months earlier, have expired and been absorbed by the public coffers. They will therefore have to buy more credits. The booth where cards are topped up opens at 5am, but don't be surprised if it still hasn't opened when the first bus arrives, at 5:10am.
With some luck, a young man in a hoodie, authorized by two or three bus-system inspectors who chat cheerfully nearby, will be operating the turnstile. He lets the drunks, the hustlers and the street kids in without paying. Quite empathetic and perceptive of other people's despair, this good fellow will probably let you in without paying too.
Once inside the bus, don't be intimidated by the loudmouths and bullies who, deeply offended at the driver for stopping at the stations after the previous buses had ignored these exalted passengers waiting there, come yelling to demand an explanation.
The bus's final stop, 40 minutes later, is the Morro Alto terminal. There you will see, if you look carefully among various buses and people who arouse your most honest suspicion, a dark, unlit vehicle, numbered **5882**, sheltering inside it a driver and a fare collector sleeping the sleep of the just.
Wait at the door for another twenty minutes or so until, suddenly awake, the driver starts the bus, opens the doors and already begins, gently, to pull away. Run in, but then wait a while longer, while the people with loaded cards go through and take the best seats, until the fare collector wakes up and decides to charge you the fare in that old means of payment, once the most liquid of all: cash.
This last bus should finally take you to Lagoa Santa.
-
# Blender
The fragility of human communication becomes clear when someone turns on the blender.
-
# An argument according to which fractional-reserve banking is merely theft and nothing else
Fractional-reserve banking isn't anything else besides transfer of money from the people at large to bankers.
It has been argued that fractional-reserve banking serves a purpose in making new funds available, out of no one's pocket, for lending, thus directing resources to productive borrowers. This financing method is said to be preferable to the more conservative way of borrowing funds directly from a saver and then lending them to others, because it uses new money, money not tied to anyone else before, and thus it's cheaper and involves less friction.
Instead, what happens is that someone must at all times be the owner of each unit of money. So when banks use their power of generating fractional-reserve funds, they are creating new money and they are its owners -- at this point, a theft occurs from the public at large to them -- and then they proceed to lend their own money. From this description it is clear that the fact that bank customers have previously deposited their own funds in the banks' vaults has no direct relation to the fact that banks created money afterwards; there's only a legal relation, plus the fact that banks may need the cash deposited by their customers to redeem borrowers' claims, but even that wouldn't be necessary if banks were allowed to print their own cash.
-
# Thomas Kuhn doesn't even mention the "scientific method"
What defines a science is the carving out of a slice of reality based on a paradigm that everyone entering that science's field accepts as true. That's it.
The "scientific method" is neither necessary nor sufficient (even though it itself has to presuppose a series of axioms in order to work; otherwise one would end up having to test every syllable of one's experiments, and each of those tests would be impossible to perform since they would necessarily require other prior knowledge, etc.).
That is why
- theoretical physics can be a science even though it doesn't perform experiments;
- and economics can be a science even though it doesn't perform experiments (fine, there are people who insist on doing "experiments" in economics, but those are really just a huge waste of time based on statistics; that doesn't matter, even these things can have some value as long as they are not understood as being part of a scientific method);
- and history can be a science, however strange that may seem, it being enough that the historian gathers facts from the past under the presupposition of a paradigm, for example, that there is a direction to history, or whatever (I don't think this science of history exists nowadays; it seems to me that each historian is doing something different, without many paradigms beyond common sense);
- and biology can be a science even if it consists solely of a long effort of classification, and in fact it is one today, since it sees each species and its parts as the product of an evolutionary process whose foundations constitute a paradigm, and rejects other views, such as teleology.
- [Método científico](nostr:naddr1qqyr2wf3vgmx2dmrqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823chtnaca)
- [A estrutura paradigmática da ciência](nostr:naddr1qqyrzdfsxyuxye3cqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c4awxqq)
- [Um algoritmo imbecil da evolução](nostr:naddr1qqyryv35xsunwvfnqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c45a0t3)
-
# Drivechain comparison with Ethereum
Ethereum and other "smart contract platforms" -- capable of running Turing-complete code, with a "developer-friendly" mindset and community -- have been running for years, and they have produced a very low number of potentially useful "contracts".
What are these contracts, actually? (Considering Ethereum, but others are similar:) they are sidechains that run inside the Ethereum blockchain (and thus their verification and data storage are forced upon all Ethereum nodes). Users can peg-in to a contract by depositing money on it and peg-out by making a contract operation that sends money to a normal Ethereum address.
Now be generous and imagine these platforms are able to produce 3 really cool, useful ideas (out of many thousands of attempts): [Bitcoin](nostr:naddr1qqyryveexumnyd3kqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c7nywz4) can copy these, turn them into 3 different sidechains, each running fixed, specific, optimized code. Bitcoin users can now opt to use these platforms by transferring coins to them -- all that without damaging the nodes or the consensus protocol that has been running for years, and without forcing anyone to be aware of these chains.
The process of turning a useful idea into a sidechain doesn't come spontaneously, and can't be done by a single company (as often happens in Ethereum-land): it must be acknowledged by a rough consensus in the Bitcoin community that that specific sidechain with that specific design is a desirable thing, and ultimately approved by miners, as they're the ones who are going to be in charge of it.
-
# contratos.alhur.es
A website that allowed people to fill a form and get a standard _Contrato de Locação_.
Better than all the other "templates" that float around the internet, which are badly formatted `.doc` files.
It was fully programmable so other templates could be added later, but I never did.
This website made maybe one dollar in Google Ads (and Google has probably stolen it, like so many other dollars they have stolen with their bizarre requirements).
- <https://github.com/fiatjaf/contratos>
- <http://contratos.alhur.es>
-
# Lightning and its fake HTLCs
Lightning is terrible but can be very good with two tweaks.
## How Lightning would work without HTLCs
In a world in which HTLCs didn't exist, Lightning channels would consist only of balances. Each commitment transaction would have two outputs: one for peer `A`, the other for peer `B`, according to the current state of the channel.
When a payment was attempted through the channel, peers would just trust each other to update the state when necessary. For example:
1. Channel `AB`'s balances are `A[10:10]B` (in sats);
2. `A` sends a 3sat payment through `B` to `C`;
3. `A` asks `B` to route the payment. Channel `AB` doesn't change at all;
4. `B` sends the payment to `C`, `C` accepts it;
5. Channel `BC` changes from `B[20:5]C` to `B[17:8]C`;
6. `B` notifies `A` the payment was successful, `A` acknowledges that;
7. Channel `AB` changes from `A[10:10]B` to `A[7:13]B`.
This is the case of success: everything is fine, no glitches, no dishonesty.
But notice that `A` could have refused to acknowledge that the payment went through, either because of a bug, or because it went offline forever, or because it is malicious. Then the channel `AB` would stay as `A[10:10]B` and `B` would have lost 3 satoshis.
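Here is a minimal sketch, in Go, of such a balance-only channel (the names are invented for illustration); note that `Shift` is only ever applied when both peers voluntarily agree to it, which is exactly the trust assumption described above:
```
package main

import (
	"errors"
	"fmt"
)

// Channel is a balance-only channel: just two numbers both peers agree on.
type Channel struct {
	Left, Right int64 // sats owned by each side in the latest commitment
}

// Shift moves sats from the left side to the right side. In this HTLC-less
// world it is only applied when *both* peers acknowledge the payment.
func (c *Channel) Shift(amount int64) error {
	if c.Left-amount < 0 || c.Right+amount < 0 {
		return errors.New("insufficient balance")
	}
	c.Left -= amount
	c.Right += amount
	return nil
}

func main() {
	ab := &Channel{Left: 10, Right: 10} // A on the left, B on the right
	bc := &Channel{Left: 20, Right: 5}  // B on the left, C on the right

	// B forwards 3 sats to C and C accepts:
	bc.Shift(3)
	fmt.Printf("BC after payment: B[%d:%d]C\n", bc.Left, bc.Right) // B[17:8]C

	// Only if A acknowledges success does AB get updated; if A vanishes here,
	// B has already paid C and simply eats the 3 sat loss.
	ab.Shift(3)
	fmt.Printf("AB after acknowledgement: A[%d:%d]B\n", ab.Left, ab.Right) // A[7:13]B
}
```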
## How Lightning would work with HTLCs
HTLCs are introduced to remedy that situation. Now instead of commitment transactions always having only two outputs, one for each peer, they can have HTLC outputs too. These HTLC outputs can go to either side depending on the circumstances.
Specifically, the peer that is sending the payment can redeem the HTLC after a number of blocks have passed. The peer that is receiving the payment can redeem the HTLC if they are able to provide the preimage to the hash specified in the HTLC.
Now the flow is something like this:
1. Channel `AB`'s balances are `A[10:10]B`;
2. `A` sends a 3sat payment through `B` to `C`;
3. `A` asks `B` to route the payment. Their channel changes to `A[7:3:10]B` (the middle number is the HTLC).
4. `B` offers a payment to `C`. Their channel changes from `B[20:5]C` to `B[17:3:5]C`.
5. `C` tells `B` the preimage for that HTLC. Their channel changes from `B[17:3:5]C` to `B[17:8]C`.
6. `B` tells `A` the preimage for that HTLC. Their channel changes from `A[7:3:10]B` to `A[7:13]B`.
Now if `A` wants to trick `B` and stops responding, `B` doesn't lose money, because `B` knows the preimage: `B` just needs to publish the commitment transaction `A[7:3:10]B`, which gives him 10 sats, and then redeem the HTLC using the preimage he got from `C`, which gives him 3 sats more. `B` is fine now.
In the same way, if `B` stops responding for any reason, `A` won't lose the money it put in that HTLC, it can publish the commitment transaction, get 7 back, then redeem the HTLC after the certain number of blocks have passed and get the other 3 sats back.
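The HTLC output itself can be sketched as a small decision function (hypothetical names, ignoring the actual Bitcoin script): it pays the receiving side if they present the preimage, and returns the funds to the sending side once enough blocks have passed.
```
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// HTLC is the extra commitment output that exists while a payment is in flight.
type HTLC struct {
	Amount      int64
	PaymentHash [32]byte
	Expiry      int64 // block height after which the sender can take the funds back
	Sender      string
	Receiver    string
}

// Claim decides who gets the HTLC amount once the commitment hits the chain.
func (h HTLC) Claim(preimage []byte, currentHeight int64) string {
	digest := sha256.Sum256(preimage)
	if bytes.Equal(digest[:], h.PaymentHash[:]) {
		return h.Receiver // the receiver knows the secret and redeems the HTLC
	}
	if currentHeight >= h.Expiry {
		return h.Sender // the payment never completed, the sender recovers the funds
	}
	return "" // still locked, nobody can spend it yet
}

func main() {
	secret := []byte("preimage revealed by C")
	hash := sha256.Sum256(secret)

	htlc := HTLC{Amount: 3, PaymentHash: hash, Expiry: 700000, Sender: "A", Receiver: "B"}

	fmt.Println(htlc.Claim(secret, 699900))      // B: redeems with the preimage
	fmt.Println(htlc.Claim([]byte("?"), 700001)) // A: the timeout path
	fmt.Println(htlc.Claim([]byte("?"), 699900)) // (empty): still locked
}
```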
## How Lightning doesn't really work
The example above of how the HTLCs work is very elegant but has a fatal flaw in it: transaction fees. Each new HTLC added increases the size of the commitment transaction and requires yet another transaction to be redeemed. If we consider fees of 10000 satoshis, that means any HTLC below that amount is as if it didn't exist, because we can't ever redeem it anyway. In fact the Lightning protocol explicitly dictates that if HTLC output amounts are below the fee necessary to redeem them they shouldn't be created at all.
What happens in these cases then? Nothing, the amounts that should be in HTLCs are moved to the commitment transaction miner fee instead.
So, considering a transaction fee of 10000sat for these HTLCs, anyone sending Lightning payments below 10000sat is operating according to the _unsafe protocol_ described in the first section above.
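The rule described above can be sketched as a single check (the threshold and the names are illustrative; the actual protocol works with a negotiated dust limit plus the fee needed to sweep the HTLC, but the effect is the same):
```
package main

import "fmt"

// addHTLC sketches the rule described above: if the amount is too small to be
// worth redeeming on-chain, no real HTLC output is created and the sats are
// simply added to the commitment transaction's miner fee.
func addHTLC(amountSat, redeemFeeSat int64, commitFee *int64, htlcs *[]int64) bool {
	if amountSat <= redeemFeeSat {
		*commitFee += amountSat // "fake" HTLC: the value silently becomes fee
		return false
	}
	*htlcs = append(*htlcs, amountSat) // real HTLC output, redeemable on-chain
	return true
}

func main() {
	var commitFee int64
	var htlcs []int64

	fmt.Println(addHTLC(3, 10000, &commitFee, &htlcs))     // false: below the fee, fake HTLC
	fmt.Println(addHTLC(50000, 10000, &commitFee, &htlcs)) // true: big enough to be real
	fmt.Println(commitFee, htlcs)                          // 3 [50000]
}
```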
It is actually worse, because consider what happens when a channel in the middle of a route has a glitch or one of the peers is unresponsive. The other node, thinking they are operating under the _trustless protocol_, will proceed to publish the commitment transaction, i.e. close the channel, so they can redeem the HTLC -- only then do they find out they are actually in the _unsafe protocol_ realm: there is no HTLC to be redeemed at all, and they lose not only the money but also the channel (which cost a lot of money to open and close, in overall transaction fees).
One of the biggest features of the _trustless protocol_ is the payment proof. Every payment is identified by a hash, and whenever the payee releases the preimage for that hash it means the payment is complete. The incentives are in place so that all nodes in the path pass the preimage back until it reaches the payer, who can then use it as proof that they have sent the payment and that the payee has received it. This feature is also lost in the _unsafe protocol_: if a glitch happens or someone goes offline on the preimage's way back, there is no way the preimage will reach the payer, because no HTLCs are published and redeemed on the chain. The payee may have received the money, but the payer will not know -- yet the payer will have lost the money sent anyway.
## The end of HTLCs
So considering the points above you may be sad because in some cases Lightning doesn't use these magic HTLCs that give meaning to it all. But the fact is that no matter what anyone thinks, HTLCs are destined to be used less and less as time passes.
Given that over time Bitcoin transaction fees tend to rise, and also that multipart payments (MPP) are increasingly being used on Lightning, for good reasons, we can expect that soon no HTLC will ever be big enough to be actually worth redeeming, and we will be at a point in which not a single HTLC is real and they're all fake.
Another thing to note is that the current _unsafe protocol_ kicks in whenever the HTLC amount is below what the Bitcoin transaction fee to redeem it would be, but this is not a reasonable threshold. It is not reasonable to lose a channel and then pay 10000sat in fees to redeem a 10001sat HTLC. At which point does it become reasonable to do it? Probably at an amount many times above that, so it would be reasonable to increase even further the threshold above which real HTLCs are made -- thus making their existence rarer and rarer.
These are good things, because we don't actually need HTLCs to make a functional Lightning Network.
## We must embrace the _unsafe protocol_ and make it better
So the _unsafe protocol_ is not necessarily very bad, but the way it is being done now is, because it suffers from two big problems:
1. Channels are lost all the time for no reason;
2. No guarantees of the proof-of-payment ever reaching the payer exist.
The first problem we fix by just stopping the current practice of closing channels when there are no real HTLCs in them.
That, however, creates a new problem -- or actually it exacerbates the second: now that we're not closing channels, what do we do with the expired payments in them? These payments should have either been canceled or fulfilled before some block x; now we're in block x+1, our peer has returned from its offline period, and one of us will have to eat the loss from that payment.
That's fine because it's only 3sat and it's better to just lose 3sat than to lose both the 3sat and the channel anyway, so either one would be happy to eat the loss. Maybe we'll even split it 50/50! No, that doesn't work, because it creates an attack vector with peers becoming unresponsive on purpose on one side of the route and actually failing/fulfilling the payment on the other side and making a profit with that.
So we actually need to know who is to blame for these payments, even if we are not going to act on that immediately: we need some kind of arbiter that both peers can trust, such that if one peer is trying to send the preimage or the cancellation to the other and the other is unresponsive, then when the unresponsive peer comes back the arbiter can tell them they are to blame, so they can willfully eat the loss and the channel can continue. Both peers are happy this way.
If the unresponsive peer doesn't accept what the arbiter says then the peer that was operating correctly can assume the unresponsive peer is malicious and close the channel, and then blacklist it and never again open a channel with a peer they know is malicious.
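Here is a minimal sketch of the blame decision described above -- all the names are made up, this is not part of any existing implementation or spec:
```
# all names are made up, this is not any existing implementation or spec
def who_eats_the_loss(record, peer_a, peer_b):
    # `record` is what the arbiter stored when one of the peers showed up
    # in time with the preimage or the cancellation for an expired payment
    if record is None:
        # nobody contacted the arbiter in time: the peers are on their own
        return None
    # the peer that was responsive is not to blame; the other one eats the loss
    return peer_b if record["submitted_by"] == peer_a else peer_a

# the arbiter says "alice" submitted the preimage in time, so "bob", who was
# offline, willingly eats the loss and the channel continues
print(who_eats_the_loss({"submitted_by": "alice"}, "alice", "bob"))  # bob
```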
Again, the differences between this scheme and the current Lightning Network are that:
a. In the current Lightning we always close channels; in this scheme we only close channels when someone is malicious or in other worst-case scenarios (the arbiter being unresponsive, for example).
b. In the current Lightning we close channels without having any clue about who is to blame, then we just proceed to reopen a channel with that same peer even when they were actively trying to harm us before; in this scheme we know who is to blame and can refuse to deal with them again.
## What is missing? An arbiter.
The Bitcoin blockchain is the ideal arbiter. It works in the best possible way if we follow the _trustless protocol_, but as we've seen we can't use the Bitcoin blockchain, because it is expensive.
Therefore we need a new arbiter. That is the hard part, but not an unsolvable one. Notice that we don't need an absolutely perfect arbiter: anything is better than nothing. Even an unreliable arbiter that is offline half of the day is better than what we have today; so is an arbiter that lies, or an arbiter that charges some satoshis for each resolution -- anything.
Here are some suggestions:
- random nodes from the network, selected by an algorithm that both peers agree to, so they can't cheat by selecting themselves (see the sketch after this list). The only thing these nodes have to do is store data from one peer, try to retransmit it to the other peer and record the results for some time.
- a set of nodes preselected by the two peers when the channel is being opened -- same as above, but with more handpicked-trust involved.
- some third-party cloud storage or notification provider with guarantees of having open data in it and some public log-keeping, like Twitter, GitHub or a [Nostr](https://github.com/fiatjaf/nostr) relay;
- peers that get paid to do the job, selected by the fact that they own some token (I know this is stepping too close to the shitcoin territory, but could be an idea) issued in a [Spacechain](https://www.youtube.com/watch?v=N2ow4Q34Jeg);
- a Spacechain itself, serving only as the storage for a bunch of `OP_RETURN`s that are published and tracked by these Lightning peers whenever there is an issue (this looks wrong, but could work).
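As referenced in the first suggestion, here is a minimal sketch of how both peers could select arbiters deterministically so neither can bias the choice -- the channel id and node ids are made up:
```
import hashlib

# channel id and node ids are made up, for illustration only
def pick_arbiters(channel_id, candidate_node_ids, peer_a, peer_b, how_many=2):
    # exclude the two peers so they can't select themselves
    eligible = [n for n in candidate_node_ids if n not in (peer_a, peer_b)]

    def score(node_id):
        # both peers compute the same hashes and therefore get the same ranking
        return hashlib.sha256((channel_id + node_id).encode()).digest()

    return sorted(eligible, key=score)[:how_many]

print(pick_arbiters("chan:abcd1234", ["node1", "node2", "node3", "node4"],
                    "node1", "node2"))
```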
## Key points
1. Lightning with HTLC-based routing was a cool idea, but it wasn't ever really feasible.
2. HTLCs are going to be abandoned and that's the natural course of things.
3. It is actually good that HTLCs are being abandoned, but
4. We must change the protocol to account for the existence of fake HTLCs and thus make the bulk of the Lightning Network usage viable again.
## See also
- [Ripple and the problem of the decentralized commit](nostr:naddr1qqyrxcmzxa3nxv34qyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cjrqar6)
- [The Lightning Network solves the problem of the decentralized commit](nostr:naddr1qqyx2vekxg6rsvejqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823ccs2twc)
-
# Gold is not useless
If there's one thing all common people believe about gold it is that it is useless[^1]. Austrian economists and libertarians in general who argue against central banks or defend a primitive gold standard are often charged with that accusation: that gold is useless, it has no use in industry, it serves no purpose besides ornamentation, so it is a silly commodity, a luxurious one, and it would be almost immoral to have such a thing in such a central position in an economy as the position of money.
I've seen libertarians in general argue things such as: "it is used in some dental operations", which means people make dental prostheses out of gold, something that fits in the same category as jewelry, I would say.
There's also the argument of electronic connectors. That appears to be true, but it wouldn't be enough to counter the anti-gold arguments. The fact remains that, besides its uses as money -- because gold is still considered a form of money even now that it doesn't formally hold that position in any country (otherwise it wouldn't be considered an "investment" or "store of value" everywhere) -- gold is used mainly for ornamental purposes[^2].
All that is a hassle for libertarians in general. Even the Mises Regression Theory wouldn't solve the problem of people being skeptical of gold due to its supposedly immoral nature. That problem is solved once you read what is written in chapter 17 of Richard Cantillon's _Essay on Economic Theory_[^3] (page 103):
> Gold and silver are capable of serving not only the same purpose as tin and copper, but also most of the purposes of lead and iron. They have this further advantage over other metals in that they are not consumed by fire and are so durable that they may be considered permanent. It is not surprising, therefore, that the men who found the other metals useful, valued gold and silver even before they were used in exchange.
So gold is indeed useful. Everybody should already know that. You can even make forks and spoons out of gold. You can make furniture with gold, and many other useful things. As soon as you grasp this, gold is useful again. It is a useful commodity.
Answering the next question becomes easy: why isn't anyone making gold forks anywhere? The questioner already knows the answer: because it is too expensive for that.
And now the Regression Theory comes with its full force: why is it expensive? Because it has gained a lot of value in the process of becoming money. The value of gold as money is much greater than as a metal used in fork production.
---
[^1]: see <http://www.salon.com/2014/02/02/ignore_sean_hannity_gold_is_useless_partner/> or all answers on <https://www.quora.com/Why-is-gold-considered-so-precious-and-why-does-it-have-such-high-prices>.
[^2]: this <https://en.wikipedia.org/wiki/Gold#Modern_applications> section on the Wikipedia page for gold is revealing.
[^3]: <https://mises.org/library/essay-economic-theory-0>
-
# Filemap
[filemap](https://filemap.xyz/) solves the problem of sending and receiving files to and from non-tech people when you don't have a text communication channel with them.
[![xkcd file transfer comic](https://imgs.xkcd.com/comics/file_transfer.png)](https://xkcd.com/949/)
Imagine you want to send files to your grandfather, or you don't use Facebook and your younger cousin who only uses Facebook and doesn't know what email is wants to send you some pictures. It's pretty hard to get a file-sharing channel between people if they're not on the same network. Even if the people had a way to upload the files to some hosting service and then share the link, that would work, but you're not going to dictate `somehostingservice.com/wHr4y7vFGh0` to your grandfather -- or expect your cousin to do that for you and send you an SMS with dozens of those links.
Solution:
* Upload your files to https://filemap.xyz/ (you can either upload directly or share links to things already uploaded -- or even links to pages) and pin them to your grandfather's house address; then tell your grandfather to open https://filemap.xyz/ and look for his address. Done.
* Tell your younger cousin to visit filemap.xyz and upload all the files to his address, later you open the site and look for his address. There are your files.
---
Initially this used [ipfs-dropzone](nostr:naddr1qqyrjvrx8p3xyvesqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cwpfruw), but IPFS is broken, so I migrated to [WebTorrent](https://webtorrent.io/), but that required the file sender to be online hosting their own file, and the entire idea of this service was to make something _easy_, so I migrated to Firebase Hosting, which is also terrible and has a broken API, but at least it is capable of hosting files. I should have used something like S3.
- <https://github.com/fiatjaf/filemap.xyz>
- <https://filemap.xyz/>
-
# The illusion of checks and balances
The website history.com has [a list of some of the most important "checks and balances"](https://www.history.com/topics/us-government/checks-and-balances) put in place by the United States Constitution. Here are some of them and how they are not real _checks_, they're flawed and easily bypassed by malicious peers that manage to enter the network.
> The president (head of the executive branch) serves as commander in chief of the military forces, but Congress (legislative branch) appropriates funds for the military and votes to declare war.
As it has happened [multiple times](https://en.wikipedia.org/wiki/Declaration_of_war_by_the_United_States#Undeclared_wars), the United States has engaged in many undeclared wars -- and many other military encounters that don't get enough media coverage and weren't even formally acknowledged by the Congress.
> Congress has the power of the purse, as it controls the money used to fund any executive actions.
There's a separate power, the Federal Reserve, which is more-or-less under the influence of the executive branch, is controlled by a single man and has the power of creating unlimited money. It has been softly abused by the executive branch since its creation, but since 2008 its scope has been increasingly expanded from just influencing the banking sector to directly using its money to buy all sorts of things and influence all sorts of markets and other actors.
> Veto power. Once Congress has passed a bill, the president has the power to veto that bill. In turn, Congress can override a regular presidential veto by a two-thirds vote of both houses.
If you imagine that both the executive and the legislative are 100% dedicated to going against each other, the president could veto all bills, but then the legislative could enact them all anyway. Congress has the absolute power here (which can be justified by the fact that Congress itself is split into multiple voters, but still this "veto" rule seems more like a gimmick to obscure the process than any actual check).
> The Supreme Court and other federal courts (judicial branch) can declare laws or presidential actions unconstitutional, in a process known as judicial review.
This rule gives the Supreme Court absolute power over any matter. Its members can use their own personal judgment to veto any bill, cancel any action by the executive, or reinterpret any existing law in any manner. There's no check against bad interpretations or judgments, so any absurd thing must be accepted. This should be obvious, and yet the entire system that most people believe to be "checked" actually depends on the good will and sanity of the judicial branch.
> In turn, the president checks the judiciary through the power of appointment, which can be used to change the direction of the federal courts
If the president and Congress are being attacked by the judicial power, this isn't of much help, as its effects are very long term. On the other hand, a president can single-handedly and arbitrarily use this rule to slowly poison the judicial system so that it turns malicious toward the rest of the system after some time.
> By passing amendments to the Constitution, Congress can effectively check the decisions of the Supreme Court.
What is written in the Constitution can be easily ignored or misread by the members of the Supreme Court without any way for these interpretations to be checked or reverted. Basically the Supreme Court has absolute power over all things if we consider this.
> Congress (considered the branch of government closest to the people) can impeach both members of the executive and judicial branches.
Again (like in the presidential veto rule), this gives Congress unlimited power. There are no checks here -- except of course the fact that Congress is composed of multiple different voting heads, a majority of which has to agree for Congress to do anything, which is the only thing preventing abuse of this rule.
---
As shown above, most rules that compose the "checks and balances" system can be abused, and given enough time they will be. They aren't real checks.
Ultimately, the stability and decency of a [democracy](nostr:naddr1qqyrxvtxxf3nse3sqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823ccyra4y) relies on the majority rule (so congress votes are never concentrated in dictatorial measures) and the common sense of the powerful people (president and judges).
There probably hasn't been a single year in any democracy in which one of these powers didn't abuse or violate one of the rules, but still in most cases the overall system stays in place because of the general culture, split views about most issues, overall common sense and fear of public shame.
The checks and balances system itself is an illusion. The whole complex "democracy" construct depends on the goodwill of all the participants and has only worked so far (when it did) by miracle and by the power of human cooperation and love.
-
# jiq
When someone created [`jid`](https://github.com/simeji/jid) claiming it had "jq queries" I went to inspect it and realized it didn't: it just had a poor, simple JSON query language that implemented 1% of all [`jq`](https://stedolan.github.io/jq/manual/) features, so I forked it, plugged `jq` directly into it, and renamed it to `jiq`.
After some comments on issues in the original repository from people complaining about the lack of `jq` compatibility it got a ton of unexpected users, and was even packaged for Arch Linux.
![](https://raw.githubusercontent.com/fiatjaf/jiq/master/screencast.gif)
- <https://github.com/fiatjaf/jiq>
-
# Boardthreads
This was a very badly done service for turning a Trello list into a helpdesk UI.
Surprisingly, it had more paying users than [Websites For Trello](nostr:naddr1qqyrydpkvverwvehqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c9d4yku), which I was working on simultaneously and dedicating much more time to.
The Neo4j database I used for this was a very poor choice, it was probably the cause of all the bugs.
![screenshot](https://archive.is/g4wvY/3a6e3164a012c8f37e6d69ffbfcf4b62fd497d43/scr.png)
- <https://github.com/fiatjaf/boardthreads>
-
# Who will build the roads?
Who will build the roads? In Lagoa Santa, the newest and best streets -- which in fact end up forming huge webs of interconnected neighborhoods -- are built by the land developers who want the streets so that their lots are worth more -- and who want other people to use the streets too. It is also these same developers who put up the light poles and lay the water pipes, though not before having to submit to the customary extortions practiced by COPASA and CEMIG.
If, when opening a subdivision, a condominium or a building, an individual or a company can without much trouble lay down streets, electricity, water and sewage, why couldn't there be free competition in these markets? Even that old story that it is inefficient to run duplicate power cables so that electric companies can compete already sounds like nonsense to me.
-
# Since there was no reply, I'm sending it again
I received an email like this, saying the same thing all over again. I had already gotten it the first time, but since it was just a piece of information I was already expecting, I judged I didn't need to reply saying "got it, thanks!" and didn't.
I recognize, though, that given the instability of these email services nobody ever knows whether a message arrived or not. It may have been thrown into the spam bin, or it may have failed for other reasons, and then there is no guaranteed way to know whether there was a failure; it is always a huge problem. Hence the need for a "got it, thanks!" reply.
But we can't stop there. The "got it, thanks!" reply is also subject to the same procedures and risks as the original message. It would then be necessary that, as soon as the other person received the "got it, thanks!", they replied with an "I received your confirmation". If they didn't, I might think my message hadn't arrived and days later send it again: "since there was no reply to my confirmation, I'm sending it again".
And so on (I was going to write one more paragraph just for the drama, but I gave up. You get the idea).
---
- [Ripple and the problem of the decentralized commit](nostr:naddr1qqyrxcmzxa3nxv34qyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cjrqar6): this situation I have just lived through is one more practical example of it.
-
# Multi-service Graph Reputation protocol
## The problem
1. Users inside centralized services need to know reputations of other users they're interacting with;
2. Building reputation with ratings imposes a big burden on the user and still accomplishes little: the ratings can be faked, no one cares about them, etc.
## The ideal solution
Subjective reputation: reputation based on how you rated that person previously, and how other people you trust rated that person, and how other people trusted by people you trust rated that person and so on, in a web-of-trust that actually can give you some insight on the trustworthiness of someone you never met or interacted with.
## The problem with the ideal solution
1. Most of the time the service that wants to implement this is not as big as Facebook, so it won't have enough people in it for such reputation graphs to be constructed.
2. It is not trivial to build.
## My proposed solution
I've drafted a protocol for an open system based on services publishing their internal reputation records and indexers using these to build graphs, and then serving the graphs back to the services so they can show them to users when it is needed (as HTTP APIs that can be called directly from the user client app or browser).
Crucially, these indexers will gather data from multiple services and cross-link users from these services so the graph is better.
<https://github.com/fiatjaf/multi-service-reputation-rfc>
The first and only actionable and useful piece of feedback I got, from [@bootstrapbandit](https://twitter.com/bootstrapbandit), was that services shouldn't share email addresses in plain text (email addresses and other external relationships users of a service may have are necessary to establish links between users across services), but I think it is ok if services publish hashes of these email addresses instead. At some point I will update the spec draft, and that may have happened before the time you're reading this.
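For illustration, here is a hypothetical sketch of what a published record could look like, with hashed emails as the cross-service link -- the field names are made up and are not the actual format in the spec draft:
```
import hashlib, json

# hypothetical field names, not the actual format in the spec draft
def hash_email(email):
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

record = {
    "service": "example-marketplace.com",
    "user": "alice",
    # hashed external identifiers let indexers cross-link the same person
    # across services without exposing the raw email address
    "external_ids": {"email_sha256": hash_email("alice@example.com")},
    "ratings_given": [
        {"to": "bob", "score": 1, "context": "order delivered fine"},
    ],
}
print(json.dumps(record, indent=2))
```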
Another issue is that services may lie about their reputation records, and that will hurt other services and the users of those other services that are relying on that data. Maybe indexers will have to do some investigative work here to assess service honesty. Or maybe this entire protocol is just a failure and we will actually need a system in which users themselves publish their own records.
## See also
* [P2P reputation thing](nostr:naddr1qqyrqv3cxumnydfsqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cnjc88q)
* [idea: Graph subjective reputation as a service](nostr:naddr1qqyrjdehxymrsdpkqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cal60d8)
* <https://github.com/jangerritharms/reputation_systems>
-
# Flowi.es
At the time I thought [Workflowy][workflowy] had the ideal UI for everything. I wanted to implement my [custom app maker](nostr:naddr1qqyxgcejv5unzd33qyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cz3va32) on it, but ended up doing this: a platform for enhancing Workflowy with extra features:
- An email reminder based on dates input in items
- A website generator, similar to [Websites For Trello](nostr:naddr1qqyrydpkvverwvehqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c9d4yku), also based on [Classless Templates](nostr:naddr1qqyxyv35vymk2vfsqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cqwgdau)
Also, I had forgotten this was also based on CouchDB and had some _couchapp_ functionality.
![screenshot](https://camo.githubusercontent.com/d3f904a4b01eb613796ace0c33ca101b2fea8199/68747470733a2f2f617263686976652e69732f76414938352f396539323735353334373761643235633364643666343766626635313636666163666534366162632f7363722e706e67)
- <https://flowi.es>
- <https://github.com/fiatjaf/flowies>
[workflowy]: <https://workflowy.com/>
-
# Memory is in things
Memory is in things, but if you have many memories attached to the same thing it seems that one overlays the other, so if you want to accumulate more memories it is better to move to a new house every year.
-
# Cultura Inglesa and learning outside school
In 2005 Cultura Inglesa rated my English proficiency at level 2, on a scale of 1 to 14 or something like that, so that I would need 6 years of classes with them to get good. 2 years later, without taking any class or having any kind of intensive training, I was able to understand technical texts in English without any difficulty. 2 more years and I was able to understand anything and express myself with reasonable quality.
All this just to document one more example, which might otherwise go unnoticed, of school-type learning that happened outside a school.
-
# On HTLCs and arbiters
This is another attempt at conveying the same information that should be in [Lightning and its fake HTLCs](nostr:naddr1qqyryefsxqcxgdmzqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cp0m63a). It assumes you know everything about Lightning and will just highlight a point. This is also valid for PTLCs.
The protocol says HTLCs are trimmed (i.e., not actually added to the commitment transaction) when the cost of redeeming them in fees would be greater than their actual value.
This is often dismissed as an unimportant fact (people will say "it's trusted for small payments, no big deal"), but I think it is indeed very important, for 3 reasons:
1. Lightning absolutely relies on HTLCs actually existing because the payment proof requires them. The entire security of each payment comes from the fact that the payer has a preimage that comes from the payee. Without that, the state of the payment becomes an unsolvable mystery. The absence of an HTLC breaks the atomicity between the payment going through and the payer receiving a proof.
2. Bitcoin fees are expected to grow with time (arguably the reason Lightning exists in the first place).
3. MPP makes payment sizes shrink, therefore more and more of Lightning payments are to be trimmed. As I write this, the mempool is clear and still payments smaller than about 5000sat are being trimmed. Two weeks ago the limit was at 18000sat, which is already below the minimum most MPP splitting algorithms will allow.
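Just to make the trimming rule concrete, here is a rough sketch of the check -- the dust limit and the ~700 weight-unit figure are approximations, not the exact BOLT #3 numbers:
```
# rough sketch only: the dust limit and the ~700 weight-unit figure are
# approximations, not the exact numbers from BOLT #3
def is_trimmed(htlc_amount_sat, feerate_per_kw, dust_limit_sat=546,
               htlc_tx_weight=700):
    redeem_fee_sat = feerate_per_kw * htlc_tx_weight // 1000
    return htlc_amount_sat < dust_limit_sat + redeem_fee_sat

print(is_trimmed(2000, 2500))   # True: a 2000sat HTLC is already fake here
print(is_trimmed(10000, 2500))  # False: this one still makes it to the chain
```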
Therefore I think it is important that we come up with a different way of ensuring payment proofs are being passed around in the case HTLCs are trimmed.
## Channel closures
Worse than not having HTLCs that can be redeemed is the fact that in the current Lightning implementations channels will be closed by a peer once an HTLC timeout is reached, either to fulfill an HTLC for which that peer has a preimage or to redeem back the expired HTLCs the other party hasn't fulfilled.
To everybody's surprise, nodes will do this even when the HTLCs in question were trimmed and therefore cannot be redeemed at all. It's very important that nodes stop doing that, because it makes no economic sense at all.
However, that is not so simple, because once you decide you're not going to close the channel, what is the next step? Do you wait until the other peer tries to fulfill an expired HTLC and tell them you won't agree and that you must cancel that instead? That could work sometimes if they're honest (and they have no incentive to not be, in this case). What if they say they tried to fulfill it before but you were offline? Now you're confused, you don't know if you were offline or they were offline, or if they are trying to trick you. Then unsolvable issues start to emerge.
## Arbiters
One simple idea is to use trusted arbiters for all trimmed HTLC issues.
This idea solves both the protocol issue of getting the preimage to the payer once it is released by the payee -- and what to do with the channels once a trimmed HTLC expires.
A simple design would be to have each node hardcode a set of trusted other nodes that can serve as arbiters. Once a channel is opened between two nodes they choose one node from both lists to serve as their mutual arbiter for that channel.
Then whenever one node tries to fulfill an HTLC but the other peer is unresponsive, they can send the preimage to the arbiter instead. The arbiter will then try to contact the unresponsive peer. If it succeeds, then done, the HTLC was fulfilled offchain. If it fails then it can keep trying until the HTLC timeout. And then if the other node comes back later they can eat the loss. The arbiter will ensure they know they are the ones who must eat the loss in this case. If they don't agree to eat the loss, the first peer may then close the channel and blacklist the other peer. If the other peer believes that both the first peer and the arbiter are dishonest they can remove that arbiter from their list of trusted arbiters.
The same happens in the opposite case: if a peer doesn't get a preimage they can notify the arbiter that they haven't received anything. The arbiter may try to ask the other peer for the preimage and, if that fails, settle the dispute in favor of the first peer, which can then proceed to fail the HTLC it has with someone else on that route.
-
# IPFS problems: Too much immutability
Content-addressing is unusable without an index or database that describes each piece of content. Since IPFS is fully content-addressable, nothing can be done with it unless you have a non-IPFS index or database, or an internal protocol for dynamic and updatable links.
The IPFS [conceit](nostr:naddr1qqyxyeryx93kxv3nqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cwxek0q) made them go with the second option, which proved to be [a failure](nostr:naddr1qqyrvcnx8y6nwwtpqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cz8tlwh). They even incentivized the creation of a [database powered by IPFS](https://github.com/orbitdb/orbit-db), which couldn't be more misguided.
-
# Rational agents
There is this discussion among economists about whether an agent's behavior is "rational" or not.
The judgment of rationality is made against a "model" conceived by the economist. Each economist uses a different model. And so they all keep arguing about the rationality or irrationality of a given agent.
The solution is to ask: rational according to which criteria?
-
# Buying versus donating
Currently (and probably it has always been the case) in the Bitcoin community there is some push towards donations as some sort of business model, or just a general love for the idea of donations, and I think that is very misguided.
## Two examples of the push for the primacy of donations
For example, there is a general desire among people to have some sort of "static QR code" or "static Lightning invoice" that they can put on their Twitter profiles to receive donations (sometimes they say "payments" instead of "donations", but to me there is no such thing as a payment without a good or service being given in exchange, so I'm saying "donations"), and that is a hard problem to solve considering that most Lightning wallets run on phones.
Another example is the "Podcasting 2.0" initiative that tries to integrate podcast players with Lightning wallets so they can send donations to podcast hosts that are running Lightning nodes. Their proponents call it "value for value" (or "value4value", "v4v") and if you ask they will say _value4value_ is a "model" in which the listener gives out in satoshis to the podcast host the same amount of "value" he is getting from listening to that content.
The _value4value_ concept makes almost explicit the problems I see with this big emphasis on donations that the Bitcoin community is making in general. In essence, the idea that the listener is capable of measuring the value he gets from the podcast, then converting it into a monetary amount and then donating that, is completely wrong and even nonsensical.
### Why _value4value_ is not sound
Basic (Austrian) economics teaches us that all value is ordinal, not cardinal -- i.e., it can't be measured or assigned a number. One can only know that at some instant they prefer _x_ over _y_; they cannot say _x_ has a value of 10 and _y_ has a value of 9. Because of that, it's a nonsense quest to try to determine how much value one is getting from a podcast.
Basic (Austrian) economics also teaches us that exchanges happen when there are differences in the subjective valuations of goods, i.e., Alice can give _x_ to Bob in exchange for _y_ if Alice prefers _y_ over _x_ and Bob prefers _x_ over _y_ at that point. Because of that (and disregarding the previous paragraph), it's futile to expect that the podcast listener will donate _exactly_ the amount he is getting from the podcast in "value".
Because of the two points above, it should also be clear that it is impossible to convert "value" in podcast content form into "value" in satoshis form, but I won't try to explain why that is because this is not an economics textbook.
What actually happens is that, whether it's in the _value4value_ context or not, donations are always a somewhat random and subjective amount, if they happen at all. If I like some content that someone is publishing for free, my decision on whether and how much I will donate is never dictated by some nonsense calculation, but by one that is governed almost entirely by feeling and animal spirits (though it also considers how much money I can spare, how much I like that person and how much I perceive they need).
When I go to a normal shop to buy a bottle of milk I look at the bottle of milk and I read its price, and there is one simple decision I have to make: is this bottle of milk worth more than the amount of money that's specified in the price tag? It's a single decision with only two answers: _yes_ or _no_.
Meanwhile, when I see a free-form field on some "creator" page asking me to type how much I will donate, I have to decide whether I will donate, when I will donate, how much I will donate, whether I want this donation to happen every month and how that will work going forward. Will I keep consuming the content produced by this person? Will they keep producing? Maybe I'll just listen for free now and do this later, as I'm busy, but then will I forget? Maybe I have just donated a lot to someone else and do not have much more money to spare, but now I feel guilty that the other person got all my donation money and this one didn't get anything, but I can't go back and ask the other to return the money I just donated -- and so on and so forth.
### Conclusions
Although the paragraphs above are confusing and do not follow a very logical presentation pattern, I hope you got from them why I think donations are much more complicated than purchases, and that repeating the "value for value" mantra doesn't help at all.
Considering that, what I wanted to say is that bitcoiners should give more attention to the other model, in which people produce goods and services and sell them. And that model can be applied successfully to "content creators", podcasters etc in many ways that are probably (I don't have any data backing my claims) better than the donation model.
## Other possible monetization possibilities
For example, I've noticed that many blogs and podcasts with interesting content start to release exclusive episodes as they get big enough. These exclusive episodes are available only for "supporters". This is effectively selling access to the episodes. There is also the "crowdwall" model in which multiple people pay so that some content gets released for free.
We can count even the model in which a donation is not just a blank donation, but gives the donor the right to write something on the screen or something like that -- these are actually not just donations, but purchases of these rights.
Professional videogame streamers have come up with some other interesting ideas. For example, they crowdfund the creation of special content ("if enough people pay I will dress like a rabbit") or they sell the right to participate in the stream somehow (for example, by playing a game with the streamer in some special day).
In these models, the _static QR code_ with which so many people dream doesn't make much sense (if you're selling a specific episode or if a payment is specific to one identifiable person, you need different QR codes or more metadata to be attached to each payment by the payer).
## Addendum
I think people like the donation model very much because they only see the big and super famous people that receive donations. A very small set of people have so many followers that they can live with just donations, even though donations are very inefficient and they could earn more and even deliver better content if they were using some other model.
I imagine that a creator with a high number of followers will get a lot of people that do not donate anything, a lot that will donate a little -- probably less than they would if they were paying -- and eventually a small number of people that will donate way more than they would if they were buying. This last group is probably what makes it worthwhile for these creators to work on this donations model.
The takeaway is that the donations model is not a panacea and is not very good either.
## Related:
- [Donations on the internet](nostr:naddr1qqyrqwp4xsmnsvtxqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cex8903)
-
# The rot
It is reasonable to say that there are three kinds of reactions to the mention of the name [O que é Bitcoin?](nostr:naddr1qqrky6t5vdhkjmspz9mhxue69uhkv6tpw34xze3wvdhk6q3q80cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsxpqqqp65wp3k3fu) in Brazil:
1. The reaction of old people
Very wisely, old people who have heard of Bitcoin treat it either as something very distant, reserved for the knowledge of their nephews who understand computers, or as a scam to be feared and kept far away from; and in any case it shouldn't really affect them, so why waste their time. These people are wrong: the nephew who understands computers knows nothing about Bitcoin either, Bitcoin is not a scam, and Bitcoin is not something totally irrelevant to them.
It is reasonable to be cautious before the unknown, and in that the old do well, but I believe much of the fear these people feel also comes from the ignorance that was created and spread during the first 10 years of Bitcoin by illiterate and misinformed journalists covering the subject.
2. The reaction of pragmatic people
"I already have a bank and can already send money, so why Bitcoin? What, I still have to pay to transfer bitcoins? That's no advantage at all!"
While trying to look very pragmatic and rational, these people ignore several aspects of their own lives, starting with the fact that using ordinary banks is not free, and then that the existence of this financial system in which they believe themselves so included and comfortable is based on a great scheme called the Central Bank, one of whose foundations is the possibility of unlimited inflation of the currency, which makes everybody poorer, including these very people, so pragmatic and rational.
More important is to notice that these very rational people were also deceived by the spread of the idea of Bitcoin as a money transfer system. Bitcoin is not and cannot be a money transfer system, because it can only transfer itself; it cannot transfer "money" in the common sense of the word (I have in mind the common money in Brazil, the real). The fact that today there are people who manage to "transfer money" using Bitcoin is something totally unexpected: it depends on the existence of people who exchange bitcoins for reais (and other monies from other places) and vice versa. It didn't have to be this way; nothing determined, 10 years ago, that there would be demand for a digital good with no immediate usefulness whatsoever; that it happened is a miracle.
The miracle, however, will only be complete when these bitcoins become themselves the common money. Only then will it be possible to use the Bitcoin system to actually transfer money. Before that, calling Bitcoin a payment system or anything of the sort is to pervert its meaning, to confuse an accident with the essence of the thing.
3. The reaction of the illiterate youth
The illiterate youth are the people who use the expression "cryptos" and spend the whole day on websites that publish totally irrelevant news about "cryptocurrencies". I don't quite know how they live, because I can't stand their presence, but they are people very excited about the whole "crypto wave" who find everything amazing, so amazing that they end up getting interested and then buying every worthless token that gets invented. They use the word "decentralizado", an ugly anglicism that should mean there is no controlling center behind currency x or y and that its protocol would keep working even if several operators went offline; but since they apply it to tokens that are literally issued by a controlling center with a human figure at the middle making every decision about everything -- like Ethereum and consequently all the thousands of ERC20 tokens created inside the Ethereum system -- the word no longer makes any sense.
In their excitement and complete ignorance of how a hostile actor could destroy each of their oh-so-decentralized cryptocurrencies, or of how, even without anyone wanting it, a fundamental flaw in the protocol and in the incentive system could bring everything down, without imagining that the whole appreciation of token XYZ may have been deliberately fabricated by its own issuers or may simply be a bubble, these youths end up equating token XYZ, or ETH, BCH or whatever, with Bitcoin, ignoring all the qualitative differences and only lightly mentioning the quantitative ones.
Mixed with their excitement, and as a bonus, comes the prospect of getting rich. If one of them, by some stroke of luck, surfed a bubble like the one of 2017 and managed to multiply some money by 10 buying and selling EOS, he immediately starts using the fact that he got rich as an argument to convince others that "cryptos are the future". Do not underestimate human stupidity.
---
There are young people in the group of the old, old people in the group of the young, people who are in neither group and people who are in more than one group; none of that matters.
-
# Scala is such a great language
Scala is amazing. The type system has the perfect balance between flexibility and power. `match` statements are great. You can write imperative code that looks very nice and expressive (and I haven't tried writing purely functional things yet). Everything is easy to write and cheap, and neovim integration works great.
But Java is not great. And the fact that Scala is a JVM language doesn't help because over the years people have written stuff that depends on Java libraries -- and these Java libraries are not as safe as the Scala libraries, they contain reflection, slowness, runtime errors, all kinds of horrors.
Scala is also very tightly associated with Akka, the actor framework, and Akka is a giant collection of anti-patterns. Untyped stuff, reflection, dependency on the JVM, basically a lot of Javisms. I just arrived and I don't know anything about the Scala history or ecosystem or community, but I have the impression that Akka has prevented further adoption of Scala by decent people who aren't Java programmers.
But luckily there is a solution -- or two solutions: ScalaJS is a great thing that exists. It transpiles Scala code into JavaScript and it runs on NodeJS or in a browser!
Scala Native is a much better deal, though: it compiles to LLVM and then to binary code, and you can have single binaries that run directly without a JVM -- not that the single JARs are that bad, they are great and everybody has Java, so I'll take that anytime over C libraries or NPM-distributed software, but direct executables are even better. Scala Native just needs a little more love and some libraries and it will be the greatest thing in a couple of years.
-
# jiq-web
Made with [jq-web](nostr:naddr1qqyrzvrzxqcx2dfsqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c90hqwz), a tool to explore JSON using `jq` that works like [jiq](nostr:naddr1qqyrqvfjv33rxcenqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823cd86z7d).
![](https://raw.githubusercontent.com/fiatjaf/jiq-web/master/screenshot.png)
- <https://jq.alhur.es/jiq/>
- <https://github.com/fiatjaf/jiq-web>
## See also
- [jq-finder](nostr:naddr1qqyryvejvycn2cnpqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823ccw20rx)
-
# Excerpt of discussion about DIDs and ION
Melvin Carvalho:
> "Not a single entity I know that's doing production deployments has actually vetted did:ion and found it to be production capable."
>
> The guy leading the DID effort said this about ION.
>
> Seems like they may be 6 months away from producing something.
>
> https://github.com/decentralized-identity/ion/blob/master/docs/design.md#operations
>
> In their design it says you can create, update, recover and activate. It is a bit magic
>
> But update implies a versioning system. So you could think of it like updated nostr events. My key is A. Now my key is B. Tied to a content addressable static ID (like a genesis event ID).
>
> Let's see. I hope they produce something useful. Daniel tends not to answer questions
>
> When we first made DID, the idea was that your (d)id and your public key or hash were the same string.
>
> As it evolved, that got broken. Which can be useful because then you can have multiple keys, but the trade off is more management needed.
>
> So you have DID <—> controller two way link. Then it can have multiple keys. Those can be used to update the record. How that chain of records is stored, fetched and verified, is not covered by DID. So everyone will do it differently. Nostr for example could create updating keys with a NIP, it would do the same thing.
Hampus:
> Basically impossible to have a convo with Daniel
Melvin Carvalho:
> true. too often appeals to authority
>
> https://identity.foundation/sidetree/spec/#update
>
> the side tree spec has an update function
>
> 'Retrieve the Update Reveal Value that matches the previously anchored Update Commitment'
>
> so it looks like they have a chain of anchors ... anchored to something, ion nodes i guess, which also run btc full nodes
>
> > 'Embed the Anchor String in the transaction such that it can be located and parsed by any party that traverses the history of the target anchoring system'
>
> > 'By leveraging the blockchain-agnostic Sidetree protocol, ION makes it possible to anchor tens of thousands of DID/DPKI operations on a target chain (in ION's case, Bitcoin) using a single on-chain transaction'
>
> there you go
>
> do a search in the ion repo for 'anchor' and it gives some details
>
> https://github.com/decentralized-identity/ion/blob/66813123cf81ace05cea2039e93ef263952d6283/docs/Q-and-A.md
>
> there's a Q & A
Ruben Somsen:
> Their anchor protocol doesn't guarantee you have a full view of the data, so you can open the commitments to anything you like (provided you pre-committed to different views). In my opinion this makes the commitments pointless.
Melvin Carvalho:
> > 'Assuming you are running your own node, you simply need to submit 1000 create operations to your node (ie. http://localhost:3000/operations) before your batch writer kicks in every 10 minutes by default, the batch writer will batch all 1000 operations into 1 bitcoin transaction thus you'll pay fee for just one transaction to the minor.
> > The technical spec for constructing a create request can be found in Sidetree API spec, but if you know TypeScript, you are better off using the ion-sdk directly, it will save you a lot of time'
>
> seems 3 places data is stored
>
> anchored witness data: bitcoin tx
> batches of operations: ION nodes
> non witness data: maybe IPFS
>
> 'The logical order of operations, as determined by the underlying anchoring system (e.g. Bitcoin block and transaction order). Anchoring systems may widely vary in how they determine the logical order of operations, but the only requirement of an anchoring system is that it can provide a means to deterministically order each operation within a DID’s operational lineage.'
Ruben Somsen:
> It's deterministic given a set of data, but there is no consensus on the set of data, so if A and B get presented with a different set, they come to a different conclusion
Melvin Carvalho:
> yes. "DID owners can create forks within their own DID state history". so you dont actually know the full state. I think the design is of each identity has eventual consistency, with the anchor as the tie break. It can change as you know more data. But doesnt say what happens if two updates happen in the same block.
>
> https://identity.foundation/sidetree/spec/#late-publishing
>
> the aim is eventual consistency
>
> so at any time you might have the wrong state, but if you had every update associated with a given id, you could determine the right order to apply the patches by using the bitcoin block chain (I dont know what happens if there's a tie in the same block) ... there's no peg in or peg out, it's just like version controlled files
Ruben Somsen:
> But you can't know when you've reached the state of eventual consistency (i.e. when you finally have the full data set) and in practice it may never be reached, so there is no consensus
>
> When there is conflicting data, the first one counts and the second one is ignored.
>
> This problem is literally detrimental, yet it is being presented as a minor inconvenience.
Melvin Carvalho:
> yes, and the first one can be published later, so systems will think the second is valid, until it isnt
Ruben Somsen:
> Yup
Melvin Carvalho:
> you could actually steal someone's identity, and revoke all their keys, and then reveal it years later
>
> arguably this is worse than github
Ruben Somsen:
> lol yeah you could
Melvin Carvalho:
> or an attack vector, get someone's ID, but they dont know, then sell them lots of 'cheap' NFTs ... at some point you publish the new keys and reclaim all the tokens ... no one will know when they will be rug pulled
Ruben Somsen:
> That was my original argument, but Daniel claimed the protocol wasn't meant for transfer of ownership rights.
Melvin Carvalho:
> so then i wonder what the primary use cases are that cant be easily achieved already
>
> > 'How Can ION be Used?
> > ION can see a lot of use in many cases. The most obvious use case is for verifiable credentials. A business can credential employees, who can then be verified via blockchain on arrival at their destination. This functionality also raises its use as a means of supplying and verifying international travel documents.'
> >
> > 'Another use case could come from accreditation for organizations. Using an organization's public key, users can verify their accreditation status and trace their accreditation history over time.'
>
> so i guess its an updatable profile page, but you dont know when your profile is compromised. OTOH it's free (their node is paying fees to start with), and improves BTC security budget. If you keep your key safe, and the nodes publish the updates for you, it might be a nice little profile you could use for stuff. But not sure id use it for anything high value. And nostr (for example) can do alot of this already — updatable profiles and publishing
>
> i think it may be to a degree censorship resistant too
>
> there's also no incentive to run a node, so im not sure why they would stick around
>
> nodes can actually very easily censor by black listing certain tx (e.g. a hack, or sanctioned identity), removing the decentralized nature
>
> so the nodes can never reach consensus
Ruben Somsen:
> Like I said, the commitments are pointless. I think it's equivalent to just signing stuff with a key and making what you signed available for download in "the cloud".
-
# A biblioteca infinita
Agora esqueci o nome do conto de Jorge Luis Borges em que a tal biblioteca é descrita, ou seus detalhes específicos. Eu tinha lido o conto e nunca havia percebido que ele matava a questão da aleatoriedade ser capaz de produzir coisas valiosas. Precisei mesmo da [Wikipédia](https://en.wikipedia.org/wiki/Infinite_monkey_theorem) me dizer isso.
Alguns anos atrás levantei essa questão para um grupo de amigos sem saber que era uma questão tão batida e baixa. No meu exemplo era um cachorro andando sobre letras desenhadas e não um macaco numa máquina de escrever. A minha conclusão da discussão foi que não importa o que o cachorro escrevesse, sem uma inteligência capaz de compreender aquilo nada passaria de letras aleatórias.
Borges resolve tudo imaginando uma biblioteca que contém tudo o que o cachorro havia escrito durante todo o infinito em que fez o experimento, e portanto contém todo o conhecimento sobre tudo e todas as obras literárias possíveis -- mas entre cada página ou frase muito boa ou pelo menos legívei há toneladas de livros completamente aleatórios e uma pessoa pode passar a vida dentro dessa biblioteca que contém tanto conhecimento importante e mesmo assim não aprender nada porque nunca vai achar os livros certos.
> Everything would be in its blind volumes. Everything: the detailed history of the future, Aeschylus' The Egyptians, the exact number of times that the waters of the Ganges have reflected the flight of a falcon, the secret and true nature of Rome, the encyclopedia Novalis would have constructed, my dreams and half-dreams at dawn on August 14, 1934, the proof of Pierre Fermat's theorem, the unwritten chapters of Edwin Drood, those same chapters translated into the language spoken by the Garamantes, the paradoxes Berkeley invented concerning Time but didn't publish, Urizen's books of iron, the premature epiphanies of Stephen Dedalus, which would be meaningless before a cycle of a thousand years, the Gnostic Gospel of Basilides, the song the sirens sang, the complete catalog of the Library, the proof of the inaccuracy of that catalog. Everything: but for every sensible line or accurate fact there would be millions of meaningless cacophonies, verbal farragoes, and babblings. Everything: but all the generations of mankind could pass before the dizzying shelves – shelves that obliterate the day and on which chaos lies – ever reward them with a tolerable page.
Tenho a impressão de que a publicação gigantesca de artigos, posts, livros e tudo o mais está transformando o mundo nessa biblioteca. Há tanta coisa pra ler que é difícil achar o que presta. As pessoas precisam parar de escrever.
-
# Qual é o economista? (piadas)
O economista americano rapper ficou triste quando sua banda brasileira favorita encerrou suas atividades por crer que a demanda por discos de rap seria cada vez pior.
Resposta: Robert Lucas e as expectativas dos Racionais.
O economista inglês queria muito arrumar uma namorada.
Resposta: John Maynard Keynes e a demanda afetiva.
Quando o filho do economista austríaco chegou em casa todo sujo ele sem nem pensar ordenou que o moleque fosse tomar banho.
Resposta: Friedrich Hayek e a ordem espontânea.
O economista americano tinha muito orgulho de ter em sua casa um valiosíssimo quadro de um impressionista francês.
Resposta: Milton Friedman e o Monet raríssimo.
O economista austríaco jurou aos seus filhos que todos eles se mudariam para Brasília.
Resposta: Eugen von Böhm-Bawerk e o “eu juro” da capital.
O economista alemão organizou um evento meio sertanejo meio religioso e colocou como organizador uma executiva que tinha quebrado suas últimas 4 empresas por má administração.
Resposta: Karl Marx e a expo-oração da que mais-falia.
-
# Batch for Trello
Another Trello helper.
This time it allowed people to perform batch actions on multiple cards, most notably delete a lot of cards at once.
I created it because I wanted to delete a ton of spam from [Boardthreads](nostr:naddr1qqyxvwfk8p3xvdmrqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823ceq46m6).
- <https://github.com/fiatjaf/batch-for-trello>
- <https://batch.cardsync.xyz/>
-
# my personal approach on using `let`, `const` and `var` in javascript
Since these keywords can be used interchangeably almost everywhere, and there are a lot of people asking and searching the internet for how to use them (myself included until some weeks ago), I developed a personal approach that uses the declarations mostly as readability and code-sense sugar, to help my mind, instead of expecting them to add physical value to the programs.
---
`let` is only for short-lived variables, defined at a single line and not changed after. Generally those variables which are there only to decrease the amount of typing. For example:
```
for (let key in something) {
  /* we could use `something[key]` for this entire block,
     but it would be too many letters and not good for the
     fingers or the eyes, so we use a radically temporary variable
  */
  let value = something[key]
  // ...
}
```
`const` is for all _names_ known to be constant across the entire module. That does not include locally constant values. The `value` in the example above, for example, is constant in its scope and could be declared with `const`, but since there are many iterations and each one has a value with the same **name**, "value", that could trick the reader into thinking `value` is always the same. Modules and functions are the best examples of `const` variables:
```
const PouchDB = require('pouchdb')

const instantiateDB = function () {}

const codes = {
  23: 'atc',
  43: 'qwx',
  77: 'oxi'
}
```
`var` is for everything that may or may not be variable. Names that might confuse people reading the code, even if they are locally constant, and that are not suitable for `let` (i.e., they are not completed in a single direct declaration), qualify for being declared with `var`. For example:
```
var output = '\n'
lines.forEach(line => {
  output += ' '
  output += line.trim()
  output += '\n'
})
output += '\n---'

for (let parent in parents) {
  var definitions = {}
  definitions.name = getName(parent)
  definitions.config = {}
  definitions.parent = parent
}
```
-
# Anarchic legal systems
We have few examples of clearly anarchic legal systems, and they are always from very remote times, the Middle Ages or thereabouts. The Icelandic, the Somali and the Irish systems come to mind now, along with the merchant courts of continental Europe.
Although to the eyes of a convinced libertarian these examples always seem like definitive proof that a society without the State is capable of running legal systems that are efficient, complex and much better and cheaper than the state ones, to any unenthusiastic observer they will look somewhat anachronistic: they always involve family, clans, heads of families, small communities -- factors almost always absent from society today --, which leaves room for the person to think (and I confess this has always bothered me too) that none of this would work today; they are beautiful, but systems that would only work in the old days, the State with its judicial system being the natural and necessary evolution of all that, and so on.
It is worth remembering, however, that the examples we have probably did not arise spontaneously; they were themselves the result of a slow but constant evolution of the legal systems of their respective communities. Had they not been interrupted by the intervention of some State, these systems would have kept evolving and today, who knows, they would be highly efficient complex networks which, why not, would combine technologies similar to the internet with data security, wonderful reputation and voting algorithms, all decentralized, built on competing but standardized protocols -- perhaps, had they had a little more time, each of these anarchic legal systems would have developed means of avoiding conquest, or unfair competition, by a State, or at least by the State as we know it today.
-
# localchat
A server that creates instant chat rooms with [Server-Sent Events](https://en.wikipedia.org/wiki/Server-sent_events) and normal HTTP `POST` requests (instead of WebSockets, which are overkill most of the time).
It defaults to a room named after your public IP, so if two machines on the same LAN connect they'll be in the same chat automatically -- but you can also join someone else's LAN room if you need to.
This is supposed to be useful.
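For illustration, here is a minimal sketch of what talking to such a server could look like from a browser: receiving messages over SSE and sending them with plain `POST` requests. The endpoint paths and message format below are assumptions made for the sake of the example, not localchat's actual API.
```
// hypothetical client for an SSE + POST chat server in the spirit of localchat;
// the /events suffix and plain-text body are assumptions, not the real endpoints
const base = 'https://localchat.bigsun.xyz'
const room = 'my-room'

// receive messages pushed by the server as Server-Sent Events
const es = new EventSource(`${base}/${room}/events`)
es.onmessage = ev => {
  console.log('message:', ev.data)
}

// send a message with a normal HTTP POST
fetch(`${base}/${room}`, {
  method: 'POST',
  headers: {'Content-Type': 'text/plain'},
  body: 'hello from the other machine'
})
```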
![](https://raw.githubusercontent.com/fiatjaf/localchat/master/screenshot.png)
- <https://github.com/fiatjaf/localchat>
- <https://localchat.bigsun.xyz/>
## See also
- [Filemap](nostr:naddr1qqyrwcekv33rze3kqyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c23ya8a)
-
# Just malinvestment
Traditionally the Austrian Theory of Business Cycles has been explained and reworked in many ways, but the most widely accepted version (or the one closest to the Mises or Hayek views) is that banks (or the central bank) cause the general interest rate to decline by creating new money, and that prompts entrepreneurs to invest in projects of longer duration. This can be confusing because sometimes entrepreneurs embark on very short-term projects during one of these bubbles and still contribute to the overall cycle.
The solution to the "longer term" problem is to think of the entire economy going long-term, not of individual entrepreneurs. So if one entrepreneur makes an investment in something that looks simple, he may actually, knowingly or not, be inserting himself into a bigger machine that is involved in producing longer-term things. Incidentally, this way of thinking also answers the biggest criticism of the Austrian Business Cycle Theory: that of the rational expectations people who say, "oh, but can't the entrepreneurs know that the interest rate is artificially low and decide not to make long-term investments?" ("and if they don't know, shouldn't they lose money and be replaced, like in a normal economic flow, blablabla?"). Well, the answer is that they are not really relying on the interest rate, they are only looking for profit opportunities, and this is the key to another confusion that has always followed my thinking about this topic.
If a guy opens a bar in an area of town where many new buildings are being built during a "housing bubble", he may not know it, but he is inserting himself right into the eye of that business cycle. He expects all these building projects to continue, and all the people involved in them to keep getting paid more and be able to spend more at his bar, and so on. That is a bet that may or may not end up paying off.
Now what does that bar investment have to do with the interest rate? Nothing. It is just a guy who saw a business opportunity in a place where hungry people with money had no bar to buy things in, so he opened a bar. Additionally, the guy made some calculations about all the building projects ending, starting and planned in the area, and about the people who would live or work there afterwards (after all, the buildings were being built with the expectation of being used), and so on; there are no interest rate calculations involved. And yet that may be a malinvestment, because some building projects will end up being canceled and the expected usage of the finished ones will turn out to be smaller than predicted.
This bubble may have been caused by a decline in interest rates that prompted some people to start buying houses they otherwise wouldn't have, but this is just a small detail. The bubble can only be kept going by a constant influx of new money into the economy, but the focus on the interest rate is wrong. If new money is printed and used by the government to buy ships, then there will be a boom and a bubble in the ship market, and that involves all parts of the ship production process, and also the bars that will be opened near the areas of town where ships are built and where new people are being hired at higher salaries to do things that will eventually contribute to the production of ships that will then be sold to the government.
It's not interest rates or the length of the production process that matters, it's just printed money and malinvestment.
-
# The myth of the objective
The insight from [this guy](https://www.youtube.com/watch?v=dXQPL9GooyI), according to which pursuing fixed objectives not only kills creativity but also fails to reach the very objective in question -- something I have always believed, although without much confirmation and (maybe because of that) without saying it openly --, fits with the general idea that all social structures worth anything arise from play and games.
Seriousness, which is the opposite of play, is represented here by the objective. Very serious people with a plan and a final objective, everything mapped out in advance.
---
Actually this insight is pretty well known. Even I had already mentioned it, citing Taleb, in [Processos Antifrágeis](nostr:naddr1qqyryv3hxfsnvvm9qyghwumn8ghj7enfv96x5ctx9e3k7mgzyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqq823c5jshx7).
And finally there is this comic strip, which I found at random and which represents it well: [![](https://assets.amuniversal.com/d7834b406d5301301d7c001dd8b71c47)](https://dilbert.com/strip/2004-04-17)
-
# Profits, not wages, as the originary factor
Adam Smith says that first there were workers earning wages, and then the capitalists came and extracted profits from those wages.
But in fact, if that primitive state ever existed, there were no workers in it, only entrepreneurs earning profit. And since they were not capitalists ("capitalist" defined as someone who buys with the intent of selling), they were earning an infinite rate of profit.
When the capitalists came, they were responsible for introducing costs (investment), thus reducing the rate of profit -- and the more capitalistic the society, the smaller the rate of profit.
-- George Reisman in <https://www.bobmurphyshow.com/139>
-
The new version of nsecBunker represents a significant increase in what is possible with regard to onboarding new users to Nostr.
With this new release, I added the possibility of running the bunker in a way that allows new users to create their accounts remotely from any compatible client.
This new release also connects nsecBunker with LNBits, so that new users immediately get a LN wallet AND zapping capabilities!
# 🤯
Under the hood, this uses the flow described here: https://github.com/kind-0/nsecbunkerd/blob/master/OAUTH-LIKE-FLOW.md
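To give a rough idea of the shape of this, here is a sketch of what a client-side request to a remote bunker could look like, following the general NIP-46 convention of encrypted kind 24133 RPC events. The method name, parameters and helpers (`nip04Encrypt`, `localPubkey`, `localSecretKey`, `bunkerPubkey`) are assumptions for illustration only; the actual account-creation flow is specified in the linked document.
```
// rough sketch, not the actual nsecBunker API -- the RPC method name, params
// and the helper functions below are assumed for illustration only
const request = {
  id: crypto.randomUUID(),
  method: 'create_account',          // assumed method name
  params: ['alice', 'example.com']   // assumed: desired username and domain
}

// NIP-46 requests are carried in kind 24133 events, encrypted to the bunker's pubkey
const event = {
  kind: 24133,
  pubkey: localPubkey, // an ephemeral key generated by the client (assumed setup)
  created_at: Math.floor(Date.now() / 1000),
  tags: [['p', bunkerPubkey]],
  content: await nip04Encrypt(localSecretKey, bunkerPubkey, JSON.stringify(request))
}

// sign the event with the local key, publish it to the bunker's relay,
// then wait for the encrypted kind 24133 response carrying the new account's pubkey
```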