-

@ b8a9df82:6ab5cbbd
2025-03-06 22:39:15
Last week at Bitcoin Investment Week in New York City, hosted by Anthony Pompliano, Jack Mallers walked in wearing sneakers and a T-shirt, casually dropping, “Man… I hate politics.”
That was it. That was the moment I felt aligned again. That’s the energy I came for. No suits. No corporate jargon. Just a guy who gets it—who cares about people, bringing Bitcoin-powered payments to the masses and making sure people can actually use it.
His presence was a reminder of why we’re here in the first place. And his words—“I hate politics”—were a breath of fresh air.
Now, don’t get me wrong. Anthony was a fantastic host. His ability to mix wittiness, playfulness, and seriousness made him an entertaining moderator. But this week was unlike anything I’ve ever experienced in the Bitcoin ecosystem.
One of the biggest letdowns was the lack of interaction. No real Q&A sessions, no direct engagement, no real discussions. Just one fireside chat after another.
And sure, I get it—people love to hear themselves talk. But where were the questions? The critical debates? The chance for the audience to actually participate?
I’m used to Bitcoin meetups and conferences where you walk away with new ideas, new friends, and maybe even a new project to contribute to. Here, it was more like sitting in an expensive lecture hall, watching a lineup of speakers tell us things we already know.
A different vibe—and not in a good way
Over the past few months, I’ve attended nearly ten Bitcoin conferences, each leaving me feeling uplifted, inspired, and ready to take action. But this? This felt different. And not in a good way.
If this had been my first Bitcoin event, I might have walked away questioning whether I even belonged here. It wasn’t Prague. It wasn’t Riga. It wasn’t the buzzing, grassroots, pleb-filled gatherings I had grown to love. Instead, it felt more like a Wall Street networking event disguised as a Bitcoin conference.
Maybe it was the suits.
Or the fact that I was sitting in a room full of investors who have no problem dropping $1,000+ on a ticket.
Or that it reminded me way too much of my former life—working as a manager in London’s real estate industry, navigating boardrooms full of finance guys in polished shoes, talking about “assets under management.”
Bitcoin isn’t just an investment thesis. It’s a revolution. A movement. And yet, at times during this week, I felt like I was back in my fiat past, stuck in a room where people measured success in dollars, not in freedom.
Maybe that’s the point. Bitcoin Investment Week was never meant to be a pleb gathering.
That said, the week did have some bright spots. PubKey was a fantastic kickoff. That was real Bitcoin culture—plebs, Nostr, grassroots energy. People who actually use Bitcoin, not just talk about it.
But the absolute highlight? Jack Mallers, sneakers and all, cutting through the noise with his authenticity.
So, why did we even go?
Good question. Maybe it was curiosity. Maybe it was stepping out of our usual circles to see Bitcoin through a different lens. Maybe it was to remind ourselves why we chose this path in the first place.
Would I go again? Probably not.
Would I trade Prague, Riga, bitcoin++ or any of the grassroots Bitcoin conferences for this? Not a chance.
At the end of the day, Bitcoin doesn’t belong to Wall Street, in my opinion. It belongs to the people who actually use it. And those are the people I want to be around.
-

@ fcd81845:5c1832a7
2025-03-06 22:23:13
# Price Updates
In order to improve the services we offer, we are increasing prices effective March 6th, 2025.
Here are the changes (all prices are per month, rolling contract):
| Name | Old Price | New Price |
| ---- | ----- | --------- |
| Tiny | 2 EUR | 2.70 EUR |
| Small | 4 EUR | 5.10 EUR |
| Medium | 8 EUR | 9.90 EUR |
| Large | 17 EUR | 21.90 EUR |
| X-Large | 30 EUR | 39.90 EUR |
| XX-Large | 45 EUR | 55.50 EUR |
These changes coincide with the release of custom pricing!

### We have also released a few other features:
- User configurable PTR records
- Separate billing page on VM info view
- VM resource usage graphs
- New VMs are assigned a forward DNS entry on lnvps.cloud (e.g. vm-1.lnvps.cloud); existing VMs will have a forward entry added at a later date.
- As well as many other smaller improvements in the handling of resource allocation
-

@ 000002de:c05780a7
2025-03-06 22:15:39
Been hearing clips of Newsom's new podcast. I've long said Newsom will run for president. I was saying this when he was the mayor of San Francisco. He is like a modern-day Bill Clinton. He is VERY gifted with the skills a politician needs. He's cool and calm. He's quick and sharp. His podcast isn't terrible and he's talking to people who disagree with him. He is also pissing off the more extreme members of his party with his pivots on many issues. He's even talking about men in women's sports.
Make no mistake. I think the dude is a snake and a criminal. I hope he never gets any other political office. I just think MANY, maybe most, people on the right underestimate this man. Had the Biden crime family actually cared about their party, they would have stepped down and let Newsom run. I think he would have defeated Trump.
I know that will piss many of you off, but I do not believe the US changed because the Orange man won an election. Trump was shooting fish in a barrel in the last election. Two attempts were made on his life. Biden ran the US into the ground. Harris is a joke. Newsom is not. Newsom is not a radical. He will move to the center, and that will appeal to a lot of people. Fools, but they are what they are.
originally posted at https://stacker.news/items/906052
-

@ d34e832d:383f78d0
2025-03-06 22:14:05
---
_A comprehensive system for archiving and managing large datasets efficiently on Linux._
---
## **1. Planning Your Data Archiving Strategy**
Before starting, define the structure of your archive:
✅ **What are you storing?** Books, PDFs, videos, software, research papers, backups, etc.
✅ **How often will you access the data?** Frequently accessed data should be on SSDs, while deep archives can remain on HDDs.
✅ **What organization method will you use?** Folder hierarchy and indexing are critical for retrieval.
---
## **2. Choosing the Right Storage Setup**
Since you plan to use **2TB HDDs and store them away**, here are Linux-friendly storage solutions:
### **📀 Offline Storage: Hard Drives & Optical Media**
✔ **External HDDs (2TB each)** – Use `ext4` or `XFS` for best performance.
✔ **M-DISC Blu-rays (100GB per disc)** – Excellent for long-term storage.
✔ **SSD (for fast access archives)** – More durable than HDDs but pricier.
### **🛠 Best Practices for Hard Drive Storage on Linux**
🔹 **Use `smartctl` to monitor drive health**
```bash
sudo apt install smartmontools
sudo smartctl -a /dev/sdX
```
🔹 **Store drives vertically in anti-static bags.**
🔹 **Rotate drives periodically** to prevent degradation.
🔹 **Keep in a cool, dry, dark place.**
### **☁ Cloud Backup (Optional)**
✔ **Arweave** – Decentralized storage for public data.
✔ **rclone + Backblaze B2/Wasabi** – Cheap, encrypted backups (example below).
✔ **Self-hosted options** – Nextcloud, Syncthing, IPFS.
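As a minimal sketch of the rclone option: after configuring a remote with `rclone config`, a one-way push of the archive looks like this (the remote name `b2` and bucket name are placeholders):
```bash
# One-way sync of the local archive to a cloud bucket (names are placeholders)
rclone sync /mnt/archive b2:my-archive-bucket --progress
```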
---
## **3. Organizing and Indexing Your Data**
### **📂 Folder Structure (Linux-Friendly)**
Use a clear hierarchy:
```plaintext
📁 /mnt/archive/
📁 Books/
📁 Fiction/
📁 Non-Fiction/
📁 Software/
📁 Research_Papers/
📁 Backups/
```
💡 **Use YYYY-MM-DD format for filenames**
✅ `2025-01-01_Backup_ProjectX.tar.gz`
✅ `2024_Complete_Library_Fiction.epub`
### **📑 Indexing Your Archives**
Use Linux tools to catalog your archive:
✔ **Generate a file index of a drive:**
```bash
find /mnt/DriveX > ~/Indexes/DriveX_index.txt
```
✔ **Use `locate` for fast searches:**
```bash
sudo updatedb # Update database
locate filename
```
✔ **Use `Recoll` for full-text search:**
```bash
sudo apt install recoll
recoll
```
🚀 **Store index files on a "Master Archive Index" USB drive.**
---
## **4. Compressing & Deduplicating Data**
To **save space and remove duplicates**, use:
✔ **Compression Tools:**
- `tar -cvf archive.tar folder/ && zstd archive.tar` (fast, modern compression)
- `7z a archive.7z folder/` (best for text-heavy files)
✔ **Deduplication Tools:**
- `fdupes -r /mnt/archive/` (finds duplicate files)
- `rdfind -deleteduplicates true /mnt/archive/` (removes duplicates automatically)
💡 **Use `par2` to create parity files for recovery:**
```bash
par2 create -r10 file.par2 file.ext
```
This helps reconstruct corrupted archives.
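If a later checksum or read error turns up, the same parity set can rebuild the file. A typical check-and-repair pass:
```bash
# Check the file against its parity data
par2 verify file.par2

# Reconstruct the damaged file from the recovery blocks
par2 repair file.par2
```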
---
## **5. Ensuring Long-Term Data Integrity**
Data can degrade over time. Use **checksums** to verify files.
✔ **Generate Checksums:**
```bash
sha256sum filename.ext > filename.sha256
```
✔ **Verify Data Integrity Periodically:**
```bash
sha256sum -c filename.sha256
```
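To scale this up, you can generate one manifest for an entire drive and re-verify the whole tree in a single pass (paths follow the examples above):
```bash
# Build a checksum manifest for every file on the archive
find /mnt/archive -type f -exec sha256sum {} + > ~/Indexes/archive.sha256

# Verify later; --quiet prints only the files that fail
sha256sum -c --quiet ~/Indexes/archive.sha256
```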
🔹 Use `SnapRAID` for multi-disk redundancy:
```bash
sudo apt install snapraid
snapraid sync
snapraid scrub
```
🔹 Consider **ZFS or Btrfs** for automatic error correction:
```bash
sudo apt install zfsutils-linux
zpool create archivepool /dev/sdX
```
---
## **6. Accessing Your Data Efficiently**
Even when archived, you may need to access files quickly.
✔ **Use Symbolic Links to "fake" files still being on your system:**
```bash
ln -s /mnt/driveX/mybook.pdf ~/Documents/
```
✔ **Use a Local Search Engine (`Recoll`):**
```bash
recoll
```
✔ **Search within text files using `grep`:**
```bash
grep -rnw '/mnt/archive/' -e 'Bitcoin'
```
---
## **7. Scaling Up & Expanding Your Archive**
Since you're storing **2TB drives and setting them aside**, keep them numbered and logged.
### **📦 Physical Storage & Labeling**
✔ Store each drive in a **fireproof safe or waterproof case**.
✔ Label drives (`Drive_001`, `Drive_002`, etc.).
✔ Maintain a **printed master list** of drive contents.
### **📶 Network Storage for Easy Access**
If your archive **grows too large**, consider:
- **NAS (TrueNAS, OpenMediaVault)** – Linux-based network storage.
- **JBOD (Just a Bunch of Disks)** – Cheap and easy expansion.
- **Deduplicated Storage** – `ZFS`/`Btrfs` with auto-checksumming.
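As a sketch of that last option, a two-disk ZFS mirror with compression provides checksummed, self-healing storage (device names are placeholders; `zstd` compression requires OpenZFS 2.0+, otherwise use `lz4`):
```bash
# Mirror two disks; every block is checksummed and repairable from its twin
zpool create archivepool mirror /dev/sdX /dev/sdY

# Enable transparent compression on the pool
zfs set compression=zstd archivepool
```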
---
## **8. Automating Your Archival Process**
If you frequently update your archive, automation is essential.
### **✔ Backup Scripts (Linux)**
#### **Use `rsync` for incremental backups:**
```bash
rsync -av --progress /source/ /mnt/archive/
```
#### **Automate Backup with Cron Jobs**
```bash
crontab -e
```
Add:
```plaintext
0 3 * * * rsync -av --delete /source/ /mnt/archive/
```
This runs the backup every night at 3 AM.
#### **Automate Index Updates**
```bash
0 4 * * * find /mnt/archive > ~/Indexes/master_index.txt
```
---
## **Final Considerations**
✔ **Be Consistent** – Maintain a structured system.
✔ **Test Your Backups** – Ensure archives are not corrupted before deleting originals (a quick check is shown below).
✔ **Plan for Growth** – Maintain an efficient catalog as data expands.
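One simple way to test an archive before deleting originals is to make tar read it end to end; corruption surfaces as a non-zero exit (filename follows the naming example above):
```bash
# List the archive to /dev/null; success means the stream decompressed cleanly
tar -tzf 2025-01-01_Backup_ProjectX.tar.gz > /dev/null && echo "Archive OK"
```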
For data hoarders seeking reliable 2TB storage solutions and appropriate physical storage containers, here's a comprehensive overview:
## **2TB Storage Options**
**1. Hard Disk Drives (HDDs):**
- **Western Digital My Book Series:** These external HDDs are designed to resemble a standard black hardback book. They come in various editions, such as Essential, Premium, and Studio, catering to different user needs.
- **Seagate Barracuda Series:** Known for affordability and performance, these HDDs are suitable for general usage, including data hoarding. They offer storage capacities ranging from 500GB to 8TB, with speeds up to 190MB/s.
**2. Solid State Drives (SSDs):**
- **Seagate Barracuda SSDs:** These SSDs come with either SATA or NVMe interfaces, storage sizes from 240GB to 2TB, and read speeds up to 560MB/s for SATA and 3,400MB/s for NVMe. They are ideal for faster data access and reliability.
**3. Network Attached Storage (NAS) Drives:**
- **Seagate IronWolf Series:** Designed for NAS devices, these drives offer HDD storage capacities from 1TB to 20TB and SSD capacities from 240GB to 4TB. They are optimized for multi-user environments and continuous operation.
## **Physical Storage Containers for 2TB Drives**
Proper storage of your drives is crucial to ensure data integrity and longevity. Here are some recommendations:
**1. Anti-Static Bags:**
Essential for protecting drives from electrostatic discharge, especially during handling and transportation.
**2. Protective Cases:**
- **Hard Drive Carrying Cases:** These cases offer padded compartments to securely hold individual drives, protecting them from physical shocks and environmental factors.
**3. Storage Boxes:**
- **Anti-Static Storage Boxes:** Designed to hold multiple drives, these boxes provide organized storage with anti-static protection, ideal for archiving purposes.
**4. Drive Caddies and Enclosures:**
- **HDD/SSD Enclosures:** These allow internal drives to function as external drives, offering both protection and versatility in connectivity.
**5. Fireproof and Waterproof Safes:**
For long-term storage, consider safes that protect against environmental hazards, ensuring data preservation even in adverse conditions.
**Storage Tips:**
- **Labeling:** Clearly label each drive with its contents and date of storage for easy identification.
- **Climate Control:** Store drives in a cool, dry environment to prevent data degradation over time.
By selecting appropriate 2TB storage solutions and ensuring they are stored in suitable containers, you can effectively manage and protect your data hoard.
Here’s a set of custom **Bash scripts** to automate your archival workflow on Linux:
### **1️⃣ Compression & Archiving Script**
This script compresses and archives files, organizing them by date.
```bash
#!/bin/bash
# Compress and archive files into dated folders
ARCHIVE_DIR="/mnt/backup"
DATE=$(date +"%Y-%m-%d")
BACKUP_DIR="$ARCHIVE_DIR/$DATE"
mkdir -p "$BACKUP_DIR"
# Find and compress files
find ~/Documents -type f -mtime -7 -print0 | tar --null -czvf "$BACKUP_DIR/archive.tar.gz" --files-from -
echo "Backup completed: $BACKUP_DIR/archive.tar.gz"
```
---
### **2️⃣ Indexing Script**
This script creates a list of all archived files and saves it for easy lookup.
```bash
#!/bin/bash
# Generate an index file for all backups
ARCHIVE_DIR="/mnt/backup"
INDEX_FILE="$ARCHIVE_DIR/index.txt"
find "$ARCHIVE_DIR" -type f -name "*.tar.gz" > "$INDEX_FILE"
echo "Index file updated: $INDEX_FILE"
```
---
### **3️⃣ Storage Space Monitor**
This script alerts you if the disk usage exceeds 90%.
```bash
#!/bin/bash
# Monitor storage usage
THRESHOLD=90
USAGE=$(df -P /mnt/backup | awk 'NR==2 {print $5}' | sed 's/%//')
if [ "$USAGE" -gt "$THRESHOLD" ]; then
echo "WARNING: Disk usage at $USAGE%!"
fi
```
---
### **4️⃣ Automatic HDD Swap Alert**
This script checks if a new 2TB drive is connected and notifies you.
```bash
#!/bin/bash
# Detect new drives and notify
WATCHED_SIZE="1.8T" # lsblk reports a 2TB drive as roughly 1.8T (TiB)
DEVICE=$(lsblk -dn -o NAME,SIZE | grep "$WATCHED_SIZE" | awk '{print $1}')
if [ -n "$DEVICE" ]; then
echo "New 2TB drive detected: /dev/$DEVICE"
fi
```
---
### **5️⃣ Symbolic Link Organizer**
This script creates symlinks to easily access archived files from a single directory.
```bash
#!/bin/bash
# Organize files using symbolic links
ARCHIVE_DIR="/mnt/backup"
LINK_DIR="$HOME/Archive_Links"
mkdir -p "$LINK_DIR"
ln -sf "$ARCHIVE_DIR"/*/*.tar.gz "$LINK_DIR/" # -f replaces stale links on re-runs
echo "Symbolic links updated in $LINK_DIR"
```
---
#### 🔥 **How to Use These Scripts:**
1. **Save each script** as a `.sh` file.
2. **Make them executable** using:
```bash
chmod +x script_name.sh
```
3. **Run manually or set up a cron job** for automation:
```bash
crontab -e
```
Add this line to run the backup every Sunday at midnight:
```bash
0 0 * * 0 /path/to/backup_script.sh
```
Here's a **Bash script** to encrypt your backups using **GPG (GnuPG)** for strong encryption. 🚀
---
### 🔐 **Backup & Encrypt Script**
This script will:
✅ **Compress** files into an archive
✅ **Encrypt** it using **GPG**
✅ **Store** it in a secure location
```bash
#!/bin/bash
# Backup and encrypt script
ARCHIVE_DIR="/mnt/backup"
DATE=$(date +"%Y-%m-%d")
BACKUP_FILE="$ARCHIVE_DIR/backup_$DATE.tar.gz"
ENCRYPTED_FILE="$BACKUP_FILE.gpg"
GPG_RECIPIENT="your@email.com" # Change this to your GPG key or use --symmetric for password-based encryption
mkdir -p "$ARCHIVE_DIR"
# Compress files
tar -czvf "$BACKUP_FILE" ~/Documents
# Encrypt the backup using GPG
gpg --output "$ENCRYPTED_FILE" --encrypt --recipient "$GPG_RECIPIENT" "$BACKUP_FILE"
# Verify encryption success
if [ -f "$ENCRYPTED_FILE" ]; then
echo "Backup encrypted successfully: $ENCRYPTED_FILE"
rm "$BACKUP_FILE" # Remove unencrypted file for security
else
echo "Encryption failed!"
fi
```
---
### 🔓 **Decrypting a Backup**
To restore a backup, run:
```bash
gpg --decrypt --output backup.tar.gz backup_YYYY-MM-DD.tar.gz.gpg
tar -xzvf backup.tar.gz
```
---
### 🔁 **Automating with Cron**
To run this script every Sunday at midnight:
```bash
crontab -e
```
Add this line:
```bash
0 0 * * 0 /path/to/encrypt_backup.sh
```
---
### 🔐 **Backup & Encrypt Script (Password-Based)**
This script:
✅ Compresses files into an archive
✅ Encrypts them using **GPG with a passphrase**
✅ Stores them in a secure location
```bash
#!/bin/bash
# Backup and encrypt script (password-based)
ARCHIVE_DIR="/mnt/backup"
DATE=$(date +"%Y-%m-%d")
BACKUP_FILE="$ARCHIVE_DIR/backup_$DATE.tar.gz"
ENCRYPTED_FILE="$BACKUP_FILE.gpg"
PASSPHRASE="YourStrongPassphraseHere" # Change this!
mkdir -p "$ARCHIVE_DIR"
# Compress files
tar -czvf "$BACKUP_FILE" ~/Documents
# Encrypt the backup with a password
gpg --batch --yes --passphrase "$PASSPHRASE" --symmetric --cipher-algo AES256 --output "$ENCRYPTED_FILE" "$BACKUP_FILE"
# Verify encryption success
if [ -f "$ENCRYPTED_FILE" ]; then
echo "Backup encrypted successfully: $ENCRYPTED_FILE"
rm "$BACKUP_FILE" # Remove unencrypted file for security
else
echo "Encryption failed!"
fi
```
---
### 🔓 **Decrypting a Backup**
To restore a backup, run:
```bash
gpg --batch --yes --passphrase "YourStrongPassphraseHere" --decrypt --output backup.tar.gz backup_YYYY-MM-DD.tar.gz.gpg
tar -xzvf backup.tar.gz
```
---
### 🔁 **Automating with Cron**
To run this script every Sunday at midnight:
```bash
crontab -e
```
Add this line:
```bash
0 0 * * 0 /path/to/encrypt_backup.sh
```
---
### 🔥 **Security Best Practices**
- **Do NOT hardcode the password in the script.** Instead, store it in a secure location like a `.gpg-pass` file and use:
```bash
PASSPHRASE=$(cat /path/to/.gpg-pass)
```
- **Use a strong passphrase** with at least **16+ characters**.
- **Consider using a hardware security key** or **YubiKey** for extra security.
---
Here's how you can add **automatic cloud syncing** to your encrypted backups. This script will sync your encrypted backups to a cloud storage service like **rsync.net**, **Dropbox**, or **Nextcloud** using the **rclone** tool, which is compatible with many cloud providers.
### **Step 1: Install rclone**
First, you need to install `rclone` if you haven't already. It’s a powerful tool for managing cloud storage.
1. Install rclone:
```bash
curl https://rclone.org/install.sh | sudo bash
```
2. Configure rclone with your cloud provider (e.g., Google Drive):
```bash
rclone config
```
Follow the prompts to set up your cloud provider. After configuration, you'll have a "remote" (e.g., `rsync` for https://rsync.net) to use in the script.
---
### 🔐 **Backup, Encrypt, and Sync to Cloud Script**
This script will:
✅ Compress files into an archive
✅ Encrypt them with a password
✅ Sync the encrypted backup to the cloud storage
```bash
#!/bin/bash
# Backup, encrypt, and sync to cloud script (password-based)
ARCHIVE_DIR="/mnt/backup"
DATE=$(date +"%Y-%m-%d")
BACKUP_FILE="$ARCHIVE_DIR/backup_$DATE.tar.gz"
ENCRYPTED_FILE="$BACKUP_FILE.gpg"
PASSPHRASE="YourStrongPassphraseHere" # Change this!
# Cloud configuration (rclone remote name)
CLOUD_REMOTE="gdrive" # Change this to your remote name (e.g., 'gdrive', 'dropbox', 'nextcloud')
CLOUD_DIR="backups" # Cloud directory where backups will be stored
mkdir -p "$ARCHIVE_DIR"
# Compress files
tar -czvf "$BACKUP_FILE" ~/Documents
# Encrypt the backup with a password
gpg --batch --yes --passphrase "$PASSPHRASE" --symmetric --cipher-algo AES256 --output "$ENCRYPTED_FILE" "$BACKUP_FILE"
# Verify encryption success
if [ -f "$ENCRYPTED_FILE" ]; then
echo "Backup encrypted successfully: $ENCRYPTED_FILE"
rm "$BACKUP_FILE" # Remove unencrypted file for security
# Sync the encrypted backup to the cloud using rclone
rclone copy "$ENCRYPTED_FILE" "$CLOUD_REMOTE:$CLOUD_DIR" --progress
# Verify sync success
if [ $? -eq 0 ]; then
echo "Backup successfully synced to cloud: $CLOUD_REMOTE:$CLOUD_DIR"
rm "$ENCRYPTED_FILE" # Remove local backup after syncing
else
echo "Cloud sync failed!"
fi
else
echo "Encryption failed!"
fi
```
---
### **How to Use the Script:**
1. **Edit the script**:
- Change the `PASSPHRASE` to a secure passphrase.
- Change `CLOUD_REMOTE` to your cloud provider’s rclone remote name (e.g., `gdrive`, `dropbox`).
- Change `CLOUD_DIR` to the cloud folder where you'd like to store the backup.
2. **Set up a cron job** for automatic backups:
- To run the backup every Sunday at midnight, add this line to your crontab:
```bash
crontab -e
```
Add:
```bash
0 0 * * 0 /path/to/backup_encrypt_sync.sh
```
---
### 🔥 **Security Tips:**
- **Store the passphrase securely** (e.g., use a `.gpg-pass` file with `cat /path/to/.gpg-pass`).
- Use **rclone's encryption** feature for sensitive data in the cloud if you want to encrypt before uploading.
- Use **multiple cloud services** (e.g., Google Drive and Dropbox) for redundancy.
---
```plaintext
+------------------------------+
| 1. Planning Data Archiving |
| Strategy |
+------------------------------+
|
v
+------------------------------+
| What are you storing? |
| (Books, PDFs, Software, etc.)|
+------------------------------+
|
v
+------------------------------+
| How often to access data? |
| (Fast SSD vs. Long-term HDD) |
+------------------------------+
|
v
+------------------------------+
| Organization method (Folder |
| structure, indexing) |
+------------------------------+
|
v
+------------------------------+
| 2. Choosing Right Storage |
| Setup |
+------------------------------+
|
v
+-----------------------------------------------+
| HDDs (2TB), M-DISC Blu-rays, or SSD for fast |
| access archives |
+-----------------------------------------------+
|
v
+-----------------------------------------------+
| Offline Storage - Best Practices: |
| Use ext4/XFS, store vertically, rotate, etc. |
+-----------------------------------------------+
|
v
+------------------------------+
| 3. Organizing & Indexing |
| Your Data |
+------------------------------+
|
v
+------------------------------+
| Folder structure (YYYY-MM-DD)|
+------------------------------+
|
v
+------------------------------+
| Indexing: locate, Recoll, find|
| command |
+------------------------------+
|
v
+------------------------------+
| 4. Compress & Deduplicate |
| Data |
+------------------------------+
|
v
+-----------------------------------------------+
| Use compression tools (tar, 7z) & dedup tools |
| (fdupes, rdfind) |
+-----------------------------------------------+
|
v
+------------------------------+
| 5. Ensuring Long-Term Data |
| Integrity |
+------------------------------+
|
v
+-----------------------------------------------+
| Generate checksums, periodic verification |
| SnapRAID or ZFS for redundancy |
+-----------------------------------------------+
|
v
+------------------------------+
| 6. Accessing Data Efficiently|
+------------------------------+
|
v
+-----------------------------------------------+
| Use symlinks, local search engines, grep |
+-----------------------------------------------+
|
v
+------------------------------+
| 7. Scaling & Expanding Your |
| Archive |
+------------------------------+
|
v
+-----------------------------------------------+
| Physical storage options (fireproof safe) |
| Network storage (NAS, JBOD) |
+-----------------------------------------------+
|
v
+------------------------------+
| 8. Automating Your Archival |
| Process |
+------------------------------+
|
v
+-----------------------------------------------+
| Use cron jobs, backup scripts (rsync) |
| for automated updates |
+-----------------------------------------------+
```
-

@ d34e832d:383f78d0
2025-03-06 21:57:23
https://pub-53ed77d5544b46628691823c1795f2c7.r2.dev/Reticulum-Unstoppable-Network-Compressed.mp4
[npub16d8gxt2z4k9e8sdpc0yyqzf5gp0np09ls4lnn630qzxzvwpl0rgq5h4rzv]
### **What is Reticulum?**
Reticulum is a cryptographic networking stack designed for resilient, decentralized, and censorship-resistant communication. Unlike the traditional internet, Reticulum enables fully independent digital communications over various physical mediums, such as radio, LoRa, serial links, and even TCP/IP.
The key advantages of Reticulum include:
- **Decentralization** – No reliance on centralized infrastructure.
- **Encryption & Privacy** – End-to-end encryption built-in.
- **Resilience** – Operates over unreliable and low-bandwidth links.
- **Interoperability** – Works over WiFi, LoRa, Bluetooth, and more.
- **Ease of Use** – Can run on minimal hardware, including Raspberry Pi and embedded devices.
Reticulum is ideal for off-grid, censorship-resistant communications, emergency preparedness, and secure messaging.
---
## **1. Getting Started with Reticulum**
To quickly get started with Reticulum, follow the official guide:
[Reticulum: Getting Started Fast](https://markqvist.github.io/Reticulum/manual/gettingstartedfast.html)
### **Step 1: Install Reticulum**
#### **On Linux (Debian/Ubuntu-based systems)**
```sh
sudo apt update && sudo apt upgrade -y
sudo apt install -y python3-pip
pip3 install rns
```
#### **On Raspberry Pi or ARM-based Systems**
```sh
pip3 install rns
```
#### **On Windows**
Using Windows Subsystem for Linux (WSL) or Python:
```sh
pip install rns
```
#### **On macOS**
```sh
pip3 install rns
```
---
## **2. Configuring Reticulum**
Once installed, Reticulum needs a configuration file. The default location is:
```sh
~/.reticulum/config
```
To generate the default configuration:
```sh
rnsd
```
This creates a configuration file with default settings.
---
## **3. Using Reticulum**
### **Starting the Reticulum Daemon**
To run the Reticulum daemon (`rnsd`), use:
```sh
rnsd
```
This starts the network stack, allowing applications to communicate over Reticulum.
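As a minimal sketch of what an application on top of the stack looks like in Python (the app name and aspect below are made-up placeholders), a few lines with the `RNS` library are enough to join the network and announce a destination:
```python
import RNS

# Connect to (or start) the local Reticulum instance using the default config
reticulum = RNS.Reticulum()

# Generate a fresh cryptographic identity for this endpoint
identity = RNS.Identity()

# Register an inbound, single-identity destination under a placeholder app name
destination = RNS.Destination(
    identity,
    RNS.Destination.IN,
    RNS.Destination.SINGLE,
    "example_app",
    "demo",
)

# Announce the destination so other nodes can discover and reach it
destination.announce()
```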
### **Testing Your Reticulum Node**
Run the diagnostic tool to ensure your node is functioning:
```sh
rnstatus
```
This shows the status of all connected interfaces and peers.
---
## **4. Adding Interfaces**
### **LoRa Interface (for Off-Grid Communications)**
Reticulum drives LoRa radios via the **RNode** firmware, which many common LoRa boards (including RAK Wireless and Meshtastic-class hardware) can be flashed with. To add a LoRa interface, add an entry like this under the `[interfaces]` section of the config file:
```
  [[RNode LoRa Interface]]
    type = RNodeInterface
    interface_enabled = True
    port = /dev/ttyUSB0
    frequency = 867200000
    bandwidth = 125000
    txpower = 7
    spreadingfactor = 9
    codingrate = 5
```
Restart Reticulum to apply the changes.
### **Serial (For Direct Device-to-Device Links)**
For communication over serial links (e.g., between two Raspberry Pis):
```
  [[Serial Interface]]
    type = SerialInterface
    interface_enabled = True
    port = /dev/ttyUSB0
    speed = 115200
```
### **TCP/IP (For Internet-Based Nodes)**
If you want to bridge your Reticulum node over an existing IP network:
```
  [[TCP Server Interface]]
    type = TCPServerInterface
    interface_enabled = True
    listen_ip = 0.0.0.0
    listen_port = 4242
```
---
## **5. Applications Using Reticulum**
### **LXMF (LoRa Mesh Messaging Framework)**
LXMF is a delay-tolerant, fully decentralized messaging system that operates over Reticulum. It allows encrypted, store-and-forward messaging without requiring an always-online server.
To install:
```sh
pip3 install lxmf
```
To start the LXMF node:
```sh
lxmd
```
### **Nomad Network (Decentralized Chat & File Sharing)**
Nomad is a Reticulum-based chat and file-sharing platform, ideal for **off-grid** communication.
To install:
```sh
pip3 install nomadnet
```
To run:
```sh
nomadnet
```
### **Mesh Networking with Meshtastic & Reticulum**
Reticulum can work alongside **Meshtastic** for true decentralized long-range communication.
Many boards sold for Meshtastic can be flashed with the RNode firmware, at which point they work with Reticulum as a standard `RNodeInterface` (see the LoRa interface example above).
---
## **6. Security & Privacy Features**
- **Automatic End-to-End Encryption** – Every message is encrypted by default.
- **No Centralized Logging** – Communication leaves no metadata traces.
- **Self-Healing Routing** – Designed to work in unstable or hostile environments.
---
## **7. Practical Use Cases**
- **Off-Grid Communication** – Works in remote areas without cellular service.
- **Censorship Resistance** – Cannot be blocked by ISPs or governments.
- **Emergency Networks** – Enables resilient communication during disasters.
- **Private P2P Networks** – Create a secure, encrypted communication layer.
---
## **8. Further Exploration & Documentation**
- **Reticulum Official Manual**: [https://markqvist.github.io/Reticulum/manual/](https://markqvist.github.io/Reticulum/manual/)
- **Reticulum GitHub Repository**: [https://github.com/markqvist/Reticulum](https://github.com/markqvist/Reticulum)
- **Nomad Network**: [https://github.com/markqvist/NomadNet](https://github.com/markqvist/NomadNet)
- **Meshtastic + Reticulum**: [https://meshtastic.org](https://meshtastic.org)
---
## **Connections (Links to Other Notes)**
- **Mesh Networking for Decentralized Communication**
- **LoRa and Off-Grid Bitcoin Transactions**
- **Censorship-Resistant Communication Using Nostr & Reticulum**
## **Tags**
#Reticulum #DecentralizedComms #MeshNetworking #CensorshipResistance #LoRa
## **Donations via**
- **Bitcoin Lightning**: lightninglayerhash@getalby.com
-

@ 43baaf0c:d193e34c
2025-03-06 21:38:10
From Bangkok to Las Palmas de Gran Canaria.

For the past three years, I’ve traveled from Bangkok to Las Palmas de Gran Canaria, with a stop in Dubai: a 24-hour journey that brings me back to Europe and to my artist friend, Alecs Navio. Along with his wife, he runs a coworking space called Soppa de Azul.
The main reason I return here is to create new art. Alecs constantly inspires me—we talk about art, artists, and he shares books that spark new ideas for my work. As an artist, I believe it’s essential to keep evolving. Growth comes from inspiration, and there’s no better source than fellow artists. Surrounding yourself with creative minds fuels your passion, and it all starts with conversations about art and life.

Today was a perfect example of why I’m here. I looked at some of my older artwork hanging in the coworking space and said I didn’t like it anymore. Alecs reminded me that I should appreciate my past work because it’s part of my journey. Without it, I wouldn’t be the artist I am today.
I always say the journey is the destination, and Alecs helped me see that this applies to art as well. This is why I believe in surrounding myself with people who inspire me, those who celebrate my growth and remind me why they are such an important part of my journey.
<iframe width="560" height="315" src="https://www.youtube.com/embed/o5UohDfgK5g?si=UQbaf4jkkrXz8VVR" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
-

@ 0c503f08:4aed05c7
2025-03-06 21:28:16
My host is Debian and I'm using VirtualBox. Everything seems to be working well.
originally posted at https://stacker.news/items/906016
-

@ d6c48950:54d57756
2025-03-06 21:20:45
I wanted to write up my system for bitcoin inheritance and seed storage, one that will likely outlive me. The reason: Bitkey (Square’s hardware wallet) recently announced their inheritance system, which is a vast improvement but still has a single point of failure: Square and the app they maintain. It’s still a good thing that will improve the ecosystem and raise awareness, but there is a cheaper method that is just as secure and doesn’t have a single point of failure.
## 2/3 seed storage
2/3 seed storage is a pretty simple way of splitting a seed phrase into three parts: any one part on its own is useless, while any two parts together are complete. If one piece is destroyed, it doesn’t matter (demo below).
| A | B | C |
|-----|-----|-----|
| 1. apple | 2. zipper | 3. dog |
| 4. tree | 5. car | 6. bus |
| 7. banana | 8. motorbike | 9. dune |
| 10. frank | 11. foundation | 12. meditation |
| 13. whiteboard | 14. laptop | 15. books |
| 16. perfume | 17. computer | 18. stone |
| 19. brick | 20. spreadsheet | 21. bird |
| 22. blog | 23. leaves | 24. grass |
This is a seed phrase split into three columns (A, B, C). Now you can create your three parts, each containing two of the three columns:
(1)
| A | B | |
|-----|-----|-----|
| 1. apple | 2. zipper | |
| 4. tree | 5. car | |
| 7. banana | 8. motorbike | |
| 10. frank | 11. foundation | |
| 13. whiteboard | 14. laptop | |
| 16. perfume | 17. computer | |
| 19. brick | 20. spreadsheet | |
| 22. blog | 23. leaves | |
(2)
| | B | C |
|-----|-----|-----|
| | 2. zipper | 3. dog |
| | 5. car | 6. bus |
| | 8. motorbike | 9. dune |
| | 11. foundation | 12. meditation |
| | 14. laptop | 15. books |
| | 17. computer | 18. stone |
| | 20. spreadsheet | 21. bird |
| | 23. leaves | 24. grass |
(3)
| A | | C |
|-----|-----|-----|
| 1. apple | | 3. dog |
| 4. tree | | 6. bus |
| 7. banana | | 9. dune |
| 10. frank | | 12. meditation |
| 13. whiteboard | | 15. books |
| 16. perfume | | 18. stone |
| 19. brick | | 21. bird |
| 22. blog | | 24. grass |
Now you have your parts; you need at least two of the three for the set to be useful.
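If you want to sanity-check the layout, here is a small illustrative Python sketch (not part of the scheme itself) showing that any two parts cover all 24 numbered words:
```python
# Illustrative only: split 24 numbered seed words into columns A, B, C,
# then build three parts, each holding two of the three columns.
words = [f"word{i}" for i in range(1, 25)]  # stand-ins for your real seed words

indexed = list(enumerate(words, start=1))   # keep each word's position!
A, B, C = indexed[0::3], indexed[1::3], indexed[2::3]

part1 = A + B  # missing column C
part2 = B + C  # missing column A
part3 = A + C  # missing column B

# Any two parts together recover every position 1..24
recovered = sorted(set(part1) | set(part2))
assert [i for i, _ in recovered] == list(range(1, 25))
```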
## distribution
Distribution is pretty simple: keep one part, give a part to whomever you want to be able to claim your bitcoin upon death, and give a part to someone you trust (along with instructions to post it to the claimant upon your death).
## failure
For this to fail, one of the following would have to happen:
1. Two out of three parts would have to be destroyed
2. The trusted party would have to withhold it AND either your part or the claimant's would have to be destroyed
3. The trusted party cannot figure out how to use a seed phrase (so by default you should include instructions, i.e. NEVER SHARE THE SEED; restore it into a recommended wallet from bitcoin.org, then transfer to an exchange and sell)
-

@ 43baaf0c:d193e34c
2025-03-06 20:55:27
Bangkok art city.

Bangkok is a highly creative city, which is one of the reasons I love living here. I’d love to hold a second exhibition, something special and even bigger than before. The fact that all major galleries are free to the public says a lot about how much Bangkok values art.
<iframe width="560" height="315" src="https://www.youtube.com/embed/6ddpEoSn_os?si=YKg6JRcsta1oNY9b" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
Over the last five months, I’ve been developing BangPOP art as both a concept and a blueprint for exhibitions worldwide. It serves as a guideline to ensure recognizable elements in each exhibition or event. While the artwork itself will always be unique, the POP Up exhibitions will have a distinct and recognizable identity wherever they take place.

You can read the POP exhibition blueprint here: https://bitpopart.com/bangpop

Unfortunately, my plan to hold an exhibition at River City in Bangkok doesn’t seem to be coming together. Here’s the curator’s note:
‘our exhibition schedule on the 2nd floor this year and next year are quite packed and we have received numerous proposals at this moment.‘

After considering alternative venues in Bangkok, I’m optimistic about finding the right fit. For now, my focus is shifting to Europe, where I’ll use the BangPOP blueprint as my guiding framework.


Thank you Bangkok!

-

@ f3873798:24b3f2f3
2025-03-06 19:21:36
Hello everyone!
These high temperatures combined with periods of rain are a perfect recipe for mosquito proliferation, which, besides being a nuisance, can become a public health issue because of the diseases they carry.
So I've brought you a little recipe to drive the mosquitoes out of your home.
Let's go!
Mosquito-Repellent Recipe
Ingredients:
500 mL of grain alcohol or 70% alcohol
10 g of cloves
10 drops of citronella essential oil
10 drops of lavender essential oil
1 spray bottle
Preparation:
1. Place the cloves in a jar with the grain alcohol.
2. Let the mixture rest for at least 24 hours, shaking it occasionally.
3. After that period, strain out the cloves and add the citronella and lavender essential oils.
4. Pour the liquid into a spray bottle and spray around the rooms of your house, especially near windows and doors.
And goodbye mosquitoes 😎
-

@ 15a592c4:e8bdd024
2025-03-06 18:59:13
"Suffering has been stronger than all other teaching, and has taught me to understand what your heart used to be. I have been bent and broken, but – I hope – into a better shape." – Charles Dickens
Suffering is an inevitable part of life, and it can be a transformative experience that teaches us valuable lessons. While it's natural to try to avoid pain and hardship, embracing suffering as a teacher can help us grow, learn, and rise above our challenges. In this article, we'll explore the lessons that only suffering can teach.
Lessons:
1. Resilience and Adaptability
Suffering teaches us to be resilient and adaptable in the face of adversity. When we're forced to navigate difficult circumstances, we learn to adjust our expectations, priorities, and strategies. This adaptability helps us develop coping mechanisms and bounce back from setbacks.
2. Empathy and Compassion
Suffering gives us a deeper understanding of others who are struggling. When we've experienced pain and hardship ourselves, we're more likely to empathize with others who are going through similar challenges. This empathy fosters compassion, kindness, and a stronger sense of community.
3. Gratitude and Appreciation
Suffering helps us appreciate the good things in life and cultivate gratitude. When we've faced hardship, we're more likely to cherish the people, experiences, and moments that bring us joy. This gratitude shifts our focus from what's lacking to what we already have.
4. Self-Awareness and Introspection
Suffering prompts us to look inward and confront our own strengths, weaknesses, and motivations. Through introspection, we gain a deeper understanding of ourselves, our values, and our goals. This self-awareness helps us make positive changes and grow as individuals.
5. Hope and Perseverance
Suffering teaches us to hold onto hope, even in the darkest moments. When we've faced adversity and come out the other side, we develop a sense of perseverance that helps us push through challenges. This hope and perseverance give us the strength to keep moving forward, even when the road ahead seems uncertain.
Conclusion
Suffering is an inevitable part of life, but it can also be a transformative teacher. By embracing the lessons that suffering can teach us, we can rise above our challenges, grow as individuals, and develop the resilience, empathy, gratitude, self-awareness, and hope that we need to navigate life's ups and downs.
-

@ 15a592c4:e8bdd024
2025-03-06 18:51:27
*Strategies for Effective Time Management and Productivity*
In today's fast-paced world, managing time effectively is crucial for achieving success in both personal and professional life. With numerous tasks competing for our attention, it's easy to get bogged down and lose focus. However, by implementing the right strategies, you can optimize your time management skills, boost productivity, and accomplish your goals.
*Set Clear Goals*
Before you can manage your time effectively, you need to know what you want to achieve. Setting clear goals helps you focus on what's truly important and allocate your time accordingly. Try to set SMART (Specific, Measurable, Achievable, Relevant, and Time-bound) goals that align with your priorities.
*Use a Scheduling Tool*
A scheduling tool, such as a planner, calendar, or app, helps you organize your tasks, appointments, and deadlines. Write down all your tasks, big and small, and allocate specific time slots for each. Set reminders and notifications to ensure you stay on track.
*Prioritize Tasks*
Not all tasks are created equal. Prioritize tasks based on their urgency and importance. Use the Eisenhower Matrix to categorize tasks into four quadrants:
1. Urgent and important (Do first)
2. Important but not urgent (Schedule)
3. Urgent but not important (Delegate)
4. Not urgent or important (Eliminate)
*Avoid Multitasking*
Multitasking may seem like an efficient way to get things done, but it can actually decrease productivity and increase stress. Focus on one task at a time, and give it your undivided attention.
*Manage Distractions*
Distractions are everywhere, from social media to email notifications. Identify your most significant distractions and eliminate them while you work. Use tools like website blockers or apps that help you stay focused.
*Take Breaks*
Taking regular breaks can help you recharge and maintain productivity. Use the Pomodoro Technique: work for 25 minutes, take a 5-minute break, and repeat.
*Learn to Say No*
Don't take on too much by trying to please everyone. Learn to say no to tasks that don't align with your goals or values. Remember, saying no to something that doesn't serve you means saying yes to yourself.
*Review and Adjust*
Regularly review your time management strategy to identify areas for improvement. Adjust your schedule, goals, and habits as needed.
Effective time management and productivity require discipline, intention, and strategy. By implementing these strategies, you'll be able to:
- Achieve your goals
- Reduce stress and anxiety
- Increase productivity and efficiency
- Enjoy a better work-life balance
Remember, time management is a skill that takes practice, so be patient and persistent. With the right strategies and mindset, you can master your time and unlock your potential.
-

@ 97c70a44:ad98e322
2025-03-06 18:38:10
When developing on nostr, normally it's enough to read the NIP related to a given feature you want to build to know what has to be done. But there are some aspects of nostr development that aren't so straightforward because they depend less on specific data formats than on how different concepts are combined.
An example of this is how for a while it was considered best practice to re-publish notes when replying to them. This practice emerged before the outbox model gained traction, and was a hacky way of attempting to ensure relays had the full context required for a given note. Over time though, pubkey hints emerged as a better way to ensure other clients could find required context.
Another one of these things is "relay-based groups", or as I prefer to call it "relays-as-groups" (RAG). Such a thing doesn't really exist - there's no spec for it (although some _aspects_ of the concept are included in NIP 29), but at the same time there are two concrete implementations (Flotilla and Chachi) which leverage several different NIPs in order to create a cohesive system for groups on nostr.
This composability is one of the neat qualities of nostr. Not only would it be unhelpful to specify how different parts of the protocol should work together, it would be impossible because of the sheer number of combinations that arise from applying a little bit of common sense to the NIPs repo. No one said it was ok to put `t` tags on a `kind 0`. But no one's stopping you! And the semantics are basically self-evident if you understand its component parts.
So, instead of writing a NIP that sets relay-based groups in stone, I'm writing this guide in order to document how I've combined different parts of the nostr protocol to create a compelling architecture for groups.
## Relays
Relays already have a canonical identity, which is the relay's url. Events posted to a relay can be thought of as "posted to that group". This means that every relay is already a group. All nostr notes have already been posted to one or more groups.
One common objection to this structure is that identifying a group with a relay means that groups are dependent on the relay to continue hosting the group. In normal broadcast nostr (which forms organic permissionless groups based on user-centric social clustering), this is a very bad thing, because hosts are orthogonal to group identity. Communities are completely different. Communities actually need someone to enforce community boundaries, implement moderation, etc. Reliance on a host is a feature, not a bug (in contrast to NIP 29 groups, which tend to co-locate many groups on a single host, relays-as-groups tends to encourage one group, one host).
This doesn't mean that federation, mirrors, and migration can't be accomplished. In a sense, leaving this on the social layer is a good thing, because it adds friction to the dissolution/forking of a group. But the door is wide open to protocol additions to support those use cases for relay-based groups. One possible approach would be to follow [this draft PR](https://github.com/coracle-social/nips/blob/60179dfba2a51479c569c9192290bb4cefc660a8/xx.md#federation) which specified a "federation" event relays could publish on their own behalf.
## Relay keys
[This draft PR to NIP 11](https://github.com/nostr-protocol/nips/pull/1764) specifies a `self` field which represents the relay's identity. Using this, relays can publish events on their own behalf. Currently, the `pubkey` field sort of does the same thing, but is overloaded as a contact field for the owner of the relay.
## AUTH
Relays can control access using [NIP 42 AUTH](https://github.com/nostr-protocol/nips/blob/master/42.md). There are any number of modes a relay can operate in:
1. No auth, fully public - anyone can read/write to the group.
2. Relays may enforce broad or granular access controls with AUTH.
Relays may deny EVENTs or REQs depending on user identity. Messages returned in AUTH, CLOSED, or OK responses should be human readable. It's crucial that clients show these error messages to users. Here's how Flotilla handles failed AUTH and denied event publishing:

[LIMITS](https://github.com/nostr-protocol/nips/pull/1434) could also be used in theory to help clients adapt their interface depending on user abilities and relay policy.
3. AUTH with implicit access controls.
In this mode, relays may exclude matching events from REQs if the user does not have permission to view them. This can be useful for multi-use relays that host hidden rooms. This mode should be used with caution, because it can result in confusion for the end user.
See [Triflector](https://github.com/coracle-social/triflector) for a relay implementation that supports some of these auth policies.
## Invite codes
If a user doesn't have access to a relay, they can request access using [this draft NIP](https://github.com/nostr-protocol/nips/pull/1079). This is true whether access has been explicitly or implicitly denied (although users will have to know that they should use an invite code to request access).
The above referenced NIP also contains a mechanism for users to request an invite code that they can share with other users.
The policy for these invite codes is entirely up to the relay. They may be single-use, multi-use, or require additional verification. Additional requirements can be communicated to the user in the OK message, for example directions to visit an external URL to register.
See [Triflector](https://github.com/coracle-social/triflector) for a relay implementation that supports invite codes.
## Content
Any kind of event can be published to a relay being treated as a group, unless rejected by the relay implementation. In particular, [NIP 7D](https://github.com/nostr-protocol/nips/blob/master/7D.md) was added to support basic threads, and [NIP C7](https://github.com/nostr-protocol/nips/blob/master/C7.md) for chat messages.
Since which relay an event came from determines which group it was posted to, clients need to have a mechanism for keeping track of which relay they received an event from, and should not broadcast events to other relays (unless intending to cross-post the content).
## Rooms
Rooms follow [NIP 29](https://github.com/nostr-protocol/nips/blob/master/29.md). I wish NIP 29 wasn't called "relay based groups", which is very confusing when talking about "relays as groups". It's much better to think of them as sub-groups, or as Flotilla calls them, "rooms".
Rooms have two modes - managed and unmanaged. Managed rooms follow all the rules laid out in NIP 29 about metadata published by the relay and user membership. In either case, rooms are represented by a random room id, and are posted to by including the id in an event's `h` tag. This allows rooms to switch between managed and unmanaged modes without losing any content.
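For example, a chat message posted to a room might carry the room id like this (a simplified sketch; the id is a placeholder):
```json
{
  "kind": 9,
  "content": "hello room",
  "tags": [["h", "groupid"]]
}
```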
Managed room names come from `kind 39000` room meta events, but unmanaged rooms don't have these. Instead, room names should come from members' NIP 51 `kind 10009` membership lists. Tags on these lists should look like this: `["group", "groupid", "wss://group.example.com", "Cat lovers"]`. If no name can be found for the room (i.e., there aren't any members), the room should be ignored by clients.
Rooms present a difficulty for publishing to the relay as a whole, since content with an `h` tag can't be excluded from requests. Currently, relay-wide posts are h-tagged with `_` which works for "group" clients, but not more generally. I'm not sure how to solve this other than to ask relays to support negative filters.
## Cross-posting
The simplest way to cross-post content from one group (or room) to another, is to quote the original note in whatever event kind is appropriate. For example, a blog post might be quoted in a `kind 9` to be cross-posted to chat, or in a `kind 11` to be cross-posted to a thread. `kind 16` reposts can be used the same way if the reader's client renders reposts.
Posting the original event to multiple relays-as-groups is trivial, since all you have to do is send the event to the relay. Posting to multiple rooms simultaneously by appending multiple `h` tags is however not recommended, since group relays/clients are incentivised to protect themselves from spam by rejecting events with multiple `h` tags (similar to how events with multiple `t` tags are sometimes rejected).
## Privacy
Currently, it's recommended to include a [NIP 70](https://github.com/nostr-protocol/nips/blob/master/70.md) `-` tag on content posted to relays-as-groups to discourage replication of relay-specific content across the network.
Another slightly stronger approach would be for group relays to strip signatures in order to make events invalid (or at least deniable). For this approach to work, users would have to be able to signal that they trust relays to be honest. We could also [use ZkSNARKS](https://github.com/nostr-protocol/nips/pull/1682) to validate signatures in bulk.
In any case, group posts should not be considered "private" in the same way E2EE groups might be. Relays-as-groups should be considered a good fit for low-stakes groups with many members (since trust deteriorates quickly as more people get involved).
## Membership
There is currently no canonical member list published by relays (except for NIP 29 managed rooms). Instead, users keep track of their own relay and room memberships using `kind 10009` lists. Relay-level memberships are represented by an `r` tag containing the relay url, and room-level memberships are represented using a `group` tag.
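As a simplified sketch (ids and urls are placeholders), such a membership list event might look like:
```json
{
  "kind": 10009,
  "pubkey": "<user-pubkey>",
  "tags": [
    ["r", "wss://group.example.com"],
    ["group", "groupid", "wss://group.example.com", "Cat lovers"]
  ],
  "content": "<encrypted tags for private memberships, if any>"
}
```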
Users can choose to advertise their membership in a RAG by using unencrypted tags, or they may keep their membership private by using encrypted tags. Advertised memberships are useful for helping people find groups based on their social graph:

User memberships should not be trusted, since they can be published unilaterally by anyone, regardless of actual access. Possible improvements in this area would be the ability to provide proof of access:
- Relays could publish member lists (although this would sacrifice member privacy)
- Relays could support a new command that allows querying a particular member's access status
- Relays could provide a proof to the member that they could then choose to publish or not
## Moderation
There are two parts to moderation: reporting and taking action based on these reports.
Reporting is already covered by [NIP 56](https://github.com/nostr-protocol/nips/blob/master/56.md). Clients should be careful about encouraging users to post reports for illegal content under their own identity, since that can itself be illegal. Relays also should not serve reports to users, since that can be used to _find_ rather than address objectionable content.
Reports are only one mechanism for flagging objectionable content. Relay operators and administrators can use whatever heuristics they like to identify and address objectionable content. This might be via automated policies that auto-ban based on reports from high-reputation people, a client that implements [NIP 86](https://github.com/nostr-protocol/nips/blob/master/86.md) relay management API, or by some other admin interface.
There's currently no way for moderators of a given relay to be advertised, or for a moderator's client to know that the user is a moderator (so that they can enable UI elements for in-app moderation). This could be addressed via [NIP 11](https://github.com/nostr-protocol/nips/blob/master/11.md), [LIMITS](https://github.com/nostr-protocol/nips/pull/1434), or some other mechanism in the future.
## General best practices
In general, it's very important when developing a client to assume that the relay has _no_ special support for _any_ of the above features, instead treating all of this stuff as [progressive enhancement](https://developer.mozilla.org/en-US/docs/Glossary/Progressive_Enhancement).
For example, if a user enters an invite code, go ahead and send it to the relay using a `kind 28934` event. If it's rejected, you know that it didn't work. But if it's accepted, you don't know that it worked - you only know that the relay allowed the user to publish that event. This is helpful, because it may imply that the user does indeed have access to the relay. But additional probing may be needed, and reliance on error messages down the road when something else fails unexpectedly is indispensable.
This paradigm may drive some engineers nuts, because it's basically equivalent to coding your clients to reverse-engineer relay support for every feature you want to use. But this is true of nostr as a whole - anyone can put whatever weird stuff in an event and sign it. Clients have to be extremely compliant with Postel's law - doing their absolute best to accept whatever weird data or behavior shows up and handle failure in any situation. Sure, it's annoying, but it's the cost of permissionless development. What it gets us is a completely open-ended protocol, in which anything can be built, and in which every solution is tested by the market.
-

@ 2ed3596e:98b4cc78
2025-03-06 18:21:53
Americans can now Dollar Cost Average (DCA) bitcoin directly from their bank straight to self-custody! This first-of-its-kind product is the safest way to buy bitcoin on a schedule. We call it **Recurring Buy**.
When you set up a Recurring Buy, we handle the entire buying process for you. The journey from dollars in your bank to bitcoin in self-custody is seamless and eliminates the risk of having money sit in a balance at a bitcoin exchange.
Bitcoin Well will automatically pull dollars from your bank, convert them to bitcoin and send your (real) bitcoin directly to your personal wallet. All Recurring Buy transactions have no added fees and we also pay your miner fee. We apply our standard 1.2% spread with no other fees. Your path to financial independence just got automated! 🗓️🤖
To set up your bitcoin Recurring Buy, go to the [Buy bitcoin page](https://app.bitcoinwell.com/usa/buy) and set your purchase size, wallet destination and purchase frequency. That’s it! Watch your self-custody bitcoin wallet fill up with bitcoin.
Below are detailed instructions on how to DCA bitcoin to your personal bitcoin wallet, automatically and on your schedule.
## **Set your transaction details**
Go to your [Buy bitcoin page](https://app.bitcoinwell.com/usa/buy) where you’ll see four options to set your bitcoin Recurring Buy: Amount, Source, Destination and Frequency.
You can set up to five unique Recurring Buys. This enables you to buy different amounts of bitcoin on different time frames concurrently, all while being sent directly to self-custody for *free* 🤯
<img src="https://blossom.primal.net/49ccbdf5992af9a1ccf3ab2e6dccea33d53f44eb8e31f7bb67abb650518b7d8d.png">
**Amount**: Select the amount of dollars you want to convert into bitcoin.
**Source**: The bank account your dollars are pulled from.
**Destination**: Your personal bitcoin wallet. Bitcoin Well automatically converts incoming dollars to bitcoin immediately when they are received. Your bitcoin will be [automatically batched and sent to you for free](https://bitcoinwell.com/blog/bitcoin-transactions-are-now-batched-in-the-usa-heres-what-that-means-for-you).
**Frequency**: Your purchase frequency is set to ‘One time’ by default so you can smash buy.
You can set your payment frequency to weekly, biweekly or monthly. When setting your frequency, choose the start date: you can select “Today” or navigate the calendar to choose a later start date.
<img src="https://blossom.primal.net/8023dccf220ca22bcebc7e10ec727311e33a3e19b9fc795ee6a90df7f98317fd.png">
Once you’ve set up your Recurring Buy, select “Review” and then “Confirm” to activate your bitcoin Recurring Buy. The best bitcoin Recurring Buy in the USA is here 🐐
## **Pausing, cancelling or changing your Recurring Buy**
You can easily pause or cancel any of your Recurring Buys from the Buy bitcoin page. All of your Recurring Buys will be shown on the right-hand side of your Buy bitcoin page on desktop, or at the bottom of your Buy bitcoin page on mobile.
To pause a Recurring Buy, click the pause button within the Recurring Buy preview in your Buy bitcoin page. Similarly, to cancel a Recurring Buy, click the ‘Cancel’ button within the Recurring Buy preview.
To replace an active Recurring Buy, simply cancel it as described above and then set up a new Recurring Buy with your desired amount and frequency. For example: to replace an existing biweekly $200 Recurring Buy with a weekly $100 Recurring Buy, cancel the existing biweekly $200 Recurring Buy, then set up a new weekly $100 Recurring Buy as described in **Set your transaction details**.
<img src="https://blossom.primal.net/da4919fc378950a29d7bbb05144323b64c0673fbfb846fb3647188f3e817db1d.png">
As always, your bitcoin is automatically purchased at the current market rate when your dollars arrive. Additionally, your bitcoin will be batched and sent out on the blockchain *for free* by default.
## **Earn sats from your bitcoin transactions**
Bitcoin Well is also the best place in the world to earn bitcoin. When you earn points in your Bitcoin Well account, you gain the opportunity to play the Bitcoin (Wishing) Well, where you win sats with every play.
The best part? We send bitcoin that you win straight to your personal wallet via the Lightning Network ⚡
Oh yeah, did we mention you can win 1,000,000 sats? If you're an active Bitcoin Well customer, there is a chance you've earned a pile of points. The more you use your account for buying, selling or spending bitcoin, the more points you’ll earn! Log in to your Bitcoin Well account and [check your point balance](https://app.bitcoinwell.com/reward-points).
## **About Bitcoin Well**
Bitcoin Well exists to enable independence. We do this by coupling the convenience of modern banking with the benefits of bitcoin. In other words, we make it easy to use bitcoin with self-custody.
-

@ 5b0183ab:a114563e
2025-03-06 17:38:10
### What Is Dark Nostr?
Dark Nostr can be described as the unintended adverse effects that arise from creating systems designed to resist censorship and promote freedom. These systems often rely on algorithms and micropayments to function, but their very design can inadvertently spawn phenomena that are unpredictable, uncontrollable, and sometimes downright weird.
Think of it as the *Yin* to the *Yang* of decentralized freedom—a necessary shadow cast by the bright ideals of liberation. While freedom protocols aim to empower individuals, they also open the door to consequences that aren’t always sunshine and rainbows.
---
### An Emergent Phenomenon
The fascinating thing about Dark Nostr is its emergent nature. This means it’s not something you can fully define or predict ahead of time; instead, it arises organically as decentralized systems are implemented and evolve. Like watching clouds form shapes in the sky (GM miners panhandling for sats, shower girls in the global feed), you can only observe it as it happens, and even then its contours remain elusive.
Emergent phenomena are tricky beasts. While simplicity is at the core of the protocol layer, darkness is born at the edges, where complexity thrives and individual components interact in ways that produce unpredictable outcomes. In this case, Dark Nostr encapsulates everything from algorithmic quirks and micropayment dynamics to unforeseen social consequences within decentralized ecosystems.
---
### Studying Dark Nostr: Memes as Cultural Artifacts
Here’s where things get anthropologically juicy: much of what we know about Dark Nostr comes not from academic papers or technical manuals but from memes. Yes, memes—the internet’s favorite medium for cultural commentary—have become a lens through which this phenomenon is being observed and studied.
Memes act as modern-day hieroglyphs, distilling complex ideas into bite-sized cultural artifacts that reflect collective sentiment. When communities encounter something as nebulous as Dark Nostr, they turn to humor and symbolism to make sense of it. In doing so, they create a shared narrative—a way to grapple with the shadow side of decentralization without losing sight of its promise.
---
### Why Does It Matter?
Dark Nostr isn’t just an abstract concept for philosophers or tech enthusiasts—it’s a reminder that every innovation comes with trade-offs. While decentralized systems aim to empower individuals by resisting censorship and central control, they also carry risks that must be acknowledged:
- Algorithmic Chaos: Algorithms designed for freedom might amplify harmful content or create echo chambers.
- Micropayment Pitfalls: Financial incentives could lead to exploitation or manipulation within these systems.
- Social Dynamics: The lack of centralized control might enable bad actors or foster unforeseen societal shifts.
Understanding Dark Nostr is crucial for anyone involved in building or using decentralized technologies. It challenges us to balance freedom with responsibility and reminds us that even the most well-intentioned systems have their shadow side.
---
### Conclusion: Embracing the Shadow
Dark Nostr is more than just a cautionary tale—it’s a fascinating glimpse into the complexities of human interaction with technology. As an emergent phenomenon, it invites us to remain vigilant and adaptive as we navigate the uncharted waters of decentralization.
By studying its manifestations through cultural artifacts like memes and engaging in thoughtful reflection, we can better prepare for both its opportunities and risks. After all, every great innovation needs its shadow—it’s what makes progress real, messy, and human.
So here we stand before Dark Nostr: may we study it wisely, meme it relentlessly, and learn from its lessons as we build the future together.
Stay vigilant, Nostr...