-
Bitcoin is more than money, more than an asset, and more than a store of value. Bitcoin is a Prime Mover: an enabler, an igniter of imaginations. It certainly fueled an idea in my mind. The idea integrates sensors, computational prowess, actuated machinery, power conversion, and electronic communications to form an autonomous, mechanical creature roaming forests and harvesting the most widespread and least energy-dense fuel source available. I call it the Forest Walker. It eats wood, and it mines Bitcoin.
I know what you're thinking. Why not just put Bitcoin mining rigs where they belong: in a hosted facility running on energy-dense fuels like natural gas, climate-controlled, with excellent data piping in and out? Why go to all the trouble of building a robot that digests wood into flammable gases, feeds those gases to an engine, and runs a generator powering Bitcoin miners? It's all about synergy.
Bitcoin mining enables the realization of multiple, seemingly unrelated, yet useful activities. Activities considered unprofitable if not for Bitcoin as the Prime Mover. This is much more than simply mining the greatest asset ever conceived by humankind. It’s about the power of synergy, in which Bitcoin plays only one of many roles. The synergy created by this system can stabilize forests' fire ecology while generating multiple income streams. That’s the realistic goal here, and it requires a brief history of American forest management before continuing.
# Smokey The Bear
In 1944, the Smokey Bear Wildfire Prevention Campaign began in the United States. “Only YOU can prevent forest fires” remains the refrain of the Ad Council’s longest-running campaign. The Ad Council is a U.S. non-profit set up by the American Association of Advertising Agencies and the Association of National Advertisers in 1942. It would seem that the U.S. Department of the Interior was concerned about pesky forest fires and wanted them to stop. So, alongside a national policy of extreme fire suppression, it enlisted the entire U.S. population via the Ad Council, and it worked. Forest fires were almost obliterated and everyone was happy, right? Wrong.
Smokey is a fantastically successful bear: forest fires became so few for so long that the fuel load (dead wood) in forests has become very heavy. So heavy that when a fire happens (and they always happen) it destroys everything in its path, because the more fuel there is, the hotter the fire becomes. Trees, bushes, shrubs, and all other plant life cannot escape destruction (not to mention homes and businesses). The soil microbiology doesn’t escape either; it is burned away even in deeper soils. To add insult to injury, hydrophobic waxy residues condense on the soil surface, forcing water to travel over the ground rather than through it, eroding forest soils. Good job, Smokey. Well done, Sir!
Most terrestrial ecologies are “fire ecologies”. Fire is a part of these systems’ fuel load and pest management. Before we pretended to “manage” millions of acres of forest, fires raged over the world, rarely damaging forests. The fuel load was always too light to generate fires hot enough to moonscape mountainsides. Fires simply burned off the minor amounts of fuel accumulated since the fire before. The lighter heat, smoke, and other combustion gasses suppressed pests, keeping them in check, and the smoke condensed into a plant growth accelerant called wood vinegar, not a waxy cap on the soil. These fires also cleared out weak undergrowth, cycled minerals, and thinned the forest canopy, allowing sunlight to penetrate to the forest floor. Without a fire’s heat, many pine tree species can’t sow their seed: heat is required to open the cones (the seed-bearing structures) of Spruce, Cypress, Sequoia, Jack Pine, Lodgepole Pine, and many more. Without fire, forests can’t have babies. The idea was to protect the forests, and it isn’t working.
So, in a world of fire, what does an ally look like and what does it do?
# Meet The Forest Walker
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/6389be6491e7b693e9f368ece88fcd145f07c068d2c1bbae4247b9b5ef439d32/files/1736817510192-YAKIHONNES3.png)
For the Forest Walker to work as a mobile, autonomous unit, it needs a solid platform that can carry several hundred pounds. It so happens this chassis already exists, but it was shelved.
Introducing the Legged Squad Support System (LS3). A joint project between Boston Dynamics, DARPA, and the United States Marine Corps, this quadrupedal robot is the size of a cow, can carry 400 pounds (180 kg) of equipment, negotiate challenging terrain, and operate for 24 hours before needing to refuel. Yes, it had an engine. Abandoned in 2015, the thing was too noisy for military deployment, and maintenance “under fire” is never a high-quality idea. However, we can rebuild it to act as a platform for the Forest Walker, albeit with serious alterations. It would need to be bigger, probably. Carry more weight? Definitely. Maybe replace structural metal with carbon fiber and redesign much of it as 3D-printable parts for more effective maintenance.
The original system has a top operational speed of 8 miles per hour. For our purposes, it only needs to move about as fast as a grazing ruminant. Without the hammering vibrations of galloping into battle, the shocks of exploding mortars, and drunken soldiers playing "Wrangler of Steel Machines", the time between failures should be much longer and the overall energy consumption much lower. The LS3 is a solid platform to build upon. Now it just needs to be pulled out of mothballs and completely refitted with outboard equipment.
# The Small Branch Chipper
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/6389be6491e7b693e9f368ece88fcd145f07c068d2c1bbae4247b9b5ef439d32/files/1736817558159-YAKIHONNES3.png)
When I say “forest fuel load” I mean the dead, carbon-containing litter on the forest floor. Duff (leaves), fine woody debris (small branches), and coarse woody debris (logs) are the fuel that feeds forest fires. Walk through any forest in the United States today and you will see quite a lot of these materials. Too much, as I have described. Fuel loads can reach 8 tons per acre in pine and hardwood forests and up to 16 tons per acre at active logging sites. That’s some big wood, and the more that collects, the greater the combustible danger to the forest. It also provides a technically unlimited fuel supply for the Forest Walker system.
The problem is that this detritus has to be chewed into pieces the system can easily ingest for the gasification process (we’ll get to that step in a minute). What we need is a wood chipper attached to the chassis (the LS3): its “mouth”.
A small wood chipper handling material up to 2.5 - 3.0 inches (6.3 - 7.6 cm) in diameter would eliminate a substantial amount of fuel. There is no reason for the Forest Walker to remove fallen trees; it wouldn’t have to in order to make a real difference. It need only identify appropriately sized branches and grab them. Once a branch is loaded into the chipper’s intake hopper for processing, the beast can immediately look for more “food”. This material is essentially the kindling that would otherwise help ignite larger logs. If it’s all consumed by the Forest Walker, then it’s not present to promote an aggravated conflagration.
I have glossed over an obvious question: how does the Forest Walker see and identify branches? LiDAR (Light Detection and Ranging) attached to the Forest Walker images the local area and feeds those data to onboard computers for processing. Maybe AI plays a role. Maybe simple machine learning can do the trick. One thing is for certain: being able to identify a stick and cause robotic appendages to pick it up is not impossible.
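To make that concrete, here is a minimal sketch of the kind of point-cloud test involved: keep the LiDAR returns near the forest floor, then ask whether the cluster is long and thin. It assumes numpy, and every threshold in it is an illustrative guess, not a field-tuned value.

```python
import numpy as np

def looks_like_a_stick(points, ground_z=0.0, max_height=0.5,
                       min_length=0.3, max_radius=0.04):
    """Crude branch test on an (N, 3) array of LiDAR returns (meters).
    A branch-like cluster sits near the ground, is long along one axis,
    and is thin in every other direction. Thresholds are illustrative."""
    low = points[(points[:, 2] > ground_z) &
                 (points[:, 2] < ground_z + max_height)]
    if len(low) < 10:
        return False
    centered = low - low.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]                          # dominant direction of the cluster
    along = centered @ axis               # position of each point along it
    off_axis = np.linalg.norm(centered - np.outer(along, axis), axis=1)
    length = along.max() - along.min()    # extent along the main axis
    radius = np.percentile(off_axis, 90)  # typical distance off the axis
    return length > min_length and radius < max_radius

# Synthetic test: a 1 m stick lying 5 cm above the ground, plus sensor noise.
t = np.linspace(0, 1, 200)
stick = np.column_stack([t,
                         0.02 * np.random.randn(200),
                         0.05 + 0.01 * np.random.randn(200)])
print(looks_like_a_stick(stick))  # True: long, thin, near the ground
```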
Great! We now have a quadrupedal robot autonomously identifying and “eating” dead branches and other light, combustible materials. Whilst strolling through the forest, depleting future fires of combustibles, the Forest Walker has already performed a major function of this system: making the forest safer. It’s time to convert this low-density fuel into a high-density fuel the Forest Walker can leverage. Enter the gasification process.
# The Gasifier
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/6389be6491e7b693e9f368ece88fcd145f07c068d2c1bbae4247b9b5ef439d32/files/1736817765349-YAKIHONNES3.png)
The gasifier is the heart of the entire system; it’s where low-density fuel becomes the high-density fuel that powers everything else. Biochar and wood vinegar are process wastes, and I’ll discuss why both are powerful soil amendments in a moment. But first, what’s gasification?
Reacting shredded carbonaceous material at high temperatures in a low- or no-oxygen environment converts the biomass into biochar, wood vinegar, heat, and synthesis gas (syngas). Syngas consists primarily of hydrogen, carbon monoxide, and methane, all of which are extremely useful gaseous fuels. Part of this gas is used to heat the incoming biomass and keep the reaction temperature constant; the rest is consumed by the internal combustion engine that drives the generator producing electrical power.
Critically, this gasification process is “continuous feed”. The Forest Walker must intake biomass from the chipper, process it to fuel, and dump the waste (CO2, heat, biochar, and wood vinegar) continuously. It cannot stop. Everything about this system depends upon this continual grazing, digestion, and excretion of wastes, just as with a ruminant. And, like a ruminant’s, all of its waste products enhance the local environment.
When I first heard of gasification, I didn’t believe it was real. Running an electric generator from burning wood seemed more akin to “conspiracy fantasy” than science. Not only is gasification real, it’s ancient technology. A man named Dean Clayton first started experiments on gasification in 1699, and in 1901 gasification was used to power a vehicle. By the end of World War II, there were 500,000 syngas-powered vehicles in Germany alone because of wartime fossil fuel rationing. The global gasification market was $480 billion in 2022 and is projected to reach as much as $700 billion by 2030 (Vantage Market Research). Gasification technology is the best choice to power the Forest Walker because it’s self-contained and we want its waste products.
# Biochar: The Waste
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/6389be6491e7b693e9f368ece88fcd145f07c068d2c1bbae4247b9b5ef439d32/files/1736817802326-YAKIHONNES3.png)
Biochar (AKA agricultural charcoal) is fairly simple: it’s almost pure, solid carbon that resembles charcoal. Its porous nature packs large surface areas into small, 3-dimensional nuggets. Devoid of most other chemistry, like hydrocarbons (methane) and ash (minerals), biochar is extremely lightweight. Do not confuse it with the charcoal you buy for your grill: biochar makes poor grilling charcoal because, lacking charcoal’s multitude of flammable components, it burns too rapidly. Biochar has several other good use cases. Water filtration, water retention, nutrient retention, habitat for microscopic soil organisms, and carbon sequestration are the main ones that concern us here.
Carbon has an amazing ability to adsorb manifold chemistries; adsorption means substances stick to and accumulate on the surface of an object. Water, nutrients, and pollutants bind tightly to carbon in this format. So, biochar makes a respectable filter and acts as a “battery” of water and nutrients in soils. Biochar adsorbs and holds on to seven times its weight in water. Soil containing biochar is more drought-resilient than soil without it. Adsorbed nutrients, tightly sequestered alongside water, are released only as plants need them. Plants must excrete protons (H+) from their roots to disgorge water or positively charged nutrients from the biochar's surface; it's an active process.
Biochar’s surface area (where adsorption happens) can be 500 square meters per gram or more. That is roughly 15% larger than an official NBA basketball court (94 by 50 feet, about 437 square meters) for every gram of biochar. Biochar’s abundant surface area builds protective habitats for soil microbes like fungi and bacteria, many of which are critical for the health and productivity of the soil itself.
The “carbon sequestration” component of biochar comes into play where “carbon credits” are concerned. There is a financial market for carbon; not leveraging that market for revenue is foolish. I am climate agnostic. All I care about is that once solid carbon is in the soil, it will stay there for thousands of years, imparting drought resiliency, fertility, and nutrient buffering and release for that time span. I simply want as much solid carbon in the soil as possible because of the undeniably positive effects it has, regardless of any climatic considerations.
# Wood Vinegar: More Waste
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/6389be6491e7b693e9f368ece88fcd145f07c068d2c1bbae4247b9b5ef439d32/files/1736817826910-YAKIHONNES3.png)
Another by-product of the gasification process is wood vinegar (pyroligneous acid). If you have ever seen Liquid Smoke in the grocery store, then you have seen wood vinegar. Principally composed of acetic acid, acetone, and methanol, wood vinegar also contains ~200 other organic compounds. It would seem intuitive that condensed, liquefied wood smoke would be at least bad for the health of all living things, if not downright carcinogenic. The counterintuitive answer wins the day, however. Wood vinegar has been used by humans for a very long time to promote digestion, bowel, and liver health; combat diarrhea and vomiting; calm peptic ulcers and regulate cholesterol levels; and a host of other benefits.
For centuries, humans have annually burned off hundreds of thousands of square miles of pasture, grassland, forest, and every other conceivable terrestrial ecosystem. Why? After every burn, one thing becomes obvious: the almost supernatural growth these ecosystems exhibit afterward. How? Wood vinegar is a component of this growth. Even in open burns, smoke condenses and infiltrates the soil. That is when wood vinegar shows its quality.
This stuff beefs up not only general plant growth but seed germination as well, and it possesses many other qualities beneficial to plants. It acts as a pesticide and fungicide, promotes beneficial soil microorganisms, enhances nutrient uptake, and imparts disease resistance. I am barely touching a long list of attributes here, but you want wood vinegar in your soil (alongside biochar, which adsorbs wood vinegar as well).
# The Internal Combustion Engine
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/6389be6491e7b693e9f368ece88fcd145f07c068d2c1bbae4247b9b5ef439d32/files/1736817852201-YAKIHONNES3.png)
Conversion of grazed forage to chemical, then mechanical, and then electrical energy completes the cycle. The ICE (Internal Combustion Engine) converts the gaseous fuel output from the gasifier into mechanical energy, heat, water vapor, and CO2. It’s the mechanical energy of a rotating drive shaft that we want. That rotation drives the electric generator, which is the heartbeat we need to bring this monster to life. Luckily for us, combined internal combustion engine and generator packages are ubiquitous, delivering a defined energy output given a constant fuel input. It’s the simplest part of the system.
The obvious question here is whether the syngas provided by the gasification process will supply enough energy to run the entire system. While I have no doubt the energy produced will run the Forest Walker's main systems, the question is really about the electrons left over. Will they be enough to run the Bitcoin mining aspect of the system? Everything is a budget.
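To get a feel for that budget, here is a back-of-envelope sketch. Every figure in it (biomass intake rate, syngas yield, genset efficiency, the walker's own load, the miner's draw) is an assumption picked for illustration, not a measured spec.

```python
# Hypothetical Forest Walker power budget. All numbers are assumptions.
biomass_kg_per_hr = 40           # chipped wood intake
syngas_mj_per_kg_wood = 5.0      # usable syngas energy per kg of wood
engine_genset_efficiency = 0.20  # syngas heat -> electricity, small genset
walker_load_w = 3_000            # locomotion, chipper, pumps, computers
miner_w = 3_250                  # wall draw of one modern mining ASIC

syngas_kw = biomass_kg_per_hr * syngas_mj_per_kg_wood / 3.6  # MJ/h -> kW
electric_w = syngas_kw * engine_genset_efficiency * 1_000
surplus_w = electric_w - walker_load_w

print(f"electric output: {electric_w:.0f} W")
print(f"surplus for mining: {surplus_w:.0f} W "
      f"(~{surplus_w / miner_w:.1f} miners)")
```

Under those made-up numbers, the walker nets roughly 8 kW of surplus, enough for two or three miners; with less optimistic assumptions the surplus shrinks toward zero, which is exactly why the budget question matters.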
# CO2 Production For Growth
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/6389be6491e7b693e9f368ece88fcd145f07c068d2c1bbae4247b9b5ef439d32/files/1736817873011-YAKIHONNES3.png)
Plants are lollipops. No matter if it’s a tree, a bush, or a shrubbery, the entire thing is mostly sugar in various formats, chiefly long-chain polymers like cellulose (plus the related structural material lignin). Plants need three things to make sugar: CO2, H2O, and light. In a forest, where tree densities can be quite high, CO2 availability becomes a limiting growth factor. It’s in the forest’s interest to have more CO2 available for sugar formation, providing the organisms with food and structure.
An odd thing about tree leaves: the openings that admit gases like the ever-sought CO2 are mostly on the underside of the leaf (these are called stomata). Not many stomata are topside. This suggests that trees and bushes have evolved to find gases like CO2 from below, not above, and it further suggests CO2 might be in higher concentrations nearer the soil.
The soil life (bacteria, fungi, etc.) is constantly producing enormous amounts of CO2, and it would stay in the soil forever (eventually killing the very soil life that produces it) if not for tidal forces. Water is everywhere, and whether in pools, lakes, oceans, or distributed through “moist” soils, water moves toward the moon. The water in the soil, and in the water tables below it, rises toward the surface every day. When the water rises, it expels the accumulated gasses in the soil into the atmosphere, and those gasses are mostly CO2. It’s a good bet this is part of why leaves developed high populations of stomata on their undersides. As the water relaxes (the tide goes out), it sucks oxygenated air back into the soil to continue soil life respiration. The soil “breathes”, albeit slowly.
The gasses produced by the Forest Walker’s internal combustion engine consist primarily of CO2 and H2O. Combusting sugars produces the same gasses that are needed to construct sugars, because the universe is funny like that. The Forest Walker is constantly laying down these critical construction elements right where they are needed: close to the ground, to be gobbled up by the trees.
# The Branch Drones
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/6389be6491e7b693e9f368ece88fcd145f07c068d2c1bbae4247b9b5ef439d32/files/1736817903556-YAKIHONNES3.png)
During the last ice age, giant mammals populated North America, in forests and elsewhere. Mastodons, woolly mammoths, rhinos, short-faced bears, steppe bison, caribou, musk ox, giant beavers, camels, gigantic ground-dwelling sloths, glyptodons, and dire wolves were everywhere. Many were ten to fifteen feet tall. As they crashed through forests, they would effectively cleave off dead side-branches of trees, halting the migration of ground-based fire into the tree crowns ("laddering"), which is a death knell for a forest.
These animals are all extinct now, and forests no longer have any manner of pruning service. But if we build drones fitted with cutting implements like saws and loppers, optical cameras, and AI trained to discern dead branches from living ones, these drones could effectively take over that pruning service: identifying dead branches, cutting them, and dropping them to the forest floor. The dropped branches simply get collected by the Forest Walker as part of its continual mission.
The drones dock on the back of the Forest Walker to recharge their batteries when low. The whole scene would look like a grazing cow with some flies bothering it. This activity breaks the link between a relatively cool ground-based fire and the tree crowns, and it is a vital element of forest fire control.
# The Bitcoin Miner
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/6389be6491e7b693e9f368ece88fcd145f07c068d2c1bbae4247b9b5ef439d32/files/1736817919076-YAKIHONNES3.png)
Mining is one of four monetary incentive models that make this system a possibility for development. The other three are easement contracts for fuel load management (with the U.S. Department of the Interior, townships, counties, and electrical utility companies), global carbon credits trading, and data set sales. All of the above depends on obvious questions getting answered. I will list some here, but this is not an engineering document and is not the place for spreadsheets. How much Bitcoin one Forest Walker can mine depends on everything else. What amount of biomass can we process? Will that biomass flow enough syngas to keep the lights on? Can the chassis support enough mining ASICs and supporting infrastructure? What does that weigh, and will it affect field performance? How much power can the AC generator produce?
Other, more philosophical questions persist. Even if a single Forest Walker can mine only scant amounts of BTC per day, that pales next to how much fuel material it can process into biochar. We are talking about millions upon millions of forested acres in need of fuel load management. What can a single Forest Walker do? I am not thinking in singular terms. The Forest Walker must operate as a fleet. What could 50 do? 500?
What is it worth providing a service to the world by managing forest fuel loads? Providing proof of work to the global monetary system? Seeding soil with drought and nutrient resilience by the excretion, over time, of carbon by the ton? What did the last forest fire cost?
# The Mesh Network
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/6389be6491e7b693e9f368ece88fcd145f07c068d2c1bbae4247b9b5ef439d32/files/1736817962167-YAKIHONNES3.png)
What could be better than one bitcoin mining, carbon sequestering, forest fire squelching, soil amending behemoth? Thousands of them. But then they would need to be able to talk to each other to coordinate position, data handling, etc. Fitted with mesh networking devices, like goTenna or Meshtastic LoRa equipment, each Forest Walker can communicate with the others.
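As a sketch of what that might look like in software, here is a toy telemetry report broadcast over a Meshtastic radio. It assumes the `meshtastic` Python package and its documented pypubsub receive pattern; the payload format and walker ID are made up for illustration, and a real deployment would define its own wire protocol.

```python
# Toy mining-telemetry broadcast over a Meshtastic LoRa mesh (assumed API;
# check the current meshtastic-python docs before relying on it).
import json
import meshtastic.serial_interface
from pubsub import pub

def on_receive(packet, interface):
    # Neighboring walkers' reports arrive here; the mesh itself handles
    # the multi-hop relaying toward the uplink drone.
    text = packet.get("decoded", {}).get("text")
    if text:
        print("mesh packet:", text)

pub.subscribe(on_receive, "meshtastic.receive.text")

iface = meshtastic.serial_interface.SerialInterface()  # radio on USB serial
report = json.dumps({"walker": "FW-07", "shares": 412, "uptime_h": 18})
iface.sendText(report)  # broadcast; neighbors relay it hop by hop
```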
Now we have an interconnected fleet of Forest Walkers relaying data to each other and, more importantly, aggregating all of it at the last link in the chain for uplink. Well, at least the Bitcoin mining data. Since block data is lightweight, transmitting these data via mesh networking in fairly close-quartered environs is more than doable. So, how does data reach the Bitcoin network? And how do the Forest Walkers get the previous block data necessary to mine?
# Back To The Chain
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/6389be6491e7b693e9f368ece88fcd145f07c068d2c1bbae4247b9b5ef439d32/files/1736817983991-YAKIHONNES3.png)
Getting Bitcoin block data to and from the network is the last puzzle piece. The standing presumption here is that wherever a Forest Walker fleet is operating, it is NOT within cell tower range. We further presume that the nearest Walmart Wi-Fi is hours away. Enter the Blockstream Satellite or something like it.
A separate, ground-based drone will have two jobs: to stay as close to the nearest Forest Walker as it can, and to provide an antenna for either terrestrial or orbital data uplink. Bitcoin-centric data is transmitted to this "uplink drone" via the mesh-networked transmitters and then sent on to the uplink; the whole flow runs in the opposite direction as well. Many to one, and one to many.
We cannot transmit data to the Blockstream satellite; it will be up to Blockstream and companies like it to provide uplink capabilities in the future, and I don't doubt they will. Starlink, you say? What’s stopping that company from filtering out block data? Nothing, because it’s Starlink’s system, and they could decide to censor these data. It seems we may have a problem sending and receiving Bitcoin data in back-country environs.
But, then again, this system’s utility in staunching the fuel load that creates forest fires is extremely valuable around forested communities, and many of those have fiber, Wi-Fi, and cell towers. These communities could be a welcoming ground zero for first deployments of the Forest Walker system by home and business owners seeking fire suppression. In the best way, Bitcoin subsidizes the safety of the communities.
# Sensor Packages
### LiDAR
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/6389be6491e7b693e9f368ece88fcd145f07c068d2c1bbae4247b9b5ef439d32/files/1736818012307-YAKIHONNES3.png)
The benefit of having a Forest Walker fleet strolling through the forest is the never-ending opportunity for data gathering. A plethora of deployable sensors gathering hyper-accurate data on everything from temperature to topography is yet another revenue generator. Data is valuable, and the Forest Walker could generate data sales to various government entities and private concerns.
LiDAR (Light Detection and Ranging) can map topography, perform biomass assessment, support comparative soil erosion analysis, and more. It so happens that the Forest Walker’s ability to “see” and navigate its surroundings is LiDAR-driven, and since the sensor is already in use, we get double duty by harvesting that data for later use. By using a laser to send out light pulses and measuring the time it takes for the reflections to return, very detailed data sets incrementally build up. Eventually, as enough data about a certain area accumulates, the data becomes useful and valuable.
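The ranging arithmetic itself is one line; here is a toy version with an illustrative echo time:

```python
C = 299_792_458  # speed of light in m/s

def lidar_range_m(round_trip_s: float) -> float:
    # The pulse travels out and back, so halve the total path.
    return C * round_trip_s / 2

print(lidar_range_m(200e-9))  # a 200 ns echo puts the target ~30 m away
```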
Forestry concerns, both private and public, often use LiDAR to build 3D models of tree stands and assess the amount of harvestable lumber in entire sections of forest. Consulting companies offering these services charge anywhere from several hundred to several thousand dollars per square kilometer. A Forest Walker generating such assessments on the fly, while performing its other functions, is a multi-disciplinary approach to revenue generation.
### pH, Soil Moisture, and Cation Exchange Sensing
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/6389be6491e7b693e9f368ece88fcd145f07c068d2c1bbae4247b9b5ef439d32/files/1736818037057-YAKIHONNES3.png)
The Forest Walker is quadrupedal, so there are four contact points with the soil. Why not get a pH data point for every step it takes? We can also gather soil moisture and cation exchange capacity data at unheard-of densities, because sampling occurs on the fly while the system performs its other duties. No one is going to build a machine just to pH-test vast tracts of forest soils, but that doesn’t make the data collected from such an endeavor valueless. Since the Forest Walker serves many functions at once, a multitude of data products can add to the return-on-investment component.
### Weather Data
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/6389be6491e7b693e9f368ece88fcd145f07c068d2c1bbae4247b9b5ef439d32/files/1736818057965-YAKIHONNES3.png)
Temperature, humidity, pressure, and even data like evapotranspiration, gathered at high densities on broad-acre scales, have untold value, and because the sensors are lightweight and don’t require large power budgets, they come along for the ride at little cost. But, just like the old mantra, “gas, grass, or ass, nobody rides for free”, these sensors pay their way: they provide potential revenue benefits just by being present.
I’ve touched on just a few data genres here. In fact, the question for universities, governmental bodies, and other institutions becomes, “How much will you pay us to attach your sensor payload to the Forest Walker?”
# Noise Suppression
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/6389be6491e7b693e9f368ece88fcd145f07c068d2c1bbae4247b9b5ef439d32/files/1736818076725-YAKIHONNES3.png)
Only you can prevent Metallica filling the surrounds with 120 dB of sound. Easy enough: just turn the car stereo off. But what of a fleet of 50 Forest Walkers operating in the backcountry or near a township? 500? 5000? Each one has a wood chipper, an internal combustion engine, hydraulic pumps, actuators, and more cooling fans than you can shake a stick at. It’s a walking, screaming, fire-breathing dragon operating continuously, day and night, three hundred sixty-five days a year. The sound will negatively affect all living things, and that impacts behaviors. Serious engineering consideration and prowess must deliver a silencing blow to the major issue of noise.
It would be foolish to think that a fleet of Forest Walkers could be silent, but if noise suppression is not a major design consideration, the entire idea is dead on arrival. Townships would not allow them to operate even if they solved the problem of widespread fuel load, and neither would governmental entities, and rightly so. Nothing, not man nor beast, would want to be subjected to an eternal, infernal scream, even one that ends within days as the fleet moves on after consuming what it can. Noise and heat are the only real pollutants of this system; taking noise seriously from the beginning is paramount.
# Fire Safety
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/6389be6491e7b693e9f368ece88fcd145f07c068d2c1bbae4247b9b5ef439d32/files/1736818111311-YAKIHONNES3.png)
A “fire-breathing dragon” is not the worst description of the Forest Walker. It eats wood, combusts it at very high temperatures, and excretes carbon, and it does so in an extremely flammable environment. A bad mix for one Forest Walker; worse for many. One must take extreme pains to ensure that a Forest Walker could fall over, walk through tinder-dry brush, or get pounded into the ground by a meteorite from Krypton without destroying epic swaths of trees and baby deer. I envision an ultimate test of a prototype to include dousing it in grain alcohol while it’s wrapped up in toilet paper like a pledge at a fraternity party. If it runs for 72 hours and doesn’t set everything on fire, then maybe outside entities won’t be fearful of something that walks around forests with a constant fire in its belly.
# The Wrap
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/6389be6491e7b693e9f368ece88fcd145f07c068d2c1bbae4247b9b5ef439d32/files/1736818144087-YAKIHONNES3.png)
How we think about what can be done with and adjacent to Bitcoin is at least as important as Bitcoin’s economic standing itself. For those who will tell me that this entire idea is without merit, I say, “OK, fine. You can come up with something, too.” What can we plug Bitcoin into that, like a battery, makes something that does not work, work? That’s the lesson I take from this entire exercise. No one was ever going to hire teams of humans to go out and "clean the forest". There's no money in that. The data collection and sales from such an endeavor might push revenues over the break-even point, but investment demands Alpha in this day and age. Plug Bitcoin into an almost-viable system, though, and voilà! We tip the scales and achieve lift-off.
Let’s face it, we haven’t scratched the surface of Bitcoin’s forcing function on our minds. Not because it’s Bitcoin, but because of what that invention means. The question that pushes me to approach things this way is, “What can we create where one system’s waste is another system’s feedstock?” The Forest Walker system’s only real waste is the conversion of low-entropy energy (wood and syngas) into high-entropy energy (heat and noise). All other output is beneficial to humanity.
Bitcoin, I believe, is the first product of a new mode of human imagination. An imagination newly forged over the past few millennia of being lied to, stolen from, distracted, and otherwise misallocated toward a black hole of the nonsensical. We are waking up.
What I have presented is not science fiction. Everything I have described here is well within the realm of possibility. The question is one of viability, at least in terms of the detritus of the old world we find ourselves departing from. This system would take a non-trivial amount of time and resources to develop. I think the system would garner extensive long-term contracts from those who have the most to lose from wildfires, the most to gain from hyper-accurate data sets, and, of course, from securing the most precious asset in the world. Many may not see it that way, for they seek Alpha and are therefore blind to other possibilities. Others will see only the possibilities: of thinking in a new way, of looking at things differently, and of dreaming of what comes next.
-
My blog posts and reading material have both been on a decidedly economics-heavy slant recently. The topic today, incentives, squarely falls into the category of economics. However, when I say economics, I’m not talking about “analyzing supply and demand curves.” I’m talking about the true basis of economics: understanding how human beings make decisions in a world of scarcity.
A fair definition of incentive is “a reward or punishment that motivates behavior to achieve a desired outcome.” When most people think about economic incentives, they’re thinking of money. If I offer my son $5 if he washes the dishes, I’m incentivizing certain behavior. We can’t guarantee that he’ll do what I want him to do, but we can agree that the incentive structure itself will guide and ultimately determine what outcome will occur.
The great thing about monetary incentives is how easy they are to talk about and compare. “Would I rather make $5 washing the dishes or $10 cleaning the gutters?” But much of the world is incentivized in non-monetary ways too. For example, using the “punishment” half of the definition above, I might threaten my son with losing Nintendo Switch access if he doesn’t wash the dishes. No money is involved, but I’m still incentivizing behavior.
And there are plenty of incentives beyond our direct control! My son is *also* incentivized to not wash dishes because it’s boring, or because he has some friends over that he wants to hang out with, or dozens of other things. Ultimately, the conflicting array of incentive structures placed on him will determine what actions he chooses to take.
## Why incentives matter
A phrase I see often in discussions—whether they are political, parenting, economic, or business—is “if they could **just** do…” Each time I see that phrase, I cringe a bit internally. Usually, the underlying assumption of the statement is “if people would behave contrary to their incentivized behavior then things would be better.” For example:
* If my kids would just go to bed when I tell them, they wouldn’t be so cranky in the morning.
* If people would just use the recycling bin, we wouldn’t have such a landfill problem.
* If people would just stop being lazy, our team would deliver our project on time.
In all these cases, the speakers are seemingly flummoxed as to why the people in question don’t behave more rationally. The problem is: each group is behaving perfectly rationally.
* The kids have a high time preference, and care more about the joy of staying up now than the crankiness in the morning. Plus, they don’t really suffer the consequences of morning crankiness, their parents do.
* No individual suffers much from their individual contribution to a landfill. If they stopped growing the size of the landfill, it would make an insignificant difference versus the amount of effort they need to engage in to properly recycle.
* If a team doesn’t properly account for the productivity of individuals on a project, each individual receives less harm from their own inaction. Sure, the project may be delayed, company revenue may be down, and they may even risk losing their job when the company goes out of business. But their laziness individually won’t determine the entirety of that outcome. By contrast, they greatly benefit from being lazy by getting to relax at work, go on social media, read a book, or do whatever else they do when they’re supposed to be working.
![Free Candy!](https://www.snoyman.com/img/incentives/free-candy.png)
My point here is that, as long as you ignore the reality of how incentives drive human behavior, you’ll fail at getting the outcomes you want.
If everything I wrote up until now made perfect sense, you understand the premise of this blog post. The rest of it will focus on a bunch of real-world examples to hammer home the point, and demonstrate how versatile this mental model is.
## Running a company
Let’s say I run my own company, with myself as the only employee. My personal revenue will be 100% determined by my own actions. If I decide to take Tuesday afternoon off and go fishing, I’ve chosen to lose that afternoon’s revenue. Implicitly, I’ve decided that the enjoyment I get from an afternoon of fishing is greater than the potential revenue. You may think I’m being lazy, but it’s my decision to make. In this situation, the incentive (money) is perfectly aligned with my actions.
Compare this to a typical company/employee relationship. I might have a bank of Paid Time Off (PTO) days, in which case once again my incentives are relatively aligned. I know that I can take off 15 days throughout the year, and I’ve chosen to use half a day for the fishing trip. All is still good.
What about unlimited time off? Suddenly incentives are starting to misalign. I don’t directly pay a price for not showing up to work on Tuesday. Or Wednesday as well, for that matter. I might ultimately be fired for not doing my job, but that will take longer to work its way through the system than simply not making any money for the day taken off.
Compensation overall falls into this misaligned incentive structure. Let’s forget about taking time off. Instead, I work full time on a software project I’m assigned. But instead of using the normal toolchain we’re all used to at work, I play around with a new programming language. I get the fun and joy of playing with new technology, and potentially get to pad my resume a bit when I’m ready to look for a new job. But my current company gets slower results, less productivity, and is forced to subsidize my extracurricular learning.
When a CEO has a bonus structure based on profitability, he’ll do everything he can to make the company profitable. This might include things that actually benefit the company, like improving product quality, reducing internal red tape, or finding cheaper vendors. But it might also include destructive practices, like slashing the R&D budget to show massive profits this year, in exchange for a catastrophe next year when the next version of the product fails to ship.
![Golden Parachute CEO](https://www.snoyman.com/img/incentives/golden-ceo.png)
Or my favorite example. My parents owned a business when I was growing up. They had a back office where they ran operations like accounting. All of the furniture was old couches from our house. After all, any money they spent on furniture came right out of their paychecks! But in a large corporate environment, each department is generally given a budget for office furniture, a budget which doesn’t roll over year-to-year. The result? Executives make sure to spend the entire budget each year, often buying furniture far more expensive than they would choose if it was their own money.
There are plenty of details you can quibble with above. It’s in a company’s best interest to give people downtime so that they can come back recharged. Having good ergonomic furniture can in fact increase productivity in excess of the money spent on it. But overall, the picture is pretty clear: in large corporate structures, you’re guaranteed to have mismatches between the company’s goals and the incentive structure placed on individuals.
Using our model from above, we can lament how lazy, greedy, and unethical the employees are for doing what they’re incentivized to do instead of what’s right. But that’s simply ignoring the reality of human nature.
# Moral hazard
Moral hazard is a situation where one party is incentivized to take on more risk because another party will bear the consequences. Suppose I tell my son when he turns 21 (or whatever legal gambling age is) that I’ll cover all his losses for a day at the casino, but he gets to keep all the winnings.
What do you think he’s going to do? The most logical course of action is to place the largest possible bets for as long as possible, asking me to cover each time he loses, and taking money off the table and into his bank account each time he wins.
![Heads I win, tails you lose](https://www.snoyman.com/img/incentives/headstails.png)
But let’s look at a slightly more nuanced example. I go to a bathroom in the mall. As I’m leaving, I wash my hands. It will take me an extra 1 second to turn off the water when I’m done washing. That’s a trivial price to pay. If I *don’t* turn off the water, the mall will have to pay for many liters of wasted water, benefiting no one. But I won’t suffer any consequences at all.
This is also a moral hazard, but most people will still turn off the water. Why? Usually due to some combination of other reasons such as:
1. We’re so habituated to turning off the water that we don’t even consider *not* turning it off. Put differently, the mental effort needed to not turn off the water is more expensive than the 1 second of time to turn it off.
2. Many of us have been brought up with a deep guilt about wasting resources like water. We have an internal incentive structure that makes the 1 second to turn off the water much less costly than the mental anguish of the waste we created.
3. We’re afraid we’ll be caught by someone else and face some kind of social repercussions. (Or maybe more than social. Are you sure there isn’t a law against leaving the water tap on?)
Even with all that in place, you may notice that many public bathrooms use automatic water dispensers. Sure, there’s a sanitation reason for that, but it’s also to avoid this moral hazard.
A common denominator in both of these is that the person taking the action that causes the liability (either the gambling or leaving the water on) is not the person who bears the responsibility for that liability (the father or the mall owner). Generally speaking, the closer together the person making the decision and the person incurring the liability are, the smaller the moral hazard.
It’s easy to demonstrate that by extending the casino example a bit. I said it was the father who was covering the losses of the gambler. Many children (though not all) would want to avoid totally bankrupting their parents, or at least financially hurting them. Instead, imagine that someone from the IRS shows up at your door, hands you a credit card, and tells you you can use it at a casino all day, taking home all the chips you want. The money is coming from the government. How many people would put any restriction on how much they spend?
And since we’re talking about the government already…
## Government moral hazards
As I was preparing to write this blog post, the California wildfires hit. The discussions around those wildfires gave a *huge* number of examples of moral hazards. I decided to cherry-pick a few for this post.
The first and most obvious one: California is asking for disaster relief funds from the federal government. That sounds wonderful. These fires were a natural disaster, so why shouldn’t the federal government pitch in and help take care of people?
The problem is, once again, a moral hazard. In the case of the wildfires, California and Los Angeles both had ample actions they could have taken to mitigate the destruction of this fire: better forest management, larger fire department, keeping the water reservoirs filled, and probably much more that hasn’t come to light yet.
If the federal government bails out California, it will send a clear message for the future: your mistakes will be fixed by others. You know what kind of behavior that incentivizes? More risky behavior! Why spend state funds on forest management and extra firefighters—activities that don’t win politicians many votes—when you could instead spend them on a football stadium, higher unemployment payments, or anything else, and then let the feds cover the cost of screw-ups?
You may notice that this is virtually identical to the 2008 “too big to fail” bail-outs. Wall Street took insanely risky behavior, reaped huge profits for years, and when they eventually got caught with their pants down, the rest of us bailed them out. “Privatizing profits, socializing losses.”
![Too big to fail](https://www.snoyman.com/img/incentives/toobig.png)
And here’s the absolute best part of this: I can’t even truly blame either California *or* Wall Street. (I mean, I *do* blame them, I think their behavior is reprehensible, but you’ll see what I mean.) In a world where the rules of the game implicitly include the bail-out mentality, you would be harming your citizens/shareholders/investors if you didn’t engage in that risky behavior. Since everyone is on the hook for those socialized losses, your best bet is to maximize those privatized profits.
There’s a lot more to government and moral hazard, but I think these two cases demonstrate the crux pretty solidly. But let’s leave moral hazard behind for a bit and get to general incentivization discussions.
# Non-monetary competition
At least 50% of the economics knowledge I have comes from the very first econ course I took in college. That professor was amazing, and had some very colorful stories. I can’t vouch for the veracity of the two I’m about to share, but they definitely drive the point home.
In the 1970s, the US had an oil shortage. To “fix” this problem, they instituted price caps on gasoline, which of course resulted in insufficient gasoline. To “fix” this problem, they instituted policies where, depending on your license plate number, you could only fill up gas on certain days of the week. (Irrelevant detail for our point here, but this just resulted in people filling up their tanks more often, no reduction in gas usage.)
Anyway, my professor’s wife had a friend. My professor described in *great* detail how attractive this woman was. I’ll skip those details here since this is a PG-rated blog. In any event, she never had any trouble filling up her gas tank any day of the week. She would drive up, be told she couldn’t fill up gas today, bat her eyes at the attendant, explain how helpless she was, and was always allowed to fill up gas.
This is a demonstration of *non-monetary compensation*. Most of the time in a free market, capitalist economy, people are compensated through money. When price caps come into play, there’s a limit to how much monetary compensation someone can receive. And in that case, people find other ways of competing. Like this woman’s case: through using flirtatious behavior to compensate the gas station workers to let her cheat the rules.
The other example was much more insidious. Santa Monica had a problem: it was predominantly wealthy and white. They wanted to fix this problem, and decided to put in place rent controls. After some time, they discovered that Santa Monica had become *wealthier and whiter*, the exact opposite of their desired outcome. Why would that happen?
Someone investigated, and ended up interviewing a landlady who demonstrated the reason. She was an older white woman, and admittedly racist. Prior to the rent controls, she would list her apartments in the newspaper and would be legally obligated to rent to anyone who could afford it. Once rent controls were in place, she took a different tack. She knew that she would only get a certain amount for the apartment, and that the demand for apartments was higher than the supply. That meant she could be picky.
She ended up finding tenants through friends-of-friends. Since it wasn’t an official advertisement, she wasn’t legally required to rent it out if someone could afford to pay. Instead, she got to interview people individually and then make them an offer. Normally, that would have resulted in receiving a lower rental price, but not under rent controls.
So who did she choose? A young, unmarried, wealthy, white woman. It made perfect sense. Women were less intimidating and more likely to maintain the apartment better. Wealthy people, she determined, would be better tenants. (I have no idea if this is true in practice or not, I’m not a landlord myself.) Unmarried, because no kids running around meant less damage to the property. And, of course, white. Because she was racist, and her incentive structure made her prefer whites.
You can deride her for being racist, I won’t disagree with you. But it’s simply the reality. Under the non-rent-control scenario, her profit motive for money outweighed her racism motive. But under rent control, the monetary competition was removed, and she was free to play into her racist tendencies without facing any negative consequences.
## Bureaucracy
These were the two examples I remember for that course. But non-monetary compensation pops up in many more places. One highly pertinent example is bureaucracies. Imagine you have a government office, or a large corporation’s acquisition department, or the team that apportions grants at a university. In all these cases, you have a group of people making decisions about handing out money that has no monetary impact on them. If they give to the best qualified recipients, they receive no raises. If they spend the money recklessly on frivolous projects, they face no consequences.
Under such an incentivization scheme, there’s little to encourage the bureaucrats to make intelligent funding decisions. Instead, they’ll be incentivized to spend the money where they recognize non-monetary benefits. This is why it’s so common to hear about expensive meals, gift bags at conferences, and even more inappropriate ways of trying to curry favor with those that hold the purse strings.
Compare that ever so briefly with the purchases made by a small mom-and-pop store like my parents owned. Could my dad take a bribe to buy from a vendor who’s ripping him off? Absolutely he could! But he’d lose more on the deal than he’d make on the bribe, since he’s directly incentivized by the deal itself. It would make much more sense for him to go with the better vendor, save $5,000 on the deal, and then treat himself to a lavish $400 meal to celebrate.
# Government incentivized behavior
This post is getting longer than I’d intended, so I’ll finish off with this section and keep it brief. Beyond all the methods mentioned above, government has another mechanism for modifying behavior: directly changing incentives via legislation, regulation, and monetary policy. Let’s see some examples:
* Artificial modification of interest rates encourages people to take on more debt than they would in a free capital market, leading to [malinvestment](https://en.wikipedia.org/wiki/Malinvestment) and a consumer debt crisis, and causing the boom-bust cycle we all painfully experience.
* Going along with that, giving tax breaks on interest payments further artificially incentivizes people to take on debt that they wouldn’t otherwise.
* During COVID-19, at some points unemployment benefits were greater than minimum wage, incentivizing people to rather stay home and not work than get a job, leading to reduced overall productivity in the economy and more printed dollars for benefits. In other words, it was a perfect recipe for inflation.
* The tax code gives deductions to “help” people. That might be true, but the real impact is incentivizing people to make decisions they wouldn’t have otherwise. For example, tax deductions for children encourage having more kids, and tax deductions for childcare and preschool incentivize dual-income households. Whether or not you like these outcomes, it’s clear that it’s the government encouraging them to happen.
* Inflation means that the value of your money goes down over time, which encourages people to spend more today, when their money has a larger impact. (Milton Friedman described this as [high living](https://www.youtube.com/watch?v=ZwNDd2_beTU).)
# Conclusion
The idea here is simple, and fully encapsulated in the title: incentives determine outcomes. If you want to know how to get a certain outcome from others, incentivize them to want that to happen. If you want to understand why people act in seemingly irrational ways, check their incentives. If you’re confused why leaders (and especially politicians) seem to engage in destructive behavior, check their incentives.
We can bemoan these realities all we want, but they *are* realities. While there are some people who have a solid internal moral and ethical code, and that internal code incentivizes them to behave against their externally-incentivized interests, those people are rare. And frankly, those people are self-defeating. People *should* take advantage of the incentives around them. Because if they don’t, someone else will.
(If you want a literary example of that last comment, see the horse in Animal Farm.)
How do we improve the world under these conditions? Make sure the incentives align well with the overall goals of society. To me, it’s a simple formula:
* Focus on free trade, value for value, as the basis of a society. In that system, people are always incentivized to provide value to other people.
* Reduce the size of bureaucracies and large groups of all kinds. The larger an organization becomes, the farther the consequences of decisions are from those who make them.
* And since the nature of human beings will be to try and create areas where they can control the incentive systems to their own benefits, make that as difficult as possible. That comes in the form of strict limits on government power, for example.
And even if you don’t want to buy in to this conclusion, I hope the rest of the content was educational, and maybe a bit entertaining!
-
I’ve been using Notedeck for several months, starting with its extremely early and experimental alpha versions, all the way to its current, more stable alpha releases. The journey has been fascinating, as I’ve had the privilege of watching it evolve from a concept into a functional and promising tool.
In its earliest stages, Notedeck was raw—offering glimpses of its potential but still far from practical for daily use. Even then, the vision behind it was clear: a platform designed to redefine how we interact with Nostr by offering flexibility and power for all users.
I'm very bullish on Notedeck. Why? Because Will Casarin is making it! Duh! 😂
Seriously though, if we’re reimagining the web and rebuilding portions of the Internet, it’s important to recognize [the potential of Notedeck](https://damus.io/notedeck/). If Nostr is reimagining the web, then Notedeck is reimagining the Nostr client.
Notedeck isn’t just another Nostr app—it’s more of a Nostr browser that functions like an operating system with micro-apps. How cool is that?
Much like how Google's Chrome evolved from a web browser with a task manager into ChromeOS, a full-blown operating system, Notedeck aims to transform how we interact with Nostr. It goes beyond individual apps, offering a foundation for a fully integrated ecosystem built around Nostr.
As a Nostr evangelist, I love to scream **INTEROPERABILITY** and tout every application's integrations. Well, Notedeck has the potential to be one of the best platforms to showcase these integrations in entirely new and exciting ways.
Do you want an Olas feed of images? Add the media column.
Do you want a feed of live video events? Add the zap.stream column.
Do you want Nostr Nests or audio chats? Add that column to your Notedeck.
Git? Email? Books? Chat and DMs? It's all possible.
Not everyone wants a super app though, and that’s okay. As with most things in the Nostr ecosystem, flexibility is key. Notedeck gives users the freedom to choose how they engage with it—whether it’s simply following hashtags or managing straightforward feeds. You'll be able to tailor Notedeck to fit your needs, using it as extensively or minimally as you prefer.
Notedeck is designed with a local-first approach, utilizing Nostr content stored directly on your device via the local nostrdb. This will enable a plethora of advanced tools such as search and filtering, the creation of custom feeds, and the ability to develop personalized algorithms across multiple Notedeck micro-applications—all with unparalleled flexibility.
Notedeck also supports multicast. Let's geek out for a second. Multicast is a method of communication where data is sent from one source to multiple destinations simultaneously, but only to devices that wish to receive the data. Unlike broadcast, which sends data to all devices on a network, multicast targets specific receivers, reducing network traffic. This is commonly used for efficient data distribution in scenarios like streaming, conferencing, or large-scale data synchronization between devices.
> In a local first world where each device holds local copies of your nostr nodes, and each device transparently syncs with each other on the local network, each node becomes a backup. Your data becomes antifragile automatically. When a node goes down it can resync and recover from other nodes. Even if not all nodes have a complete collection, negentropy can pull down only what is needed from each device. All this can be done without internet.
>
> \-Will Casarin
In the context of Notedeck, multicast would allow multiple devices to sync their Nostr nodes with each other over a local network without needing an internet connection. Wild.
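For the curious, the primitive itself is easy to sketch. This is not Notedeck's actual implementation, just a minimal Python example of IP multicast using only the standard library; the group address and port are arbitrary placeholders, and a real sync layer would carry serialized Nostr events instead of raw strings.

```python
import socket
import struct

GROUP, PORT = "239.0.0.1", 5007  # placeholder multicast group and port

def announce(payload: bytes) -> None:
    """Send one datagram; only peers subscribed to GROUP receive it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay on the LAN
    sock.sendto(payload, (GROUP, PORT))

def listen() -> None:
    """Join the multicast group and print whatever peers announce."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    membership = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    while True:
        data, addr = sock.recvfrom(65535)
        print(f"{addr}: {data!r}")
```

Run `listen()` on two devices on the same network, call `announce(b"hello")` from a third, and both listeners receive it in a single send; that one-to-many property is what makes local, internet-free syncing cheap.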
Notedeck aims to offer full customization too, including the ability to design and share custom skins, much like Winamp. Users will also be able to create personalized columns and, in the future, share their setups with others. This opens the door for power users to craft tailored Nostr experiences, leveraging their expertise in the protocol and applications. By sharing these configurations as "Starter Decks," they can simplify onboarding and showcase the best of Nostr’s ecosystem.
Nostr’s “Other Stuff” can often be difficult to discover, use, or understand. Many users don't understand or know how to use web browser extensions to log in to applications. Let's not even get started with nsecbunkers. Notedeck will address this challenge by providing a native experience that brings these lesser-known applications, tools, and content into a user-friendly and accessible interface, making exploration seamless. That doesn't mean Notedeck should disregard power users who want to use nsecbunkers, though - hint hint.
For anyone interested in watching Nostr being [developed live](https://github.com/damus-io/notedeck) right before their very eyes, Notedeck’s progress serves as a reminder of what’s possible when innovation meets dedication. The current alpha is already demonstrating its ability to handle complex use cases, and I’m excited to see how it continues to grow as it moves toward a full release later this year.
-
This article is intended to complement the video by Lyn Alden on YouTube: https://www.youtube.com/watch?v=jk_HWmmwiAs
## The reason why we have broken money
Before the invention of key technologies such as the printing press and electronic communications, even ones as early as Morse code transmitters, gold had won the competition for best medium of money around the world.
In fact, it was not just raw gold that became money; rulers and world leaders minted coins to help the economy grow. Gold nuggets were not as easy to transact with as coins with specific imprints and standardized denominations.
These modern technologies created massive efficiencies that allowed us to communicate and perform services far more quickly, yet the medium of money could not benefit from these advancements. Gold was heavy, slow, and expensive to move globally, even though requesting and performing services globally no longer had this limitation.
Banks took the initiative and created derivatives of gold: paper and electronic money. These new currencies allowed the economy to continue to grow and evolve, but not without a dark side. Today, no currency is denominated in gold at all; money is backed by nothing, and its inherent value, the paper it is printed on, is worthless too.
Banks and governments eventually transitioned from a money derivative to a system of debt that could be co-opted and controlled for political and personal reasons. Our money today is broken and is the cause of more expensive, poorer quality goods in the economy, a larger and ever growing wealth gap, and many of the follow-on problems that have come with it.
## Bitcoin overcomes the "transfer of hard money" problem
Just as gold coins were created by man, Bitcoin too is a technology created by man. Bitcoin, however, is a much more profound invention, possibly more of a discovery than an invention in fact. Bitcoin has proven to be unbreakable and incorruptible, and it has upheld its ability to keep its units scarce, inalienable, and counterfeit-proof through the nature of its own design.
Since Bitcoin is a digital technology, it can be transferred across international borders almost as quickly as information itself. It therefore severely reduces the need for a derivative to represent money in digital trade. This means that as the currency we use today continues to fare poorly for many people, bitcoin will continue to stand out as hard money that happens to work just as well, functionally, alongside it.
Bitcoin will also always be available to anyone who wishes to earn it directly; even China is unable to restrict its citizens from accessing it. The dollar has traditionally been the currency people turn to when they discover that their local currency is unsustainable. Even where the dollar has been made illegal to use, it is simply used privately and unofficially. However, because bitcoin does not require you to trade it at a bank in order to use it across borders and across the web, Bitcoin will continue to be a viable escape hatch until we one day hit some critical mass where the world has simply adopted it globally and everyone else must adopt it to survive.
Bitcoin has not yet proven that it can support the world at scale. However it can only be tested through real adoption, and just as gold coins were developed to help gold scale, tools will be developed to help overcome problems as they arise; ideally without the need for another derivative, but if necessary, hopefully with one that is more neutral and less corruptible than the derivatives used to represent gold.
## Bitcoin blurs the line between commodity and technology
Bitcoin is a technology; it is a tool that requires human involvement to function. Surprisingly, however, it does not allow for any concentration of power. Anyone can help facilitate Bitcoin's operations, but no one can take control of its behaviour, its reach, or its prioritisation, as it operates autonomously based on a pre-determined, neutral set of rules.
At the same time, its built-in incentive mechanism ensures that people do not have to operate bitcoin out of the goodness of their hearts. Even though the system cannot be co-opted holistically, it will not stop operating while there are people motivated to trade their time and resources to keep it running and earn from others' transaction fees. Although it requires humans to operate it, it remains both neutral and sustainable.
Never before have we developed or discovered a technology that could not be co-opted and used by one person or faction against another. Due to this nature, Bitcoin's units are often described as a commodity; they cannot be usurped or virtually cloned, and they cannot be affected by political biases.
## The dangers of derivatives
A derivative is something created, designed, or developed to represent another thing in order to solve a particular complication or problem. For example, paper and electronic money were once derivatives of gold.
In the case of Bitcoin, if you cannot link your units of bitcoin to an "address" that you personally hold a cryptographically secure key to, then you very likely have a derivative of bitcoin, not bitcoin itself. If you buy bitcoin on an online exchange and do not withdraw the bitcoin to a wallet that you control, then you legally own an electronic derivative of bitcoin.
Bitcoin is a new technology. It has a learning curve, and it will take time for humanity to learn how to comprehend, authenticate, and take control of bitcoin collectively. Having said that, many people all over the world are already using and relying on Bitcoin natively. Many others will first need to feel the need or desire for a neutral money like bitcoin, and to have been burned by derivatives of it, before they start to understand the difference between the two. Eventually, it will become an essential part of what we regard as common sense.
## Learn for yourself
If you wish to learn more about how to handle bitcoin and avoid derivatives, you can start by searching online for tutorials about "Bitcoin self custody".
There are many options available, some more practical for you, and some more practical for others. Don't spend too much time trying to find the perfect solution; practice and learn. You may make mistakes, so be careful not to experiment with large amounts of your bitcoin as you explore new ideas and technologies. This is similar to learning anything, like riding a bicycle: you are sure to fall a few times and scuff the frame, so don't buy a high-performance racing bike while you're still learning to balance.
-
To be honest, the kind=1 content indexed so far is not high quality.
So I added kind=30023 long-form articles, but they are updated far too rarely, and even across multiple relays there are not many long-form posts.
For Nostr search to produce any real value, it needs high-quality articles and news.
On top of that, there are now plenty of bot-written posts that do nothing but waste space; they serve no other purpose.
https://www.duozhutuan.com currently hosts raw material for search engines to index. There is no UI for human browsing, which is why it looks rough.
I have no plans to build a web client for posting microblogs; there are already far too many of those.
What the Nostr community still needs to solve is applications. If it stays at microblogging alone, the outlook feels grim.
Fortunately there are projects like npub.pro for building sites, which I find quite interesting.
Yakihonne's smart widgets are interesting too.
TaskQ5, which I built, is something I now use myself. A distributed task queue system; it works quite well.
-
## **What you'll need**
- An Android phone you no longer use (the camera must be working).
- A microSD card (optional, used only once).
- A device for watching your funds (you probably already have one).
## **A few things you need to know**
- The device will serve as a signer. No funds move until a transaction has been signed by it.
- The microSD card will be used to transfer the Electrum APK and to guarantee that the phone has no contact with other external data sources after it is wiped. A USB cable can, however, serve the same purpose.
- The idea is to keep your private key on an offline device that stays powered off 99% of the time. You can watch your funds from another, internet-connected device, such as your phone or personal computer.
---
## **The tutorial is split into two modules:**
- Module 1 - Creating a cold wallet/signer.
- Module 2 - Setting up a device to watch your funds and signing transactions with the signer.
---
## **By the end, we will have:**
- A cold wallet that also serves as a signer.
- A device for watching the wallet's funds.
![Final setup](https://i.imgur.com/7ktryvP.png)
---
## **Module 1 - Creating a cold wallet/signer**
1. Download the Electrum APK from the **downloads** tab at <https://electrum.org/>. Feel free to [verify the software's signatures](https://electrum.readthedocs.io/en/latest/gpg-check.html) to guarantee its authenticity.
2. Format the microSD card and copy the Electrum APK onto it. If you don't have a microSD card, skip this step.
![Formatting](https://i.imgur.com/n5LN67e.png)
3. Remove the SIM cards and accessories from the phone that will serve as the signer, factory-reset it, and wait for it to boot.
![Factory reset](https://i.imgur.com/yalfte6.png)
4. During setup, skip the Wi-Fi connection step and reject every connection request. After that, you can uninstall unnecessary apps, since you will only need Electrum. Make sure Wi-Fi, Bluetooth, and mobile data are turned off. You can also enable **airplane mode**.\
*(Fun fact: some people choose to open the phone and disable the Wi-Fi/Bluetooth antenna, making those features physically impossible.)*
![Airplane mode](https://i.imgur.com/mQw0atg.png)
5. Insert the microSD card with the Electrum APK into the device and install it. You will need to allow installations from unofficial sources.
![Installation](https://i.imgur.com/brZHnYr.png)
6. In Electrum, create a standard wallet and generate your seed words. Write them down somewhere safe. If anything happens to your signer, these words will give you access to your funds again. *(This is where your personal backup method comes in.)*
![Seed words](https://i.imgur.com/hS4YQ8d.png)
---
## **Module 2 - Setting up a device to watch your funds and signing transactions with the signer**
1. Creating a **watch-only** wallet on another device, such as your phone or personal computer, is a very simple step. For this tutorial we will use another Android smartphone running Electrum. Install Electrum from the downloads tab at <https://electrum.org/> or from the Play Store itself. *(WARNING: Electrum does not officially exist for iPhone. Be suspicious if you find one.)*
2. After installing Electrum, create a standard wallet, but this time choose the **Use a master key** option.
![Master key](https://i.imgur.com/x5WpHpn.png)
3. Now, on the signer created in the first module, export your public key: go to **Wallet > Wallet details > Share master public key**.
![Export](https://i.imgur.com/YrYlL2p.png)
4. Scan the public key's QR code with the watch-only device. It will then be able to track your funds, but without permission to move them.
5. To receive funds, send Bitcoin to one of the addresses generated by your wallet: **Wallet > Addresses/Coins**.
6. To move funds, create a transaction on the watch-only device. Since it does not hold the private key, the transaction will need to be signed by the signer device.
![Unsigned transaction](https://i.imgur.com/MxhQZZx.jpeg)
7. On the signer, scan the unsigned transaction, confirm the details, sign, and share. Another QR code will be generated, this time containing the signed transaction.
![Signing](https://i.imgur.com/vNGtvGC.png)
8. On the watch-only device, scan the signed transaction's QR code and broadcast it to the network.
---
## **Conclusion**
**Upsides of this setup:**
- **Simplicity:** All it takes is an old Android device.
- **Flexibility:** It works as a great cold wallet, ideal for holders.
**Downsides of this setup:**
- **Standardization:** It does not use BIP-39 standard seeds, so you will always need to use Electrum.
- **Interface:** Electrum's appearance may feel dated to some users.
At this point we have a cold wallet that also serves to sign transactions. The signing flow becomes: ***Generate an unsigned transaction > Scan the unsigned transaction's QR code > Review and sign that transaction on the signer > Generate the signed transaction's QR code > Scan the signed transaction with any other device that can broadcast it to the network.***
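To make the division of roles explicit, here is a toy Python simulation of that round trip. This is emphatically not real Bitcoin code: the "QR codes" are plain strings passed between two objects, and an HMAC stands in for Electrum's actual ECDSA signing; it only illustrates why the watch-only device can build and broadcast but never sign.

```python
# Toy simulation of the air-gapped flow above -- NOT real Bitcoin cryptography.
import hashlib
import hmac
import json

class WatchOnlyWallet:
    """Online device: sees addresses and balances, holds no private key."""
    def build_unsigned_tx(self, to_addr: str, amount_sats: int) -> str:
        tx = {"to": to_addr, "amount": amount_sats, "signature": None}
        return json.dumps(tx)  # displayed on screen as the first QR code

    def broadcast(self, signed_tx: str) -> None:
        tx = json.loads(signed_tx)  # scanned back from the signer's QR code
        assert tx["signature"], "refuse to relay an unsigned transaction"
        print("relaying to the network:", tx)

class Signer:
    """Offline device: holds the secret, only ever sees serialized transactions."""
    def __init__(self, seed: bytes):
        self._seed = seed  # in reality, keys derived from your seed words

    def sign(self, unsigned_tx: str) -> str:
        tx = json.loads(unsigned_tx)  # scanned from the watch-only device
        digest = hmac.new(self._seed, unsigned_tx.encode(), hashlib.sha256)
        tx["signature"] = digest.hexdigest()
        return json.dumps(tx)  # displayed as the second QR code

watcher, signer = WatchOnlyWallet(), Signer(seed=b"demo-seed-words")
unsigned = watcher.build_unsigned_tx("bc1q...", 50_000)
watcher.broadcast(signer.sign(unsigned))
```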
As some of you may know, a signed Bitcoin transaction is practically impossible to forge. In a catastrophic scenario, even without internet access, you can pass that signed transaction along to someone who has network access through any means of communication. Even if we hope that day never comes, this setup makes the practice possible.
---
-
*Quick context: I wanted to check out Nostr's longform posts and this blog post seemed like a good one to try and mirror. It's originally from my [free to read/share attempt to write a novel](https://untitlednovel.dns7.top/contents/), but this post here is completely standalone - just describing how I used AI image generation to make a small piece of the work.*
Hold on, put your pitchforks down - outside of using Grammarly & Emacs for grammatical corrections - not a single character was generated or modified by computers, with a non-insignificant portion of my first draft originating on pen & paper. No AI is ~~weird and crazy~~ imaginative enough to write like I do. The only successful AI contribution you'll find is a single image, the map, which I heavily edited. This post will go over how I generated and modified an image using AI, which I believe brought some value to the work, and cover a few quick thoughts about AI towards the end.
Let's be clear: I can't draw, but I wanted a map which I believed would improve the story I was working on. After getting abysmal results by prompting AI with text only, I decided to use "Diffuse the Rest," a Stable Diffusion tool that allows you to provide a reference image + description to fine-tune what you're looking for. I gave it this Microsoft Paint-looking drawing:
![](https://untitlednovel.dns7.top/img/mapgen/01.avif)
and after a number of outputs, selected this one to work on:
![](https://untitlednovel.dns7.top/img/mapgen/02.avif)
The image is way better than the one I provided, but had I used it as is, I still feel it would have decreased the quality of my work instead of increasing it. After firing up Gimp I cropped out the top and bottom, expanded the ocean and separated the landmasses, then copied the top right corner of the large landmass to replace the bottom left that got cut off. Now we've got something that looks like concept art: not horrible, and gets the basic idea across, but it's still due for a lot more detail.
![](https://untitlednovel.dns7.top/img/mapgen/03.avif)
The next thing I did was add some texture to make it look more map-like. I duplicated the layer in Gimp and applied the "Cartoon" filter to both for some texture. The top layer had a much lower effect strength to give it a more textured look, while the lower layer had a higher effect strength that looked a lot like mountains or other terrain features. Creating a layer mask allowed me to brush over spots to reveal the lower layer in certain areas, giving it some much-needed features.
![](https://untitlednovel.dns7.top/img/mapgen/04.avif)
At this point I'd made it to where I felt it might improve the work instead of detracting from it - at least after labels and borders were added - but the colors seemed artificial and out of place. Luckily, this is where PhotoFunia could step in and apply a sketch effect to the image.
![](https://untitlednovel.dns7.top/img/mapgen/05.avif)
At this point I was pretty happy with how it was looking, it was close to what I envisioned and looked very visually appealing while still being a good way to portray information. All that was left was to make the white background transparent, add some minor details, and add the labels and borders. Below is the exact image I wound up using:
![](https://untitlednovel.dns7.top/img/map.avif)
Overall, I'm very satisfied with how it turned out, and if you're working on a creative project, I'd recommend attempting something like this. It's not a central part of the work, but it improved the chapter a fair bit, and it was doable despite my lack of talent and with no budget allocated to what is a free-to-read-and-share story.
#### The AI Generated Elephant in the Room
If you've read my non-fiction writing before, you'll know that I think AI will find its place around the skill floor as opposed to the skill ceiling. As you saw with my input, I have absolutely zero drawing talent, but with some elbow grease and an existing creative direction before and after generating an image I was able to get something well above what I could have otherwise accomplished. Outside of the lowest common denominators like stock photos for the sole purpose of a link preview being eye catching, however, I doubt AI will be wholesale replacing most creative works anytime soon. I can assure you that I tried numerous times to describe the map without providing a reference image, and if I used one of those outputs (or even just the unedited output after providing the reference image) it would have decreased the quality of my work instead of improving it.
I'm going to go out on a limb and expect that AI image, text, and video will all find their place in slop & generic content (such as AI-generated slop replacing article spinners and stock photos respectively) and otherwise be used in a supporting role for various creative endeavors. For people working on projects like I'm working on (e.g. intended budget $0) it's helpful to have an AI capable of doing legwork - enabling projects to exist or be improved in ways they otherwise wouldn't have. I'm also guessing it'll find its way into more professional settings for grunt work - think a picture frame or fake TV show that would exist in the background of an animated project - likely a detail most people probably wouldn't notice, but one that would save the creators time and money and/or allow them to focus more on the essential aspects of said work. Beyond that, as I've predicted before: I expect plenty of emails will be generated from a short list of bullet points, only to be summarized by the recipient's AI back into bullet points.
I will also make a prediction counter to what seems mainstream: AI is about to peak for a while. The start of AI image generation was with Google's DeepDream in 2015 - image recognition software that could be run in reverse to "recognize" patterns where there were none, effectively generating an image from digital noise or an unrelated image. While I'm not an expert by any means, I don't think we're too far off from that a decade later, just using very fine tuned tools that develop more coherent images. I guess that we're close to maxing out how efficiently we're able to generate images and video in that manner, and the hard caps on how much creative direction we can have when using AI - as well as the limits to how long we can keep it coherent (e.g. long videos or a chronologically consistent set of images) - will prevent AI from progressing too far beyond what it is currently unless/until another breakthrough occurs.
-
## The Rise of Graph RAGs and the Quest for Data Quality
As we enter a new year, it’s impossible to ignore the boom of retrieval-augmented generation (RAG) systems, particularly those leveraging graph-based approaches. The previous year saw a surge in advancements and discussions about Graph RAGs, driven by their potential to enhance large language models (LLMs), reduce hallucinations, and deliver more reliable outputs. Let’s dive into the trends, challenges, and strategies for making the most of Graph RAGs in artificial intelligence.
## Booming Interest in Graph RAGs
Graph RAGs have dominated the conversation in AI circles. With new research papers and innovations emerging weekly, it’s clear that this approach is reshaping the landscape. These systems, especially those developed by tech giants like Microsoft, demonstrate how graphs can:
* **Enhance LLM Outputs:** By grounding responses in structured knowledge, graphs significantly reduce hallucinations.
* **Support Complex Queries:** Graphs excel at managing linked and connected data, making them ideal for intricate problem-solving.
Conferences on linked and connected data have increasingly focused on Graph RAGs, underscoring their central role in modern AI systems. However, the excitement around this technology has brought critical questions to the forefront: How do we ensure the quality of the graphs we’re building, and are they genuinely aligned with our needs?
## Data Quality: The Foundation of Effective Graphs
A high-quality graph is the backbone of any successful RAG system. Constructing these graphs from unstructured data requires attention to detail and rigorous processes. Here’s why:
* **Richness of Entities:** Effective retrieval depends on graphs populated with rich, detailed entities.
* **Freedom from Hallucinations:** Poorly constructed graphs amplify inaccuracies rather than mitigating them.
Without robust data quality, even the most sophisticated Graph RAGs become ineffective. As a result, the focus must shift to refining the graph construction process. Improving data strategy and ensuring meticulous data preparation is essential to unlock the full potential of Graph RAGs.
## Hybrid Graph RAGs and Variations
While standard Graph RAGs are already transformative, hybrid models offer additional flexibility and power. Hybrid RAGs combine structured graph data with other retrieval mechanisms, creating systems that:
* Handle diverse data sources with ease.
* Offer improved adaptability to complex queries.
Exploring these variations can open new avenues for AI systems, particularly in domains requiring structured and unstructured data processing.
## Ontology: The Key to Graph Construction Quality
Ontology — defining how concepts relate within a knowledge domain — is critical for building effective graphs. While this might sound abstract, it’s a well-established field blending philosophy, engineering, and art. Ontology engineering provides the framework for:
* **Defining Relationships:** Clarifying how concepts connect within a domain.
* **Validating Graph Structures:** Ensuring constructed graphs are logically sound and align with domain-specific realities.
Traditionally, ontologists — experts in this discipline — have been integral to large enterprises and research teams. However, not every team has access to dedicated ontologists, leading to a significant challenge: How can teams without such expertise ensure the quality of their graphs?
## How to Build Ontology Expertise in a Startup Team
For startups and smaller teams, developing ontology expertise may seem daunting, but it is achievable with the right approach:
1. **Assign a Knowledge Champion:** Identify a team member with a strong analytical mindset and give them time and resources to learn ontology engineering.
2. **Provide Training:** Invest in courses, workshops, or certifications in knowledge graph and ontology creation.
3. **Leverage Partnerships:** Collaborate with academic institutions, domain experts, or consultants to build initial frameworks.
4. **Utilize Tools:** Introduce ontology development tools like Protégé, OWL, or SHACL to simplify the creation and validation process (see the sketch just after this list).
5. **Iterate with Feedback:** Continuously refine ontologies through collaboration with domain experts and iterative testing.
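To ground item 4, here is a minimal sketch of what "defining relationships" looks like in practice, using the Python rdflib library. The `ex:` namespace and the retail-flavored class names are invented for illustration, not taken from any real project.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/retail#")  # hypothetical domain namespace

g = Graph()
g.bind("ex", EX)

# Declare the concepts (classes) the domain cares about.
for cls in (EX.Product, EX.Supplier, EX.Order):
    g.add((cls, RDF.type, OWL.Class))

# Declare one relationship, with an explicit domain and range so that
# validators can later flag an ill-typed triple such as (an Order, suppliedBy, ...).
g.add((EX.suppliedBy, RDF.type, OWL.ObjectProperty))
g.add((EX.suppliedBy, RDFS.domain, EX.Product))
g.add((EX.suppliedBy, RDFS.range, EX.Supplier))
g.add((EX.suppliedBy, RDFS.label, Literal("supplied by")))

print(g.serialize(format="turtle"))
```

Tiny as it is, this is the shape of the work: naming concepts, then pinning down how they may legally connect.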
So while it is not always affordable for a startup to keep a dedicated ontologist or knowledge engineer on the team, you can involve consultants or develop barefoot experts instead.
You can read about barefoot experts in my article:
Even startups can achieve robust and domain-specific ontology frameworks by fostering in-house expertise.
## How to Find or Create Ontologies
For teams venturing into Graph RAGs, several strategies can help address the ontology gap:
1. **Leverage Existing Ontologies:** Many industries and domains already have open ontologies. For instance:
* **Public Knowledge Graphs:** Resources like Wikipedia’s graph offer a wealth of structured knowledge.
* **Industry Standards:** Enterprises such as Siemens have invested in creating and sharing ontologies specific to their fields.
* **Business Framework Ontology (BFO):** A valuable resource for enterprises looking to define business processes and structures.
2. **Build In-House Expertise:** If budgets allow, consider hiring knowledge engineers or providing team members with the resources and time to develop expertise in ontology creation.
3. **Utilize LLMs for Ontology Construction:** Interestingly, LLMs themselves can act as a starting point for ontology development (a short sketch follows this list):
* **Prompt-Based Extraction:** LLMs can generate draft ontologies by leveraging their extensive training on graph data.
* **Domain Expert Refinement:** Combine LLM-generated structures with insights from domain experts to create tailored ontologies.
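As a hedged illustration of prompt-based extraction, the sketch below asks a chat LLM for a draft ontology as JSON. It assumes the official `openai` Python package and an `OPENAI_API_KEY` in the environment; the model name and prompt wording are placeholders, and whatever comes back is only raw material for the expert-refinement step above.

```python
from openai import OpenAI  # assumed client; any chat-capable LLM would do

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = """From the passage below, extract a draft ontology.
1. List the domain concepts (classes).
2. List relationships as (subject_class, predicate, object_class) triples.
Return JSON with keys "classes" and "relations".

Passage: {passage}"""

def draft_ontology(passage: str, model: str = "gpt-4o-mini") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(passage=passage)}],
    )
    return response.choices[0].message.content  # a draft, not a finished ontology

print(draft_ontology("Suppliers ship products; every order references one product."))
```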
## Parallel Ontology and Graph Extraction
An emerging approach involves extracting ontologies and graphs in parallel. While this can streamline the process, it presents challenges such as:
* **Detecting Hallucinations:** Differentiating between genuine insights and AI-generated inaccuracies.
* **Ensuring Completeness:** Ensuring no critical concepts are overlooked during extraction.
Teams must carefully validate outputs to ensure reliability and accuracy when employing this parallel method.
## LLMs as Ontologists
While traditionally dependent on human expertise, ontology creation is increasingly supported by LLMs. These models, trained on vast amounts of data, possess inherent knowledge of many open ontologies and taxonomies. Teams can use LLMs to:
* **Generate Skeleton Ontologies:** Prompt LLMs with domain-specific information to draft initial ontology structures.
* **Validate and Refine Ontologies:** Collaborate with domain experts to refine these drafts, ensuring accuracy and relevance.
However, for validation and graph construction, formal tools such as OWL, SHACL, and RDF should be prioritized over LLMs to minimize hallucinations and ensure robust outcomes.
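As one concrete example of that tooling, pySHACL can check an extracted graph against a set of SHACL shapes. The file names below are placeholders for whatever your pipeline produces:

```python
from rdflib import Graph
from pyshacl import validate

data = Graph().parse("extracted_graph.ttl")   # the graph your pipeline built
shapes = Graph().parse("domain_shapes.ttl")   # SHACL constraints from your ontology

conforms, _, report_text = validate(data, shacl_graph=shapes, inference="rdfs")
print("graph satisfies domain constraints:", conforms)
if not conforms:
    print(report_text)  # names each offending node and the constraint it broke
```

A hallucinated triple that violates a declared domain or range surfaces here as a validation failure rather than slipping silently into retrieval.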
## Final Thoughts: Unlocking the Power of Graph RAGs
The rise of Graph RAGs underscores a simple but crucial correlation: improving graph construction and data quality directly enhances retrieval systems. To truly harness this power, teams must invest in understanding ontologies, building quality graphs, and leveraging both human expertise and advanced AI tools.
As we move forward, the interplay between Graph RAGs and ontology engineering will continue to shape the future of AI. Whether through adopting existing frameworks or exploring innovative uses of LLMs, the path to success lies in a deep commitment to data quality and domain understanding.
Have you explored these technologies in your work? Share your experiences and insights — and stay tuned for more discussions on ontology extraction and its role in AI advancements. Cheers to a year of innovation!
-
## The Four-Layer Framework
### Layer 1: Zoom Out
![](http://hedgedoc.malin.onl/uploads/bf583a95-79b0-4efe-a194-d6a8b80d6f8a.png)
Start by looking at the big picture. What’s the subject about, and why does it matter? Focus on the overarching ideas and how they fit together. Think of this as the 30,000-foot view—it’s about understanding the "why" and "how" before diving into the "what."
**Example**: If you’re learning programming, start by understanding that it’s about giving logical instructions to computers to solve problems.
- **Tip**: Keep it simple. Summarize the subject in one or two sentences and avoid getting bogged down in specifics at this stage.
_Once you have the big picture in mind, it’s time to start breaking it down._
---
### Layer 2: Categorize and Connect
![](http://hedgedoc.malin.onl/uploads/5c413063-fddd-48f9-a65b-2cd374340613.png)
Now it’s time to break the subject into categories—like creating branches on a tree. This helps your brain organize information logically and see connections between ideas.
**Example**: Studying biology? Group concepts into categories like cells, genetics, and ecosystems.
- **Tip**: Use headings or labels to group similar ideas. Jot these down in a list or simple diagram to keep track.
_With your categories in place, you’re ready to dive into the details that bring them to life._
---
### Layer 3: Master the Details
![](http://hedgedoc.malin.onl/uploads/55ad1e7e-a28a-42f2-8acb-1d3aaadca251.png)
Once you’ve mapped out the main categories, you’re ready to dive deeper. This is where you learn the nuts and bolts—like formulas, specific techniques, or key terminology. These details make the subject practical and actionable.
**Example**: In programming, this might mean learning the syntax for loops, conditionals, or functions in your chosen language.
- **Tip**: Focus on details that clarify the categories from Layer 2. Skip anything that doesn’t add to your understanding.
_Now that you’ve mastered the essentials, you can expand your knowledge to include extra material._
---
### Layer 4: Expand Your Horizons
![](http://hedgedoc.malin.onl/uploads/7ede6389-b429-454d-b68a-8bae607fc7d7.png)
Finally, move on to the extra material—less critical facts, trivia, or edge cases. While these aren’t essential to mastering the subject, they can be useful in specialized discussions or exams.
**Example**: Learn about rare programming quirks or historical trivia about a language’s development.
- **Tip**: Spend minimal time here unless it’s necessary for your goals. It’s okay to skim if you’re short on time.
---
## Pro Tips for Better Learning
### 1. Use Active Recall and Spaced Repetition
Test yourself without looking at notes. Review what you’ve learned at increasing intervals—like after a day, a week, and a month. This strengthens memory by forcing your brain to actively retrieve information.
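If it helps to see the schedule written down, here is a tiny Python sketch of the "increasing intervals" idea. The 2.5 growth factor is borrowed from SM-2-style schedulers and is a tunable assumption, not a magic number.

```python
from datetime import date, timedelta

def review_schedule(start: date, reviews: int = 5, factor: float = 2.5):
    """Yield review dates at roughly geometrically increasing intervals."""
    interval = 1.0  # days until the first review
    for _ in range(reviews):
        start += timedelta(days=round(interval))
        yield start
        interval *= factor  # each successful recall pushes the next review out

for when in review_schedule(date.today()):
    print(when)  # gaps grow roughly: 1, 2, 6, 16, 39 days
```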
### 2. Map It Out
Create visual aids like [diagrams or concept maps](https://excalidraw.com/) to clarify relationships between ideas. These are particularly helpful for organizing categories in Layer 2.
### 3. Teach What You Learn
Explain the subject to someone else as if they’re hearing it for the first time. Teaching **exposes any gaps** in your understanding and **helps reinforce** the material.
### 4. Engage with LLMs and Discuss Concepts
Take advantage of tools like ChatGPT or similar large language models to **explore your topic** in greater depth. Use these tools to:
- Ask specific questions to clarify confusing points.
- Engage in discussions to simulate real-world applications of the subject.
- Generate examples or analogies that deepen your understanding.
**Tip**: Use LLMs as a study partner, but don’t rely solely on them. Combine these insights with your own critical thinking to develop a well-rounded perspective.
---
## Get Started
Ready to try the Four-Layer Method? Take 15 minutes today to map out the big picture of a topic you’re curious about—what’s it all about, and why does it matter? By building your understanding step by step, you’ll master the subject with less stress and more confidence.
-
Here are my predictions for Nostr in 2025:
**Decentralization:** The outbox and inbox communication models, sometimes referred to as the Gossip model, will become the standard across the ecosystem. By the end of 2025, all major clients will support these models, providing seamless communication and enhanced decentralization. Clients that do not adopt outbox/inbox by then will be regarded as outdated or legacy systems.
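To make the outbox model concrete, here is a hedged Python sketch of the lookup it implies. `query_relay` is a hypothetical helper standing in for a real websocket REQ/EOSE exchange, and the indexer relay list is just an example; the kind number (10002) and the `r` tags come from NIP-65.

```python
# Sketch only: query_relay(url, filter) -> list of Nostr events is assumed,
# standing in for a real websocket subscription to the relay.
WELL_KNOWN_INDEXERS = ["wss://purplepag.es", "wss://relay.damus.io"]  # examples

def find_write_relays(pubkey: str) -> list[str]:
    """NIP-65: a user's kind-10002 event lists the relays they write to."""
    for indexer in WELL_KNOWN_INDEXERS:
        events = query_relay(indexer, {"kinds": [10002], "authors": [pubkey]})
        if events:
            return [tag[1] for tag in events[0]["tags"]
                    if tag[0] == "r" and (len(tag) == 2 or tag[2] == "write")]
    return []

def fetch_notes(pubkey: str) -> list[dict]:
    """Read someone's notes from where they actually publish them."""
    notes = []
    for relay in find_write_relays(pubkey):
        notes += query_relay(relay, {"kinds": [1], "authors": [pubkey]})
    return notes
```

The point of the model is visible in the code: readers follow the author's declared write relays instead of everyone congregating on a handful of giant relays.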
**Privacy Standards:** Major clients such as Damus and Primal will move away from NIP-04 DMs, adopting more secure protocol possibilities like NIP-17 or NIP-104. These upgrades will ensure enhanced encryption and metadata protection. Additionally, NIP-104 MLS tools will drive the development of new clients and features, providing users with unprecedented control over the privacy of their communications.
**Interoperability:** Nostr's ecosystem will become even more interconnected. Platforms like the Olas image-sharing service will expand into prominent clients such as Primal, Damus, Coracle, and Snort, alongside existing integrations with Amethyst, Nostur, and Nostrudel. Similarly, audio and video tools like Nostr Nests and Zap.stream will gain seamless integration into major clients, enabling easy participation in live events across the ecosystem.
**Adoption and Migration:** Inspired by early pioneers like Fountain and Orange Pill App, more platforms will adopt Nostr for authentication, login, and social systems. In 2025, a significant migration from a high-profile application platform with hundreds of thousands of users will transpire, doubling Nostr’s daily activity and establishing it as a cornerstone of decentralized technologies.
-
The last day of the year: time to dust off the crystal ball, to reflect, and to make predictions and wishes for the coming year and beyond.
Year after year, Bitcoin has evolved, clearing one milestone after another and becoming ever more _mainstream_. Making predictions about Bitcoin keeps getting harder; few barriers remain to be crossed, and the ones that remain are highly complex or carry a profound impact on the financial system or on society. These deep changes have to happen slowly, because a rapid change could bring terrible consequences and could provoke a setback.
# Bitcoin's code
By the end of 2025 we may well see a _fork_; the discussions around _covenants_ are already well advanced and will accelerate further. There is already fairly broad consensus in favor of _covenants_; all that remains is deciding which design will be chosen. I think everything will be settled by the end of the year.
After _covenants_, the next focus will be post-quantum cryptography, which will be the biggest challenge Bitcoin faces: creating cryptography that is secure without putting decentralization at risk.
I expect a lot from Ark, possibly the innovation of the year. I would like to see Nostr break out of the Bitcoiner bubble, and to see Cashu earn more recognition from _bitcoiners_.
I hope we see significant progress on BitVM2 and BitVMX.
I don't know what to expect from Bitcoin layer 2s; they were the biggest disappointment of 2024. They arrived with great momentum, but little left the drawing board; it was a handful of nothing. Some of the projects gave in to the temptation of _shitcoinery_, creating tokens whose only purpose is to enrich the devs and the VCs.
If they want to be taken seriously, they have to be serious.
> "Caesar's wife must not only be honest, she must appear honest."
If they want the support of _bitcoiners_, they should follow Bitcoin's _ethos_.
On this point the Ark team's attitude is exemplary: instead of whining on Twitter for Bitcoin's code to change, they rolled up their sleeves and built the protocol. Granted, it is somewhat hobbled for now, running on a _multisig_ or on _covenants_ on Liquid. But they are building a product; they will show the market that the product is good and useful. With adoption, the community will come to see that Ark needs _covenants_ to improve its interoperability and sovereignty.
That is the right mindset, the one remaining and future projects should follow. It is that J.F. Kennedy thought:
> "Ask not what your country can do for you, ask what you can do for your country."
In other words: don't wait for Bitcoin to change. Create the innovations and technology first, win adoption, and then demonstrate that changing the base-layer code can make your project even better. Need is what will drive the code upgrade.
# Strategic Bitcoin Reserves
## Central banks
With Trump's election, the idea of a Strategic Bitcoin Reserve emerged and went _mainstream_. It was a _pivot_: from that moment on, politicians all over the world lined up to talk about the subject.
Senator Cynthia Lummis went further and proposed a program to add 200,000 bitcoin per year to the reserve, up to 1 million bitcoin. The catch is that this is creating enormous expectations in the community, and it may end in an enormous letdown. In the first year, instead of buying 200,000, Trump may simply move into the reserve the roughly 198,000 bitcoin the state already holds. If that happens, it will likely cause a sharp short-term drop. In my opinion, central banks should follow El Salvador's example and run a daily DCA.
More than the buying itself, what matters most to me is the creation of the Reserve: placing Bitcoin on the same level as gold. The impact on the rest of the world will be tremendous, game theory at its fullest. Many other central banks will have to buy so as not to fall behind; beyond that, it will send a message to the population at large that Bitcoin is "actually something safe, something with value."
But Trump did not start this game theory; rather, he was its first victim. He admits it himself: the US needs the reserve so as not to fall behind China. Moreover, ever since the US weaponized the dollar with sanctions against Russia, rumors circulated that Russia was using Bitcoin for international transactions. Those rumors were recently confirmed by the Russian government itself. And just days ago, even before that public acknowledgment, Putin praised Bitcoin, conceding that "no one can ban bitcoin" and defending it as an alternative to the dollar. The narrative is shifting.
A few countries already hold Bitcoin, but only two did so deliberately (El Salvador and Bhutan); the rest hold it through seizures. They are few today, but 2025 will mark the start of a race among central banks. The race itself was predictable; what I did not expect was for it to arrive this fast.
![image](https://image.nostr.build/582c40adff8833111bcedd14f605f823e14dab519399be8db4fa27138ea0fff3.jpg)
## Companies
The creation of strategic reserves will not stop at central banks; it will also accelerate sharply among companies in 2025.
![image](https://image.nostr.build/35a1a869cb1434e75a3508565958511ad1ade8003b84c145886ea041d9eb6394.jpg)
But companies will not follow Saylor's playbook: they will buy bitcoin without leverage, using only their corporate treasuries, as protection against inflation. I am not a great admirer of Saylor; I much prefer a conservative strategy, with no leverage at all. I think companies will follow BlackRock's suggestion of an allocation of 1% to 3%.
I don't think 2025 will yet be the year the six magnificents (Tesla excepted) come in; it will mostly be small and mid-sized companies. The magnificents still have a large share of older, rather conservative _shareholders_ who have trouble understanding Bitcoin, which is what happened recently with Microsoft.
Nor will 2025 (2026, perhaps) bring natively integrated Bitcoin wallets in Apple Pay and Google Pay. That would be a giant step for worldwide adoption.
# ETFs
ETFs are a wildcard for me; I have too many doubts about how 2025 will play out. This year _inflows_ exceeded 500,000 bitcoin, and IBIT was the most successful ETF launch in history. The ETFs' success is owed to two circumstances that will never repeat. The market spent 10 years waiting for ETF approval; demand was pent up, and that was very visible in the early months, when _inflows_ were brutal.
They also benefited from being a new market: there was no sell-side _orderbook_, no internal market, it was practically all _inflows_. Now the market has stabilized, and most transactions already take place between the ETFs' own clients. Only a small percentage of daily transaction volume now results in _inflows_ or _outflows_.
Those two phenomena will never repeat. I do not believe _inflows_ measured in BTC will beat the 2024 numbers; in dollar terms they will, but in BTC I don't believe so.
But 2025 will bring an endless stream of new products: derivatives, new basket ETFs with other cryptos or with traditional assets. Bitcoin will be added to financial products that already exist in the market; people will come to hold bitcoin without knowing it.
With Operation Chokepoint 2.0 over, a new wave of adoption and of financial products will emerge. We may well see traditional banks offering custody products or services to their clients.
I would love to see bitcoin's adoption as money grow, but regulation will not help that process.
# Price
I believe this cycle's top will be reached in the first half of the year, followed by a correction. But this time I believe the correction will be much smaller than previous ones, under 50%; that is my expectation. I hope I'm right.
# Dollar stablecoins
Stepping a bit outside the Bitcoin universe now, I think it is important to highlight _stablecoins_.
Over the last cycle I have split my time between continuing to study Bitcoin and studying the financial system, its dynamics, and human behavior. That has been my focus of reflection: imagining the transformation the world will undergo under a Bitcoin standard. It is an illusion to believe the transition from a fiat standard to a Bitcoin standard will be quick; there will be a transitional process that may take decades.
Trump's return to the White House, promising a highly protectionist policy, will drive a strong appreciation of the dollar, and consequently the rest of the world's currencies will melt. That will cause widespread inflation and a rush into dollar _stablecoins_ in countries with weaker currencies. Trump will run a highly expansionist policy, exporting dollars to the entire world to finance his own debt. Inequality between poor and rich will grow sharply, raising the odds of conflicts and revolts.
> "In a house without bread, everyone argues and no one is right."
It will be more firewood to feed the bonfire: it will aggravate existing geopolitical conflicts and leave societies even more polarized.
I believe 2025 will bring strong growth in the adoption of dollar _stablecoins_, and that growth will worsen the systemic problem that _stablecoins_ are. It will be the beginning of the end of _stablecoins_, at least as we know them today.
## A systemic problem
The fiat system was not born overnight; it was built organically, evolving over the years. Whenever there was a problem or crisis, new rules or new institutions were created to contain the damage. In the eighty years since the Bretton Woods agreements, the changes have been so many that the financial system has become highly complex, bureaucratic, and anything but efficient.
In practice it is a house of cards built on top of another house of cards, which in turn was built on top of yet another house of cards.
_Stablecoins_ are a systemic problem because of their dollar reserves, and the financial system is not prepared to keep those reserves safe. As the reserves have grown over the years, the problem has worsened.
In the beginning, Tether kept its reserves in commercial banks, but as the dollars under management grew, this created a problem for the commercial banks, because of fractional reserve. Tether's enormous reserves were putting the banks' own stability at risk.
Tether ended up changing strategy, opting for other assets, preferably US Treasury bills and bonds. But Tether keeps growing and shows no signs of slowing down; quite the opposite.
Even the crypto world itself downplayed the severity of the Tether/_stablecoin_ problem for the rest of the financial system, because crypto's _marketcap_ is still very small. It is indeed still small, but Tether is not: it is among the top 20 largest holders of US Treasuries, on par with the world's largest central banks. Because of its size, it is worrying US officials, authorities, and regulators; it could jeopardize the stability of the global financial system, which rests on those bonds.
US Treasuries are the most widely used collateral in the world, by central banks and companies alike; they are the hinge of the financial system's stability. Treasuries are a very sensitive subject. In Japan's recent _carry trade_ crisis, the Bank of Japan tried to stem the yen's depreciation by selling US Treasuries. That operation forced an emergency trip to Japan by US Treasury Secretary Janet Yellen, who provided liquidity to stop the Bank of Japan's selling. The heavy selling was destabilizing the market.
The main holders of Treasuries are institutional: central banks, commercial banks, investment funds, and asset managers, all run by highly qualified, rational managers who know the complexity of the bond market.
The crypto world is its opposite: naive, highly irrational, with a strong dash of madness; most of it has no idea how the financial system works. That irrationality can trigger a "bank run," as happened with Luna's UST, which collapsed the project within hours. In terms of scale, Luna was still very small, so the problem stayed confined to the crypto world and to companies directly tied to crypto.
Tether is very different. If some FUD ever forced Tether to unload several billion, or tens of billions, of dollars in Treasuries in a short span of time, it could have terrible consequences for the entire financial system. Tether is too big; it is already a systemic problem, and it will worsen as it grows through 2025.
Make no mistake: if a problem arises, the US Treasury will block the sale of the Treasuries Tether holds, to save the financial system. The question is what Tether will do if it loses access to selling its reserves. How will it redeem the dollars?
Since Tether's growth is inevitable, the Treasury and the Fed have a big problem on their hands: what to do with Tether?
But the problem is that the current financial system is like a short blanket: cover your head and you uncover your feet; cover your feet and you uncover your head. That is, solving the problem of safeguarding Tether's reserves will create new problems elsewhere in the financial system, and so on.
### Master account
One possible solution would be to give Tether a master account, with direct access to an account at the Fed, similar to the one every commercial bank has. With that, Tether would no longer need Treasuries, depositing the money directly at the central bank. But this would create two new problems, one with Custodia Bank and one with the rest of the banking system.
Custodia Bank has fought the Fed in the courts for years over the right to a banking license for a _full-reserve_ bank. The Fed has always refused, on the grounds that such a bank would endanger the stability of the entire existing banking system; that is, every other bank could collapse. With fractional-reserve and _full-reserve_ banks existing side by side, people and companies would choose the safer one. That would trigger a bank run, leading to the collapse of every fractional-reserve bank, because at Custodia Bank client funds would be 100% guaranteed, for any amount. Deposit-guarantee limits would no longer be necessary.
I agree with the Fed on that point: _full-reserve_ banks are a threat to the existence of the rest. What I disagree with the Fed on is the origin of the problem. The problem is not the _full-reserve_ banks, but the fractional-reserve ones.
By granting Tether a master account, the Fed opens a precedent. Custodia Bank will seize on it, claiming equal rights in the courts, and this time it will probably win its license.
There is still a second problem, with the remaining commercial banks. Tether would come to hold rights similar to the commercial banks', but with very different duties. That would send the commercial banks to the courts to demand equal treatment; it is unfair competition. That is the good thing about US courts: they are independent and they work, even against the state. Commercial banks carry exorbitant costs due to _compliance_ policies such as KYC and AML. Since the government will not want to loosen the rules, it would be Tether that gets forced to run _compliance_ on its clients.
Mandatory KYC for holding _stablecoins_ would set off an earthquake in the crypto world.
So this is unlikely to be the solution for Tether.
### The Fed
Only one possibility remains: the Fed itself controlling and directly managing dollar _stablecoins_, nationalizing or absorbing the existing ones. It would be a kind of CBDC. This would create a new problem, a diplomatic one, because _stablecoins_ undermine other countries' monetary sovereignty. Today _stablecoins_ are somewhat protected because they live in legal limbo, but the moment they are controlled by the American government, everything changes. Countries will demand that the American authorities adopt measures limiting their use within their respective borders.
There is no good solution. The fiat system is a house of cards; any card that moves will cause a collapse somewhere else. The authorities will not be able to postpone the problem any longer; they will have to solve it for good, or one day it will be too late. If any problem erupts, they will put the blame on crypto and on Bitcoin. But in truth, the fault lies entirely with the politicians, with their incompetence in solving problems in time.
It will be something to follow in the future, but only for 2026, perhaps...
It is curious: a few years ago it was thought that Bitcoin would be the biggest threat to the fiat system, but it turns out the biggest threat to the fiat system is fiat itself (_stablecoins_). The irony of fate.
It is like a race. Bitcoin is the athlete who runs at his own pace, sometimes faster, sometimes slower, but never stops. Fiat is the athlete who goes all out from the gun, always running at top speed. But life, and the financial system, is not a 100-meter dash; it is a marathon.
# Europe
2025 will be a challenging year for all Europeans, above all because the MiCA regulation comes into force. They will start to feel the regulation on their own skin: _compliance_ problems will worsen, along with problems proving the origin of funds and other bureaucracies. It is going to be lovely.
The Travel Rule becomes mandatory, so Europeans will be required to do KYC on their transactions. The Travel Rule is supposedly a law to create more transparency, but in practice it is a law of control and monitoring, made to limit citizens' individual freedoms.
MiCA is also creating problems for euro _stablecoins_; for now, Tether has preferred to stay out of Europe. The most ridiculous part is that the new rules force issuers to hold 30% of their reserves in commercial banks. The European bureaucrats do not understand that this puts the stability and solvency of the banks themselves at risk, leaving them prone to bank runs.
MiCA will force every exchange to register on European soil, leaving them vulnerable to the bureaucrats' moods. Not in 2025 yet, but the EU will impose capital-control policies; it is inevitable. Exchanges will be forced to use euro _stablecoins_ exclusively, and the remaining _stablecoins_ will be delisted.
All these new MiCA rules are extremely restrictive. They exist not to give European citizens more security, but to give the state more control over the population. The EU is ever closer to autocracy than to democracy. My only hope on the horizon is that the success of crypto policies in the US will force the EU to back down and lighten the rules; game theory is relentless. But that retreat will never happen in 2025; it is going to be a long, troubled period.
# Recession
Markets are all at all-time highs; this is not sustainable for long, and I suspect some market correction will arrive by the end of 2025. The drop will only be contained because central banks will print money, lots of money, as if there were no tomorrow. They will once again solve problems by injecting liquidity into the economy, pushing the problems forward instead of solving them. The Cantillon effect all over again.
It will be a very challenging year politically, one in which politicians' roles will be fundamental. The political crises in France and Germany leave the EU orphaned, without a captain at the helm of the ship. 2025 will be conditioned by the German elections, above all by the AfD's result, which could call the EU itself and the euro into question.
Possibly only the end of the war could ease the crisis, and that is very unlikely to happen.
In Portugal, the economy seems more or less balanced, but some worrying signs are beginning to appear. Gambling is at all-time highs, beating the record set in 2014, at the height of the great crisis. Not a good sign; there may already be some desperation in the air.
Germany is Europe's engine: when it sneezes, Portugal catches a cold. On top of Germany's problem, Spain is also on the brink of a crisis, and these are the countries with the most influence on the Portuguese economy.
If a global recession comes, it will have a strong impact on tourism, which these days is Portugal's main engine.
# Brazil
Brazil is one to watch in 2025, above all at the macro and political levels. There is a real possibility of a deep crisis in Brazil, especially in its currency. The central bank is already burning through reserves to slow the Real's devaluation.
![image](https://image.nostr.build/eadb2156339881f2358e16fd4bb443c3f63d862f4e741dd8299c73f2b76e141d.jpg)
Without deep changes in fiscal policy, the reserves will run out. Capital-control policies are a plausible scenario; it will be interesting to watch how the government proceeds given the existence of Bitcoin and _stablecoins_. Adoption in Brazil is strong, so it will make a good _case study_, one that will certainly repeat in other countries in the near future.
The times ahead will not be easy for Brazilians, especially for those who hold no Bitcoin.
# Blockchain
In 2025 we may see BlackRock's first steps toward creating the first stock exchange that runs exclusively on a _blockchain_. I believe BlackRock will create its own _blockchain_, entirely under its control, hosting the RWAs, to compete with traditional stock exchanges. It will be something interesting to follow.
-----------
These are my predictions. I wrote this very much off the cuff, so I have certainly forgotten a few things; if they are important I will add them in the comments. Most of these predictions will only play out after 2025, but here is my opinion.
This is just my opinion. **Don't Trust, Verify**!
-
In the age of the great navigations, English pirates were authorized by the government to rob ships.
The only thing separating a common pirate from a privateer was that the latter held a "Letter of Marque", which worked as a license to steal: with it, the English government legitimized the privateers' plundering of ships. In exchange, of course, it demanded a share of the spoils.
Rather similar to the way the Receita Federal, Brazil's tax agency, operates, no? In fact the modern case is even worse, because the government keeps all the plundered wealth and passes on only a meager salary to the modern privateers, the tax agents.
Yet they "justify" this theft by calling it a tax, and that seems to calm the spirits of most of the population. But not ours.
It is no accident that in Portuguese "imposto" (tax) is the past participle of the verb "impor", to impose. It is, in other words, what results from compulsory rather than voluntary compliance by every citizen. If it were not imposed, nobody would pay. Not even its defenders. That shows how much people really appreciate the state's services.
Just look back a little in history: the first taxpayers were farmers whose lands were invaded by nomads herding their cattle. The nomadic invaders forced the farmers to pay them a slice of their income in exchange for "protection". Any farmer who did not agree was killed.
The nomads realized it was much more attractive and comfortable to simply charge a protection fee than to kill the farmer and take over his property. By charging a fee they got what they needed; if they killed the farmers, they would have had to manage the farm's entire production themselves.
From there they understood that, by not murdering every farmer they met along the way, they could turn this practice into a way of life.
Thus government was born.
Not murdering people was the first service government ever provided. How lucky we are to have this institution at our disposal!
It is curious, then, that some people say taxes are paid basically to prevent exactly what gave rise to government in the first place. Government was born of extortion. The farmers had to pay protection money to their government; otherwise, they were killed.
Who was the real threat? The government. The mafia does exactly the same thing.
But there is a way to protect yourself from these modern privateers. Today there is a form of private property that NOBODY can take from you; it remains yours even after death. We are talking, of course, about Bitcoin. With the right setup, it is impossible to know that you own bitcoin. Not even the US government can find out.
#brasil #bitcoinbrasil #nostrbrasil #grownostr #bitcoin
-
At the intersection of philosophy, theology, physics, biology, and finance lies a terrifying truth: the fiat monetary system, in its current form, is not just an economic framework but a silent, relentless force actively working against humanity's survival. It isn't simply a failed financial model—it is a systemic engine of destruction, both externally and within the very core of our biological existence.
### The Philosophical Void of Fiat
Philosophy has long questioned the nature of value and the meaning of human existence. From Socrates to Kant, thinkers have pondered the pursuit of truth, beauty, and virtue. But in the modern age, the fiat system has hijacked this discourse. The notion of "value" in a fiat world is no longer rooted in human potential or natural resources—it is abstracted, manipulated, and controlled by central authorities with the sole purpose of perpetuating their own power. The currency is not a reflection of society’s labor or resources; it is a representation of faith in an authority that, more often than not, breaks that faith with reckless monetary policies and hidden inflation.
The fiat system has created a kind of ontological nihilism, where the idea of true value, rooted in work, creativity, and family, is replaced with speculative gambling and short-term gains. This betrayal of human purpose at the systemic level feeds into a philosophical despair: the relentless devaluation of effort, the erosion of trust, and the abandonment of shared human values. In this nihilistic economy, purpose and meaning become increasingly difficult to find, leaving millions to question the very foundation of their existence.
### Theological Implications: Fiat and the Collapse of the Sacred
Religious traditions have long linked moral integrity with the stewardship of resources and the preservation of life. Fiat currency, however, corrupts these foundational beliefs. In the theological narrative of creation, humans are given dominion over the Earth, tasked with nurturing and protecting it for future generations. But the fiat system promotes the exact opposite: it commodifies everything—land, labor, and life—treating them as mere transactions on a ledger.
This disrespect for creation is an affront to the divine. In many theologies, creation is meant to be sustained, a delicate balance that mirrors the harmony of the divine order. Fiat systems—by continuously printing money and driving inflation—treat nature and humanity as expendable resources to be exploited for short-term gains, leading to environmental degradation and societal collapse. The creation narrative, in which humans are called to be stewards, is inverted. The fiat system, through its unholy alliance with unrestrained growth and unsustainable debt, is destroying the very creation it should protect.
Furthermore, the fiat system drives idolatry of power and wealth. The central banks and corporations that control the money supply have become modern-day gods, their decrees shaping the lives of billions, while the masses are enslaved by debt and inflation. This form of worship isn't overt, but it is profound. It leads to a world where people place their faith not in God or their families, but in the abstract promises of institutions that serve their own interests.
### Physics and the Infinite Growth Paradox
Physics teaches us that the universe is finite—resources, energy, and space are all limited. Yet, the fiat system operates under the delusion of infinite growth. Central banks print money without concern for natural limits, encouraging an economy that assumes unending expansion. This is not only an economic fallacy; it is a physical impossibility.
In thermodynamics, the Second Law states that entropy (disorder) increases over time in any isolated system. The fiat system operates as if the Earth were an infinite resource pool, perpetually able to expand without consequence. The real world, however, does not bend to these abstract concepts of infinite growth. Resources are finite, ecosystems are fragile, and human capacity is limited. Fiat currency, by promoting unsustainable consumption and growth, accelerates the depletion of resources and the degradation of natural systems that support life itself.
Even the financial “growth” driven by fiat policies leads to unsustainable bubbles—inflated stock markets, real estate, and speculative assets that burst and leave ruin in their wake. These crashes aren’t just economic—they have profound biological consequences. The cycles of boom and bust undermine communities, erode social stability, and increase anxiety and depression, all of which affect human health at a biological level.
### Biology: The Fiat System and the Destruction of Human Health
Biologically, the fiat system is a cancerous growth on human society. The constant chase for growth and the devaluation of work leads to chronic stress, which is one of the leading causes of disease in modern society. The strain of living in a system that values speculation over well-being results in a biological feedback loop: rising anxiety, poor mental health, physical diseases like cardiovascular disorders, and a shortening of lifespans.
Moreover, the focus on profit and short-term returns creates a biological disconnect between humans and the planet. The fiat system fuels industries that destroy ecosystems, increase pollution, and deplete resources at unsustainable rates. These actions are not just environmentally harmful; they directly harm human biology. The degradation of the environment—whether through toxic chemicals, pollution, or resource extraction—has profound biological effects on human health, causing respiratory diseases, cancers, and neurological disorders.
The biological cost of the fiat system is not a distant theory; it is being paid every day by millions in the form of increased health risks, diseases linked to stress, and the growing burden of mental health disorders. The constant uncertainty of an inflation-driven economy exacerbates these conditions, creating a society of individuals whose bodies and minds are under constant strain. We are witnessing a systemic biological unraveling, one in which the very act of living is increasingly fraught with pain, instability, and the looming threat of collapse.
### Finance as the Final Illusion
At the core of the fiat system is a fundamental illusion—that financial growth can occur without any real connection to tangible value. The abstraction of currency, the manipulation of interest rates, and the constant creation of new money hide the underlying truth: the system is built on nothing but faith. When that faith falters, the entire system collapses.
This illusion has become so deeply embedded that it now defines the human experience. Work no longer connects to production or creation—it is reduced to a transaction on a spreadsheet, a means to acquire more fiat currency in a world where value is ephemeral and increasingly disconnected from human reality.
As we pursue ever-expanding wealth, the fundamental truths of biology—interdependence, sustainability, and balance—are ignored. The fiat system’s abstract financial models serve to disconnect us from the basic realities of life: that we are part of an interconnected world where every action has a reaction, where resources are finite, and where human health, both mental and physical, depends on the stability of our environment and our social systems.
### The Ultimate Extermination
In the end, the fiat system is not just an economic issue; it is a biological, philosophical, theological, and existential threat to the very survival of humanity. It is a force that devalues human effort, encourages environmental destruction, fosters inequality, and creates pain at the core of the human biological condition. It is an economic framework that leads not to prosperity, but to extermination—not just of species, but of the very essence of human well-being.
To continue on this path is to accept the slow death of our species, one based not on natural forces, but on our own choice to worship the abstract over the real, the speculative over the tangible. The fiat system isn't just a threat; it is the ultimate self-inflicted wound, a cultural and financial cancer that, if left unchecked, will destroy humanity’s chance for survival and peace.
-
I’ll admit that I was wrong about Bitcoin. Perhaps in 2013. Definitely 2017. Probably in 2018-2019. And maybe even today.
Being wrong about Bitcoin is part of finally understanding it. It will test you, make you question everything, and in the words of BTC educator and privacy advocate [Matt Odell](https://twitter.com/ODELL), “Bitcoin will humble you”.
I’ve had my own stumbles on the way.
In a very public fashion in 2017, after years of using Bitcoin, trying to start a company with it, using it as my primary exchange vehicle between currencies, and generally being annoying about it at parties, I let out the bear.
In an article published in my own literary magazine *Devolution Review* in September 2017, I had a breaking point. The article was titled “[Going Bearish on Bitcoin: Cryptocurrencies are the tulip mania of the 21st century](https://www.devolutionreview.com/bearish-on-bitcoin/)”.
It was later republished in *Huffington Post* and across dozens of financial and crypto blogs at the time with another, more appropriate title: “[Bitcoin Has Become About The Payday, Not Its Potential](https://www.huffpost.com/archive/ca/entry/bitcoin-has-become-about-the-payday-not-its-potential_ca_5cd5025de4b07bc72973ec2d)”.
As I laid out, my newfound bearishness had little to do with the technology itself or the promise of Bitcoin, and more to do with the cynical industry forming around it:
> In the beginning, Bitcoin was something of a revolution to me. The digital currency represented everything from my rebellious youth.
>
> It was a decentralized, denationalized, and digital currency operating outside the traditional banking and governmental system. It used tools of cryptography and connected buyers and sellers across national borders at minimal transaction costs.
>
> …
>
> The 21st-century version (of Tulip mania) has welcomed a plethora of slick consultants, hazy schemes dressed up as investor possibilities, and too much wishy-washy language for anything to really make sense to anyone who wants to use a digital currency to make purchases.
While I called out Bitcoin by name at the time, on reflection, I was really talking about the ICO craze, the wishy-washy consultants, and the altcoin ponzis.
What I was articulating — without knowing it — was the frame of NgU, or “numbers go up”. Rather than advocating for Bitcoin because of its uncensorability, proof-of-work, or immutability, the common mentality among newbies and the dollar-obsessed was that Bitcoin mattered because its price was a rocket ship.
And because Bitcoin was gaining in price, affinity tokens and projects that were imperfect forks of Bitcoin took off as well.
The price alone, rather than its qualities, was the reason you'd hear Uber drivers, finance bros, or your gym buddy mention Bitcoin. As someone who came to Bitcoin for philosophical reasons, that just sat wrong with me.
Maybe I had too many projects thrown in my face, or maybe I was too frustrated with the UX of Bitcoin apps and sites at the time. No matter what, I’ve since learned something.
**I was at least somewhat wrong.**
My own journey began in early 2011. One of my favorite radio programs, Free Talk Live, began interviewing guests and having discussions on the potential of Bitcoin. They tied it directly to a libertarian vision of the world: free markets, free people, and free banking. That was me, and I was in. Bitcoin was at about $5 back then (NgU).
I followed every article I could, talked about it with guests [on my college radio show](https://libertyinexile.wordpress.com/2011/05/09/osamobama_on_the_tubes/), and became a devoted redditor on r/Bitcoin. At that time, at least to my knowledge, there was no possible way to buy Bitcoin where I was living. Very weak.
**I was probably wrong. And very wrong for not trying to acquire by mining or otherwise.**
The next year, after moving to Florida, Bitcoin was a heavy topic with a friend of mine who shared the same vision (and still does, according to the Celsius bankruptcy documents). We talked about it with passionate leftists at **Occupy Tampa** in 2012, all the while trying to explain the ills of Keynesian central banking, and figuring out how to use Coinbase.
I began writing more about Bitcoin in 2013, writing a guide on “[How to Avoid Bank Fees Using Bitcoin](http://thestatelessman.com/2013/06/03/using-bitcoin/),” discussing its [potential legalization in Germany](https://yael.ca/2013/10/01/lagefi-alternative-monetaire-et-legislation-de/), and interviewing Jeremy Hansen, [one of the first political candidates in the U.S. to accept Bitcoin donations](https://yael.ca/2013/12/09/bitcoin-politician-wants-to-upgrade-democracy-in/).
Even up until that point, I thought Bitcoin was an interesting protocol for sending and receiving money quickly, and converting it into fiat. The global connectedness of it, plus this cypherpunk mentality divorced from government control was both useful and attractive. I thought it was the perfect go-between.
**But I was wrong.**
When I gave my [first public speech](https://www.youtube.com/watch?v=CtVypq2f0G4) on Bitcoin in Vienna, Austria in December 2013, I had grown obsessed with Bitcoin’s adoption on dark net markets like Silk Road.
My theory at the time was that the number and price were irrelevant. The tech was interesting, and a novel attempt. It was unlike anything before. But what was happening on the dark net markets, which I viewed as the true free market powered by Bitcoin, was even more interesting. I thought these markets would grow exponentially and anonymous commerce via BTC would become the norm.
The price was irrelevant; it was all about buying and selling goods without permission or license.
**Now I understand I was wrong.**
Just because Bitcoin was this revolutionary technology that embraced pseudonymity did not mean that all commerce would decentralize as well. It did not mean that anonymous markets were intended to be the most powerful layer in the Bitcoin stack.
What I did not even anticipate is something articulated very well by noted Bitcoin OG [Pierre Rochard](https://twitter.com/BitcoinPierre): [Bitcoin as a *savings technology*](https://www.youtube.com/watch?v=BavRqEoaxjI).
The ability to maintain long-term savings, practice self-discipline while stacking sats, and embrace a low time preference was just not something on the mind of the Bitcoiners I knew at the time.
Perhaps I was reading into the hype while outwardly opposing it. Or perhaps I wasn’t humble enough to understand the true value proposition that many of us have learned years later.
In the years that followed, I bought and sold more times than I can count, and I did everything to integrate it into passion projects. I tried to set up a company using Bitcoin while at my university in Prague.
My business model depended on university students being technologically advanced enough to have a mobile wallet, own their keys, and be able to make transactions on a consistent basis. Even though I was surrounded by philosophically aligned people, those willing to take the next step and actually put Bitcoin into practice were sparse.
This is what led me to proclaim that “[Technological Literacy is Doomed](https://www.huffpost.com/archive/ca/entry/technological-literacy-is-doomed_b_12669440)” in 2016.
**And I was wrong again.**
Indeed, since that time, the UX of Bitcoin-only applications, wallets, and supporting tech has vastly improved and onboarded millions more people than anyone thought possible. The entrepreneurship, coding excellence, and vision offered by Bitcoiners of all stripes have renewed a sense in me that this project is something built for us all — friends and enemies alike.
While many of us were likely distracted by flashy and pumpy altcoins over the years (me too, champs), most of us have returned to the Bitcoin stable.
Fast forward to today, there are entire ecosystems of creators, activists, and developers who are wholly reliant on the magic of Bitcoin’s protocol for their life and livelihood. The options are endless. The FUD is still present, but real proof of work stands powerfully against those forces.
In addition, there are now [dozens of ways to use Bitcoin privately](https://fixthemoney.substack.com/p/not-your-keys-not-your-coins-claiming) — still without custodians or intermediaries — that make it one of the most important assets for global humanity, especially in dictatorships.
This is all toward a positive arc of innovation, freedom, and pure independence. Did I see that coming? Absolutely not.
Of course, there are probably other shots you’ve missed on Bitcoin. Price predictions (ouch), the short-term inflation hedge, or the amount of institutional investment. While all of these may be erroneous predictions in the short term, we have to realize that Bitcoin is a long arc. It will outlive all of us on the planet, and it will continue in its present form for the next generation.
**Being wrong about the evolution of Bitcoin is no fault, and is indeed part of the learning curve to finally understanding it all.**
When your family or friends ask you about Bitcoin after your endless sessions explaining market dynamics, nodes, how mining works, and the genius of cryptographic signatures, try to accept that there is still so much we have to learn about this decentralized digital cash.
There are still some things you’ve gotten wrong about Bitcoin, and plenty more you’ll underestimate or get wrong in the future. That’s what makes it a beautiful journey. It’s a long road, but one that remains worth it.
-
Today I learned how to install [NVapi](https://github.com/sammcj/NVApi) to monitor my GPUs in Home Assistant.
![](https://image.nostr.build/82b86710ef613f285452f4bb6e2a30a16e722db04ec297279c5b476e0c13d9f4.png)
**NVApi** is a lightweight API designed for monitoring NVIDIA GPU utilization and enabling automated power management. It provides real-time GPU metrics, supports integration with tools like Home Assistant, and offers flexible power management and PCIe link speed management based on workload and thermal conditions.
- **GPU Utilization Monitoring**: Utilization, memory usage, temperature, fan speed, and power consumption.
- **Automated Power Limiting**: Adjusts power limits dynamically based on temperature thresholds and total power caps, configurable per GPU or globally.
- **Cross-GPU Coordination**: Total power budget applies across multiple GPUs in the same system.
- **PCIe Link Speed Management**: Controls minimum and maximum PCIe link speeds with idle thresholds for power optimization.
- **Home Assistant Integration**: Uses the built-in RESTful platform and template sensors.
## Getting the Data
```
# Install Go, fetch NVApi, and run it on port 9999 with a 1-second update rate
sudo apt install golang-go
git clone https://github.com/sammcj/NVApi.git
cd NVApi
go run main.go -port 9999 -rate 1
# In a second terminal, query the endpoint:
curl http://localhost:9999/gpu
```
Response for a single GPU:
```
[
  {
    "index": 0,
    "name": "NVIDIA GeForce RTX 4090",
    "gpu_utilisation": 0,
    "memory_utilisation": 0,
    "power_watts": 16,
    "power_limit_watts": 450,
    "memory_total_gb": 23.99,
    "memory_used_gb": 0.46,
    "memory_free_gb": 23.52,
    "memory_usage_percent": 2,
    "temperature": 38,
    "processes": [],
    "pcie_link_state": "not managed"
  }
]
```
Response for multiple GPUs:
```
[
  {
    "index": 0,
    "name": "NVIDIA GeForce RTX 3090",
    "gpu_utilisation": 0,
    "memory_utilisation": 0,
    "power_watts": 14,
    "power_limit_watts": 350,
    "memory_total_gb": 24,
    "memory_used_gb": 0.43,
    "memory_free_gb": 23.57,
    "memory_usage_percent": 2,
    "temperature": 36,
    "processes": [],
    "pcie_link_state": "not managed"
  },
  {
    "index": 1,
    "name": "NVIDIA RTX A4000",
    "gpu_utilisation": 0,
    "memory_utilisation": 0,
    "power_watts": 10,
    "power_limit_watts": 140,
    "memory_total_gb": 15.99,
    "memory_used_gb": 0.56,
    "memory_free_gb": 15.43,
    "memory_usage_percent": 3,
    "temperature": 41,
    "processes": [],
    "pcie_link_state": "not managed"
  }
]
```
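Since the endpoint returns plain JSON, it is easy to sanity-check from the shell before wiring anything into Home Assistant. A quick sketch that prints one summary line per GPU (assumes `jq` is installed):
```
# Print index, name, power draw, and temperature for each GPU (requires jq)
curl -s http://localhost:9999/gpu | \
  jq -r '.[] | "GPU \(.index): \(.name), \(.power_watts) W, \(.temperature) °C"'
```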
## Start at Boot
Create `/etc/systemd/system/nvapi.service`:
```
[Unit]
Description=Run NVapi
After=network.target
[Service]
Type=simple
Environment="GOPATH=/home/ansible/go"
WorkingDirectory=/home/ansible/NVapi
ExecStart=/usr/bin/go run main.go -port 9999 -rate 1
Restart=always
User=ansible
# Environment="GPU_TEMP_CHECK_INTERVAL=5"
# Environment="GPU_TOTAL_POWER_CAP=400"
# Environment="GPU_0_LOW_TEMP=40"
# Environment="GPU_0_MEDIUM_TEMP=70"
# Environment="GPU_0_LOW_TEMP_LIMIT=135"
# Environment="GPU_0_MEDIUM_TEMP_LIMIT=120"
# Environment="GPU_0_HIGH_TEMP_LIMIT=100"
# Environment="GPU_1_LOW_TEMP=45"
# Environment="GPU_1_MEDIUM_TEMP=75"
# Environment="GPU_1_LOW_TEMP_LIMIT=140"
# Environment="GPU_1_MEDIUM_TEMP_LIMIT=125"
# Environment="GPU_1_HIGH_TEMP_LIMIT=110"
[Install]
WantedBy=multi-user.target
```
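With the unit file saved, reload systemd and enable the service so it starts now and on every boot:
```
sudo systemctl daemon-reload
sudo systemctl enable --now nvapi
# Verify the service is running and serving metrics
systemctl status nvapi
curl http://localhost:9999/gpu
```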
## Home Assistant
Add to Home Assistant `configuration.yaml` and restart HA (completely).
For a single GPU, this works:
```
sensor:
  - platform: rest
    name: MYPC GPU Information
    # NVApi serves its metrics at /gpu (see the curl example above)
    resource: http://mypc:9999/gpu
    method: GET
    headers:
      Content-Type: application/json
    value_template: "{{ value_json[0].index }}"
    json_attributes:
      - name
      - gpu_utilisation
      - memory_utilisation
      - power_watts
      - power_limit_watts
      - memory_total_gb
      - memory_used_gb
      - memory_free_gb
      - memory_usage_percent
      - temperature
    scan_interval: 1 # seconds
  - platform: template
    sensors:
      mypc_gpu_0_gpu:
        # friendly_name does not accept templates; friendly_name_template does
        friendly_name_template: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} GPU"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'gpu_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_memory:
        friendly_name_template: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Memory"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'memory_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_power:
        friendly_name_template: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Power"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'power_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_power_limit:
        friendly_name_template: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Power Limit"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'power_limit_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_temperature:
        friendly_name_template: "MYPC {{ state_attr('sensor.mypc_gpu_information', 'name') }} Temperature"
        value_template: "{{ state_attr('sensor.mypc_gpu_information', 'temperature') }}"
        unit_of_measurement: "°C"
```
For multiple GPUs:
```
rest:
  scan_interval: 1
  # Same NVApi endpoint as above
  resource: http://mypc:9999/gpu
  sensor:
    - name: "MYPC GPU0 Information"
      value_template: "{{ value_json[0].index }}"
      json_attributes_path: "$.0"
      json_attributes:
        - name
        - gpu_utilisation
        - memory_utilisation
        - power_watts
        - power_limit_watts
        - memory_total_gb
        - memory_used_gb
        - memory_free_gb
        - memory_usage_percent
        - temperature
    - name: "MYPC GPU1 Information"
      value_template: "{{ value_json[1].index }}"
      json_attributes_path: "$.1"
      json_attributes:
        - name
        - gpu_utilisation
        - memory_utilisation
        - power_watts
        - power_limit_watts
        - memory_total_gb
        - memory_used_gb
        - memory_free_gb
        - memory_usage_percent
        - temperature

# The template platforms live under the top-level sensor: key
sensor:
  - platform: template
    sensors:
      mypc_gpu_0_gpu:
        friendly_name: "MYPC GPU0 GPU"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'gpu_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_memory:
        friendly_name: "MYPC GPU0 Memory"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'memory_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_0_power:
        friendly_name: "MYPC GPU0 Power"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'power_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_power_limit:
        friendly_name: "MYPC GPU0 Power Limit"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'power_limit_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_0_temperature:
        friendly_name: "MYPC GPU0 Temperature"
        value_template: "{{ state_attr('sensor.mypc_gpu0_information', 'temperature') }}"
        unit_of_measurement: "°C"
  - platform: template
    sensors:
      mypc_gpu_1_gpu:
        friendly_name: "MYPC GPU1 GPU"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'gpu_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_1_memory:
        friendly_name: "MYPC GPU1 Memory"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'memory_utilisation') }}"
        unit_of_measurement: "%"
      mypc_gpu_1_power:
        friendly_name: "MYPC GPU1 Power"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'power_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_1_power_limit:
        friendly_name: "MYPC GPU1 Power Limit"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'power_limit_watts') }}"
        unit_of_measurement: "W"
      mypc_gpu_1_temperature:
        friendly_name: "MYPC GPU1 Temperature"
        value_template: "{{ state_attr('sensor.mypc_gpu1_information', 'temperature') }}"
        unit_of_measurement: "°C"
```
Basic entity card:
```
type: entities
entities:
  - entity: sensor.mypc_gpu_0_gpu
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_memory
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_power
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_power_limit
    secondary_info: last-updated
  - entity: sensor.mypc_gpu_0_temperature
    secondary_info: last-updated
```
## Ansible Role
```
---
- name: install go
  become: true
  package:
    name: golang-go
    state: present

- name: git clone
  git:
    repo: "https://github.com/sammcj/NVApi.git"
    dest: "/home/ansible/NVapi"
    update: yes
    force: true

# go run main.go -port 9999 -rate 1
- name: install systemd service
  become: true
  copy:
    src: nvapi.service
    dest: /etc/systemd/system/nvapi.service

- name: Reload systemd daemons, enable, and restart nvapi
  become: true
  systemd:
    name: nvapi
    daemon_reload: yes
    enabled: yes
    state: restarted
```
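To apply it, reference the role from a playbook (for example under `roles: [nvapi]`) and run it against your GPU hosts. A minimal sketch, where `site.yml`, `inventory.ini`, and the `gpu` group are all hypothetical names:
```
# Apply the playbook that includes the nvapi role to the gpu group
ansible-playbook -i inventory.ini site.yml --limit gpu
```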
-
What would it mean to treat AI as a tool instead of a person?
Since the launch of ChatGPT, explorations in two directions have picked up speed.
The first direction concerns technical capabilities. How large a model can we train? How well can it answer SAT questions? How efficiently can we serve it?
The second direction concerns interaction design. How do we communicate with a model? How can we use it for useful work? What metaphor do we use to reason about it?
The first direction is widely pursued and enormously funded, and for good reason: progress in technical capabilities underpins every possible application. But the second is just as crucial to the field, and it holds enormous unknowns. We are only a few years into the era of large models. What are the odds that we have already figured out the best ways to use them?
I propose a new mode of interaction, in which models play the role of computer applications (like phone apps, for example): providing a graphical interface, interpreting user input, and updating their state. In this mode, instead of being an "agent" that uses a computer on the human's behalf, the AI can provide a richer and more powerful computing environment for us to use.
### Metaphors for interaction
At the heart of an interaction is a metaphor that guides a user's expectations of a system. The early days of computing took metaphors like "desktops", "typewriters", "spreadsheets", and "letters" and turned them into digital equivalents, letting the user reason about their behavior. You can leave something on your desk and come back for it; you need an address to send a letter. As we developed cultural knowledge of these devices, the need for those particular metaphors faded, and with them the skeuomorphic interface designs that reinforced them. Like a trash can or a pencil, a computer is now a metaphor for itself.
The dominant metaphor for large models today is model-as-person. This is an effective metaphor because people have extensive capabilities that we know intuitively. It implies that we can have a conversation with a model and ask it questions; that the model can collaborate with us on a document or a piece of code; that we can give it a task to carry out on its own, and that it will come back when it is done.
However, treating a model as a person profoundly limits how we think about interacting with it. Human interactions are inherently slow and linear, limited by bandwidth and by the turn-taking nature of verbal communication. As we have all experienced, communicating complex ideas in conversation is hard and lossy. When we want precision, we turn to tools instead, using direct manipulation and high-bandwidth visual interfaces to make diagrams, write code, and design CAD models. Because we conceive of models as people, we use them through slow conversations, even though they are perfectly capable of accepting fast, direct input and producing visual results. The metaphors we use constrain the experiences we build, and the model-as-person metaphor is keeping us from exploring the full potential of large models.
For many use cases, and especially for productive work, I believe the future lies in another metaphor: model-as-computer.
### Using an AI like a computer
Under the model-as-computer metaphor, we will interact with large models following the intuitions we have about computer applications (whether on desktop, tablet, or phone). Note that this does not mean the model will be a traditional app any more than the Windows desktop was a literal desk. "Computer application" will be a way for the model to represent itself to us. Instead of acting like a person, the model will act like a computer.
Acting like a computer means producing a graphical interface. In place of the linear, teletype-style stream of text that ChatGPT provides, a model-as-computer system will generate something resembling the interface of a modern application: buttons, sliders, tabs, images, plots, and all the rest. This addresses key limitations of the standard model-as-person chat interface:
- **Discoverability.** A good tool suggests its own uses. When the only interface is an empty text box, it falls to the user to figure out what to do and to understand the system's limits. The Edit sidebar in Lightroom is a great way to learn photo editing because it doesn't just tell you what this application can do with a photo, but what you might want to do. In the same way, a model-as-computer interface for DALL-E could surface new possibilities for your image generations.
- **Efficiency.** Direct manipulation is faster than writing out a request in words. To continue the Lightroom example, it would be unthinkable to edit a photo by telling another person which sliders to move and by how much. It would take a whole day to ask for slightly lower exposure and slightly higher vibrance, just to see how it looks. In the model-as-computer metaphor, the model can build tools that let you communicate what you want more efficiently, and therefore get things done faster.
Unlike a traditional app, this graphical interface is generated by the model on demand. This means that every part of the interface you see is relevant to what you are doing right now, including the specific contents of your work. It also means that if you want a broader or different interface, you can simply ask for it. You could ask DALL-E to produce some editable presets for its settings, inspired by famous sketch artists. When you click the Leonardo da Vinci preset, it sets the sliders for highly detailed perspective drawings in black ink. If you click Charles Schulz, it selects low-detail 2D technicolor comics.
### A protean bicycle for the mind
The model-as-person metaphor has a curious tendency to create distance between user and model, mirroring the communication gap between two people, which can be narrowed but never fully closed. Because communicating in words is difficult and costly, people tend to split tasks among themselves into chunks that are large and as independent as possible. Model-as-person interfaces follow this pattern: it is not worth telling a model to add a return statement to your function when typing it yourself is faster. With that communication overhead, model-as-person systems are most useful when they can do an entire chunk of work on their own. They do things for you.
This contrasts with how we interact with computers and other tools. Tools produce real-time visual feedback and are controlled through direct manipulation. They have such low communication overhead that there is no need to carve out an independent chunk of work. It makes more sense to keep the human in the loop and direct the tool moment by moment. Like seven-league boots, tools let you travel further with each step, but you are still the one doing the work. They let you do things faster.
Consider the task of building a website using a large model. With today's interfaces, you might treat the model as a contractor or collaborator. You would try to put into words as much as possible about how you want the site to look, what you want it to say, and what features you want it to have. The model would generate a first draft, you would run it, and then you would give feedback. "Make the logo a bit bigger," you would say, and "center that first hero image," and "there should be a login button in the header." To get exactly what you want, you will send a very long list of ever more minute requests.
An alternative model-as-computer interaction would be different: instead of building the website, the model would generate an interface for you to build it, where every user input to that interface queries the large model under the hood. Perhaps when you describe your needs it would create an interface with a sidebar and a preview window. At first the sidebar contains only a few layout sketches you can choose as a starting point. You can click each one, and the model writes the HTML for a web page using that layout and displays it in the preview window. Now that you have a page to work on, the sidebar gains additional options that affect the page globally, such as font pairings and color schemes. The preview acts as a WYSIWYG editor, letting you grab elements, move them, edit their contents, and so on. Supporting all of this is the model, which watches these user actions and rewrites the page to match the changes made. Because the model can generate an interface to help the two of you communicate more efficiently, you can exercise more control over the final product in less time.
The model-as-computer metaphor encourages us to think of the model as a tool to interact with in real time rather than a collaborator to hand tasks to. Instead of replacing an intern or a tutor, it can be a kind of protean bicycle for the mind, one that is always custom-built exactly for you and for the terrain you intend to cross.
### A new paradigm for computing?
Models that can generate interfaces on demand are an entirely new frontier in computing. They may be a whole new paradigm, given how they short-circuit the existing application model. Giving end users the power to create and modify apps on the fly fundamentally changes how we interact with computers. In place of a single static application built by a developer, a model will generate an application tailored to the user and their immediate needs. In place of business logic implemented in code, the model will interpret the user's inputs and update the user interface. It is even possible that this kind of generative interface will replace the operating system entirely, generating and managing interfaces and windows on the fly as needed.
At first, generative interfaces will be a toy, useful only for creative exploration and a few niche applications. After all, nobody would want an email app that occasionally sends emails to your ex and lies about your inbox. But the models will gradually improve. Even as they push further into the space of entirely new experiences, they will slowly become reliable enough to use for real work.
Little pieces of this future already exist. Years ago Jonas Degrave showed that ChatGPT could do a decent simulation of a Linux command line. In the same way, websim.ai uses an LLM to generate websites on demand as you browse them. Oasis, GameNGen, and DIAMOND train action-conditioned video models on individual video games, letting you play, say, Doom inside a large model. And Genie 2 generates playable video games from text prompts. Generative interfaces may still sound like a crazy idea, but they are not that crazy.
There are enormous open questions about what all of this will look like. Where will generative interfaces first be useful? How will we share and distribute the experiences we create in collaboration with the model, if they exist only as the context of a large model? Would we even want to? What new kinds of experiences will be possible? How will any of this work in practice? Will models generate interfaces as code, or render raw pixels directly?
I don't know the answers yet. We will have to experiment and find out!
Translated from:\
https://willwhitney.com/computing-inside-ai.html
-
I watched Tucker Carlson interview Roger Ver last night.
I know we have our differences with Roger, and he has some less than pleasant personality traits, but he is facing 109 years in jail for tax evasion. While the charges may be technically correct, he should be able to pay the taxes and a fine and walk free. Even if we accept he did wrong, a minor prison term such as 6 months to 2 years would be appropriate in this case.
We all know the severe penalty is an overreach by US authorities looking to scare the whole crypto community away from using any form of crypto as money.
The US and many other governments know they have lost the battle over Bitcoin as a hard asset, but that loss came about through a Nash equilibrium: you are forced to play a game that doesn't benefit you, because not playing it disadvantages you even more. Governments lose control of the asset, but the asset can shore up their balance sheets and (potentially) keep their economies from failing.
The war against Bitcoin (and other cryptos) as a currency, where you can use your Bitcoin to buy anything anywhere, from a pint of milk at the local shop to a house or a car and everything in between, concerns a more distant goal, one that is arriving slowly. But it is arriving, and these are the new battle lines.
Part of that battle is self-custody, part is tax, and part is the money-transmission laws.
Roger’s case is also being used as a weapon of fear.
I don't hate Roger. The problem I have with Bitcoin Cash is that you cannot run a full node from your home, and if you can't do that, running the blockchain is left to large corporations. Large corporations are much easier to control and coerce than thousands, perhaps millions, of individuals. Just as China banned Bitcoin mining, in this scenario governments could ban full nodes and enforce the ban by shutting down any company that attempted to run one.
Also, if a currency like Bitcoin Cash scaled to Visa size, then Bitcoin Cash the company would become the new Visa/Mastercard; only the technology would change. Yet even Visa and Mastercard don't keep transaction logs for years: that would require an enormous amount of storage for little benefit. Nobody needs a global ledger that records every coffee purchased in every coffee shop since the beginning of blockchain time.
This is why Bitcoin with a layer-2 payment system like Lightning is a better proposition than large-blockchain cryptos. Once a payment channel is closed, its transactions are forgotten, in the same way that Visa and Mastercard only keep transaction history for a year or two.
This preserves the freedom of anybody, anywhere, to verify the money they hold and the transactions they perform, along with everybody else. We have consensus by verification.
-
Resilience is the ability to withstand shocks, adapt, and bounce back. It’s an essential quality in nature and in life. But what if we could take resilience a step further? What if, instead of merely surviving, a system could improve when faced with stress? This concept, known as anti-fragility, is not just theoretical—it’s practical. Combining two highly resilient natural tools, comfrey and biochar, reveals how we can create systems that thrive under pressure and grow stronger with each challenge.
### **Comfrey: Nature’s Champion of Resilience**
Comfrey is a plant that refuses to fail. Once its deep roots take hold, it thrives in poor soils, withstands drought, and regenerates even after being cut down repeatedly. It’s a hardy survivor, but comfrey doesn’t just endure—it contributes. Known as a dynamic accumulator, it mines nutrients from deep within the earth and brings them to the surface, making them available for other plants.
Beyond its ecological role, comfrey has centuries of medicinal use, earning the nickname "knitbone." Its leaves can heal wounds and restore health, a perfect metaphor for resilience. But as impressive as comfrey is, its true potential is unlocked when paired with another resilient force: biochar.
### **Biochar: The Silent Powerhouse of Soil Regeneration**
Biochar, a carbon-rich material made by burning organic matter in low-oxygen conditions, is a game-changer for soil health. Its unique porous structure retains water, holds nutrients, and provides a haven for beneficial microbes. Soil enriched with biochar becomes drought-resistant, nutrient-rich, and biologically active—qualities that scream resilience.
Historically, ancient civilizations in the Amazon used biochar to transform barren soils into fertile agricultural hubs. Known as *terra preta*, these soils remain productive centuries later, highlighting biochar’s remarkable staying power.
Yet, like comfrey, biochar’s potential is magnified when it’s part of a larger system.
### **The Synergy: Comfrey and Biochar Together**
Resilience turns into anti-fragility when systems go beyond mere survival and start improving under stress. Combining comfrey and biochar achieves exactly that.
1. **Nutrient Cycling and Retention**\
Comfrey’s leaves, rich in nitrogen, potassium, and phosphorus, make an excellent mulch when cut and dropped onto the soil. However, these nutrients can wash away in heavy rains. Enter biochar. Its porous structure locks in the nutrients from comfrey, preventing runoff and keeping them available for plants. Together, they create a system that not only recycles nutrients but amplifies their effectiveness.
2. **Water Management**\
Biochar holds onto water, making soil not just drought-resistant but actively water-efficient, improving over time with each rain and dry spell.
3. **Microbial Ecosystems**\
Comfrey enriches soil with organic matter, feeding microbial life. Biochar provides a home for these microbes, protecting them and creating a stable environment for them to multiply. Together, they build a thriving soil ecosystem that becomes more fertile and resilient with each passing season.
Resilient systems can withstand shocks, but anti-fragile systems actively use those shocks to grow stronger. Comfrey and biochar together form an anti-fragile system. Each addition of biochar enhances water and nutrient retention, while comfrey regenerates biomass and enriches the soil. Over time, the system becomes more productive, less dependent on external inputs, and better equipped to handle challenges.
This synergy demonstrates the power of designing systems that don’t just survive—they thrive.
### **Lessons Beyond the Soil**
The partnership of comfrey and biochar offers a valuable lesson for our own lives. Resilience is an admirable trait, but anti-fragility takes us further. By combining complementary strengths and leveraging stress as an opportunity, we can create systems—whether in soil, business, or society—that improve under pressure.
Nature shows us that resilience isn’t the end goal. When we pair resilient tools like comfrey and biochar, we unlock a system that evolves, regenerates, and becomes anti-fragile. By designing with anti-fragility in mind, we don’t just bounce back, we bounce forward.
-
I started a long series of articles about how to model different types of knowledge graphs in the relational model, which makes on-device memory models for AI agents possible:
- We modeled directed graphs
- We modeled graphs of entities
- We even modeled hypergraphs
Last time, we discussed why classical triple and simple knowledge graphs are insufficient for AI agents and complex memory, especially in the domain of time-aware or multi-modal knowledge.
So why do we need metagraphs, and what kinds of challenges could they help us solve?
- complex and nested events, temporal context, and temporal relations as edges
- multi-modal and multilingual knowledge
- human-like memory for AI agents, with multiple contexts and relations between pieces of knowledge in neuron-like networks
## MetaGraphs
A metagraph is a concept that extends the idea of a graph by allowing edges to become graphs themselves. Meta-edges connect sets of nodes, which can themselves be subgraphs. So, at some level, nodes and edges have quite similar properties but act in different roles in different contexts.
Also, in some cases, edges can be referenced as nodes.
This approach enables the representation of more complex relationships and hierarchies than a traditional graph structure allows. Let's break down each term to better understand metagraphs and how they differ from hypergraphs and plain graphs.
## Graph Basics
- A standard **graph** has a set of **nodes** (or vertices) and **edges** (connections between nodes).
- Edges are generally simple and typically represent a binary relationship between two nodes.
- For instance, an edge in a social network graph might indicate a “friend” relationship between two people (nodes).
## Hypergraph
- A **hypergraph** extends the concept of an edge by allowing it to connect any number of nodes, not just two.
- Each connection, called a **hyperedge**, can link multiple nodes.
- This feature allows hypergraphs to model more complex relationships involving multiple entities simultaneously. For example, a hyperedge in a hypergraph could represent a project team, connecting all team members in a single relation.
- Despite its flexibility, a hypergraph doesn’t capture hierarchical or nested structures; it only generalizes the number of connections in an edge.
## Metagraph
- A **metagraph** allows the edges to be graphs themselves. This means each edge can contain its own nodes and edges, creating nested, hierarchical structures.
- In a metagraph, an edge could represent a relationship that is itself defined by a graph. For instance, a metagraph could represent a network of organizations where each organization's structure (departments and their connections) is represented by its own internal graph and treated as an edge in the larger metagraph.
- This recursive structure allows metagraphs to model complex data with multiple layers of abstraction. They can capture multi-node relationships (as in hypergraphs) and detailed, structured information about each relationship.
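To make the three structures concrete, here is a minimal Python sketch; the dataclasses and the example data are my own illustration, not a canonical representation.

```python
from dataclasses import dataclass, field
from typing import Optional

# Plain graph edge: a binary "friend" relation between exactly two nodes.
plain_edge = ("alice", "bob")

# Hyperedge: a single relation connecting a whole project team at once.
hyperedge = {"label": "project-team", "members": {"alice", "bob", "carol"}}

@dataclass
class Graph:
    nodes: set = field(default_factory=set)
    edges: list = field(default_factory=list)

@dataclass
class MetaEdge:
    endpoints: set                 # the nodes (or subgraphs) being related
    inner: Optional[Graph] = None  # the edge's own internal graph structure

# A relationship between two organizations whose internal detail
# (departments and their connections) is itself a graph.
partnership = MetaEdge(
    endpoints={"org_a", "org_b"},
    inner=Graph(nodes={"legal", "liaison"}, edges=[("liaison", "legal")]),
)
```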
## Named Graphs and Graph of Graphs
As you can see, the structure of a metagraph is quite complex and can be difficult to model in relational and classical RDF setups. You may run into a lack of tools and software solutions for your problem.
If you need to model nested graphs, you can use the much simpler model of named graphs, which can take you quite far.
![](https://miro.medium.com/v2/resize:fit:1400/1*t2TLvy8pYmmUnLJUUwwvDQ.png)
The concept of the named graph came from the RDF community, which needed to group some sets of triples. In this way, you form subgraphs inside an existing graph. You could refer to the subgraph as a regular node. This setup simplifies complex graphs, introduces hierarchies, and even adds features and properties of hypergraphs while keeping a directed nature.
It looks complex, but it is not so hard to model with a slight modification of a directed graph.
So a node can host a graph inside it. Let's reflect this fact with a location attribute on the node: if a node belongs to the main graph, we can set its location to null, or introduce a dedicated main node; it's up to you.
![](https://miro.medium.com/v2/resize:fit:1088/1*agDR_q80JJfxjGyj1bFBqg.png)
Nodes can have edges to nodes in different subgraphs. This structure allows any kind of graph nesting, and edges stay location-free.
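As a rough illustration of this relational layout, here is a sketch using Python's built-in sqlite3 module; the table and column names (`nodes`, `edges`, `location`) are assumptions for the example, not a prescribed schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE nodes (
    id       INTEGER PRIMARY KEY,
    label    TEXT,
    location INTEGER REFERENCES nodes(id)  -- NULL means "in the main graph"
);
CREATE TABLE edges (
    id        INTEGER PRIMARY KEY,
    from_node INTEGER NOT NULL REFERENCES nodes(id),
    to_node   INTEGER NOT NULL REFERENCES nodes(id)
);
""")

# A node that acts as a named subgraph in the main graph...
conn.execute("INSERT INTO nodes VALUES (1, 'berlin-data', NULL)")
# ...and two nodes nested inside that subgraph.
conn.execute("INSERT INTO nodes VALUES (2, 'Berlin', 1)")
conn.execute("INSERT INTO nodes VALUES (3, 'population', 1)")
# Edges stay location-free and may cross subgraph boundaries.
conn.execute("INSERT INTO edges VALUES (1, 2, 3)")
```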
## Meta Graphs in Relational Model
Let's make several attempts to model different metagraphs under various constraints.
## Directed Metagraph where edges are not used as nodes and could not contain subgraphs
![](https://miro.medium.com/v2/resize:fit:1400/1*xAVf4LeuMHhXynqrwfkNWA.png)
In this case, an edge always points to two sets of nodes. This introduces the overhead of creating a node set even for a single node. The model also admits empty node sets, which may require application-level constraints to prevent.
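A schema along these lines might look like the following sketch (again with illustrative names); note that nothing in the schema itself prevents empty node sets, which matches the application-level caveat above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE nodes     (id INTEGER PRIMARY KEY, label TEXT);
CREATE TABLE node_sets (id INTEGER PRIMARY KEY);
CREATE TABLE node_set_members (
    set_id  INTEGER NOT NULL REFERENCES node_sets(id),
    node_id INTEGER NOT NULL REFERENCES nodes(id),
    PRIMARY KEY (set_id, node_id)
);
-- Every edge points to exactly two node sets, even for single endpoints.
CREATE TABLE edges (
    id      INTEGER PRIMARY KEY,
    in_set  INTEGER NOT NULL REFERENCES node_sets(id),
    out_set INTEGER NOT NULL REFERENCES node_sets(id)
);
""")
# Nothing above forbids an empty node set; as noted, that invariant
# would have to be enforced at the application level.
```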
## Directed Metagraph where edges are not used as nodes and could contain subgraphs
![](https://miro.medium.com/v2/resize:fit:1400/1*Ra5_LtYGlbTidGn3w8gYEg.png)
Adding a node set that models a subgraph located in an edge is easy, but it should be kept separate from the in-vertex and out-vertex sets.
I also do not see a direct need to include subgraphs in a node, as we could just use a node set interchangeably, but the case may still arise.
## Directed Metagraph where edges are used as nodes and could contain subgraphs
As you can see, we operate on node sets all the time. We could simply extend node sets to element sets that include both node and edge IDs, but then we need UUIDs or some other strategy to differentiate node IDs from edge IDs. Otherwise we get collisions between ephemeral edges and ephemeral nodes whenever we want to change an element's role from node to edge or vice versa.
![](https://miro.medium.com/v2/resize:fit:1400/1*1jggQlCU-aYO_wOb2q6EXA.png)
A full-scale metagraph model is way too complex for a relational database.
So we need a better model.
Now we have more flexibility, but we lose structural constraints. We cannot express that an element must have one in-vertex, one out-vertex, or both; that type of constraint moves to the application level. The other crucial question is about query and retrieval needs.
Any metagraph model should be shaped by the domain and its needs rather than used in this raw form; we built this one for purely theoretical purposes.
-
Hey folks! Today, let’s dive into the intriguing world of neurosymbolic approaches, retrieval-augmented generation (RAG), and personal knowledge graphs (PKGs). Together, these concepts hold much potential for bringing true reasoning capabilities to large language models (LLMs). So, let’s break down how symbolic logic, knowledge graphs, and modern AI can come together to empower future AI systems to reason like humans.
## The Neurosymbolic Approach: What It Means
Neurosymbolic AI combines two historically separate streams of artificial intelligence: symbolic reasoning and neural networks. Symbolic AI uses formal logic to process knowledge, similar to how we might solve problems or deduce information. On the other hand, neural networks, like those underlying GPT-4, focus on learning patterns from vast amounts of data — they are probabilistic statistical models that excel in generating human-like language and recognizing patterns but often lack deep, explicit reasoning.
While GPT-4 can produce impressive text, it’s still not very effective at reasoning in a truly logical way. Its foundation, transformers, allows it to excel in pattern recognition, but the models struggle with reasoning because, at their core, they rely on statistical probabilities rather than true symbolic logic. This is where neurosymbolic methods and knowledge graphs come in.
## Symbolic Calculations and the Early Vision of AI
If we take a step back to the 1950s, the vision for artificial intelligence was very different. Early AI research was all about symbolic reasoning — where computers could perform logical calculations to derive new knowledge from a given set of rules and facts. Languages like **Lisp** emerged to support this vision, enabling programs to represent data and code as interchangeable symbols. Lisp was designed to be homoiconic, meaning it treated code as manipulatable data, making it capable of self-modification — a huge leap towards AI systems that could, in theory, understand and modify their own operations.
## Lisp: The Earlier AI-Language
**Lisp**, short for “LISt Processor,” was developed by John McCarthy in 1958, and it became the cornerstone of early AI research. Lisp’s power lay in its flexibility and its use of symbolic expressions, which allowed developers to create programs that could manipulate symbols in ways that were very close to human reasoning. One of the most groundbreaking features of Lisp was its ability to treat code as data, known as homoiconicity, which meant that Lisp programs could introspect and transform themselves dynamically. This ability to adapt and modify its own structure gave Lisp an edge in tasks that required a form of self-awareness, which was key in the early days of AI when researchers were exploring what it meant for machines to “think.”
Lisp was not just a programming language—it represented the vision for artificial intelligence, where machines could evolve their understanding and rewrite their own programming. This idea formed the conceptual basis for many of the self-modifying and adaptive algorithms that are still explored today in AI research. Despite its decline in mainstream programming, Lisp’s influence can still be seen in the concepts used in modern machine learning and symbolic AI approaches.
## Prolog: Formal Logic and Deductive Reasoning
In the 1970s, **Prolog** was developed—a language focused on formal logic and deductive reasoning. Unlike Lisp, which is based on lambda calculus, Prolog operates on formal logic rules, allowing it to perform deductive reasoning and solve logical puzzles. This made Prolog an ideal candidate for expert systems that needed to follow a sequence of logical steps, such as medical diagnostics or strategic planning.
Prolog, like Lisp, allowed symbols to be represented, understood, and used in calculations, creating another homoiconic language that allows reasoning. Prolog’s strength lies in its rule-based structure, which is well-suited for tasks that require logical inference and backtracking. These features made it a powerful tool for expert systems and AI research in the 1970s and 1980s.
The language is declarative in nature, meaning that you define the problem, and Prolog figures out **how** to solve it. By using formal logic and setting constraints, Prolog systems can derive conclusions from known facts, making it highly effective in fields requiring explicit logical frameworks, such as legal reasoning, diagnostics, and natural language understanding. These symbolic approaches were later overshadowed during the AI winter — but the ideas never really disappeared. They just evolved.
## Solvers and Their Role in Complementing LLMs
One of the most powerful features of **Prolog** and similar logic-based systems is their use of **solvers**. Solvers are mechanisms that can take a set of rules and constraints and automatically find solutions that satisfy these conditions. This capability is incredibly useful when combined with LLMs, which excel at generating human-like language but need help with logical consistency and structured reasoning.
For instance, imagine a scenario where an LLM needs to answer a question involving multiple logical steps or a complex query that requires deducing facts from various pieces of information. In this case, a **solver** can derive valid conclusions based on a given set of logical rules, providing structured answers that the LLM can then articulate in natural language. This allows the LLM to retrieve information and ensure the logical integrity of its responses, leading to much more robust answers.
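As a toy illustration of that division of labor, the sketch below runs a single hand-written rule to a fixpoint over a small fact base. A real system would delegate this to an actual Prolog or Datalog engine, and the facts here are invented for the example.

```python
# Tiny fact base and one hand-written rule, run to a fixpoint.
facts = {("parent", "ann", "bob"), ("parent", "bob", "cid")}

def grandparent_rule(facts):
    """grandparent(A, C) :- parent(A, B), parent(B, C)."""
    return {("grandparent", a, c)
            for (r1, a, b) in facts if r1 == "parent"
            for (r2, b2, c) in facts if r2 == "parent" and b2 == b}

while True:
    derived = grandparent_rule(facts) - facts
    if not derived:
        break
    facts |= derived

# A structured conclusion an LLM could now verbalize in natural language.
assert ("grandparent", "ann", "cid") in facts
```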
Solvers are also ideal for handling **constraint satisfaction problems** — situations where multiple conditions must be met simultaneously. In practical applications, this could include scheduling tasks, generating optimal recommendations, or even diagnosing issues where a set of symptoms must match possible diagnoses. Prolog’s solver capabilities and LLM’s natural language processing power can make these systems highly effective at providing intelligent, rule-compliant responses that traditional LLMs would struggle to produce alone.
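For a flavor of constraint satisfaction, here is a deliberately brute-force sketch in plain Python; the tasks, slots, and constraints are made up for the example, and a production solver would search far more cleverly.

```python
from itertools import product

# Assign three tasks to time slots such that no two tasks share a slot
# and "review" comes after "draft" (all names invented for the example).
tasks = ["draft", "review", "publish"]
slots = [1, 2, 3]

def satisfies(assignment):
    all_distinct = len(set(assignment.values())) == len(assignment)
    draft_first = assignment["draft"] < assignment["review"]
    return all_distinct and draft_first

solutions = [dict(zip(tasks, combo))
             for combo in product(slots, repeat=len(tasks))
             if satisfies(dict(zip(tasks, combo)))]

print(solutions[0])  # e.g. {'draft': 1, 'review': 2, 'publish': 3}
```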
By integrating **neurosymbolic methods** that utilize solvers, we can provide LLMs with a form of deductive reasoning that is missing from pure deep-learning approaches. This combination has the potential to significantly improve the quality of outputs for use-cases that require explicit, structured problem-solving, from legal queries to scientific research and beyond. Solvers give LLMs the backbone they need to not just generate answers but to do so in a way that respects logical rigor and complex constraints.
## Graph of Rules for Enhanced Reasoning
Another powerful concept that complements LLMs is using a **graph of rules**. A graph of rules is essentially a structured collection of logical rules that interconnect in a network-like structure, defining how various entities and their relationships interact. This structured network allows for complex reasoning and information retrieval, as well as the ability to model intricate relationships between different pieces of knowledge.
In a **graph of rules**, each node represents a rule, and the edges define relationships between those rules — such as dependencies or causal links. This structure can be used to enhance LLM capabilities by providing them with a formal set of rules and relationships to follow, which improves logical consistency and reasoning depth. When an LLM encounters a problem or a question that requires multiple logical steps, it can traverse this graph of rules to generate an answer that is not only linguistically fluent but also logically robust.
For example, in a healthcare application, a graph of rules might include nodes for medical symptoms, possible diagnoses, and recommended treatments. When an LLM receives a query regarding a patient’s symptoms, it can use the graph to traverse from symptoms to potential diagnoses and then to treatment options, ensuring that the response is coherent and medically sound. The graph of rules guides reasoning, enabling LLMs to handle complex, multi-step questions that involve chains of reasoning, rather than merely generating surface-level responses.
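A minimal sketch of that traversal idea follows, with the rule graph as a plain adjacency map; every medical association below is placeholder data, not clinical knowledge.

```python
# Rule graph as an adjacency map: symptoms -> diagnoses -> treatments.
# All medical content here is placeholder data, not clinical guidance.
rule_graph = {
    "fever": ["flu", "infection"],
    "cough": ["flu"],
    "flu": ["rest", "fluids"],
    "infection": ["antibiotics"],
}

def traverse(start_nodes, depth=2):
    """Follow rule edges breadth-first for a fixed number of steps."""
    frontier, reached = set(start_nodes), set()
    for _ in range(depth):
        frontier = {nxt for node in frontier
                    for nxt in rule_graph.get(node, [])}
        reached |= frontier
    return reached

print(traverse({"fever", "cough"}))  # diagnoses plus candidate treatments
```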
Graphs of rules also enable **modular reasoning**, where different sets of rules can be activated based on the context or the type of question being asked. This modularity is crucial for creating adaptive AI systems that can apply specific sets of logical frameworks to distinct problem domains, thereby greatly enhancing their versatility. The combination of **neural fluency** with **rule-based structure** gives LLMs the ability to conduct more advanced reasoning, ultimately making them more reliable and effective in domains where accuracy and logical consistency are critical.
By implementing a graph of rules, LLMs are empowered to perform **deductive reasoning** alongside their generative capabilities, creating responses that are not only compelling but also logically aligned with the structured knowledge available in the system. This further enhances their potential applications in fields such as law, engineering, finance, and scientific research — domains where logical consistency is as important as linguistic coherence.
## Enhancing LLMs with Symbolic Reasoning
Now, with LLMs like GPT-4 being mainstream, there is an emerging need to add real reasoning capabilities to them. This is where **neurosymbolic approaches** shine. Instead of pitting neural networks against symbolic reasoning, these methods combine the best of both worlds. The neural aspect provides language fluency and recognition of complex patterns, while the symbolic side offers real reasoning power through formal logic and rule-based frameworks.
**Personal Knowledge Graphs (PKGs)** come into play here as well. Knowledge graphs are data structures that encode entities and their relationships — they’re essentially semantic networks that allow for structured information retrieval. When integrated with neurosymbolic approaches, LLMs can use these graphs to answer questions in a far more contextual and precise way. By retrieving relevant information from a knowledge graph, they can ground their responses in well-defined relationships, thus improving both the relevance and the logical consistency of their answers.
Imagine combining an LLM with a **graph of rules** that allows it to reason through the relationships encoded in a personal knowledge graph. This could involve using **deductive databases** as a sophisticated way to represent and reason over symbolic data — essentially constructing a powerful hybrid system that uses LLM capabilities for language fluency and rule-based logic for structured problem-solving.
## My Research on Deductive Databases and Knowledge Graphs
I recently did some research on modeling **knowledge graphs using deductive databases**, such as DataLog — which can be thought of as a limited, data-oriented version of Prolog. What I’ve found is that it’s possible to use formal logic to model knowledge graphs, ontologies, and complex relationships elegantly as rules in a deductive system. Unlike classical RDF or traditional ontology-based models, which sometimes struggle with complex or evolving relationships, a deductive approach is more flexible and can easily support dynamic rules and reasoning.
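To show the flavor of a Datalog-style fixpoint without any external engine, here is a small Python sketch; the `subclass_of`/`ancestor` predicates and the tiny ontology are invented for illustration.

```python
# Base facts (the EDB) and one recursive rule, evaluated to a fixpoint:
#   ancestor(X, Y) :- subclass_of(X, Y).
#   ancestor(X, Z) :- subclass_of(X, Y), ancestor(Y, Z).
db = {("subclass_of", "city", "settlement"),
      ("subclass_of", "settlement", "geo_entity")}

def step(db):
    out = {("ancestor", x, y) for (p, x, y) in db if p == "subclass_of"}
    out |= {("ancestor", x, z)
            for (p1, x, y) in db if p1 == "subclass_of"
            for (p2, y2, z) in db if p2 == "ancestor" and y2 == y}
    return db | out

while (nxt := step(db)) != db:
    db = nxt

assert ("ancestor", "city", "geo_entity") in db  # derived, not stored
```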
**Prolog** and similar logic-driven frameworks can complement LLMs by handling the parts of reasoning where explicit rule-following is required. LLMs can benefit from these rule-based systems for tasks like entity recognition, logical inferences, and constructing or traversing knowledge graphs. We can even create a **graph of rules** that governs how relationships are formed or how logical deductions can be performed.
The future is really about creating an AI that is capable of both deep contextual understanding (using the powerful generative capacity of LLMs) and true reasoning (through symbolic systems and knowledge graphs). With the neurosymbolic approach, these AIs could be equipped not just to generate information but to explain their reasoning, form logical conclusions, and even improve their own understanding over time — getting us a step closer to true artificial general intelligence.
## Why It Matters for LLM Employment
Using **neurosymbolic RAG (retrieval-augmented generation)** in conjunction with personal knowledge graphs could revolutionize how LLMs work in real-world applications. Imagine an LLM that understands not just language but also the relationships between different concepts — one that can navigate, reason, and explain complex knowledge domains by actively engaging with a personalized set of facts and rules.
This could lead to practical applications in areas like healthcare, finance, legal reasoning, or even personal productivity — where LLMs can help users solve complex problems logically, providing relevant information and well-justified reasoning paths. The combination of **neural fluency** with **symbolic accuracy and deductive power** is precisely the bridge we need to move beyond purely predictive AI to truly intelligent systems.
Let's explore these ideas further if you’re as fascinated by this as I am. Feel free to reach out, follow my YouTube channel, or check out some articles I’ll link below. And if you’re working on anything in this field, I’d love to collaborate!
Until next time, folks. Stay curious, and keep pushing the boundaries of AI!
-
## Introduction: Personal Knowledge Graphs and Linked Data
We will explore the world of personal knowledge graphs and discuss how they can be used to model complex information structures. Personal knowledge graphs aren’t just abstract collections of nodes and edges—they encode meaningful relationships, contextualizing data in ways that enrich our understanding of it. While the core structure might be a directed graph, we layer semantic meaning on top, enabling nuanced connections between data points.
The origin of knowledge graphs is deeply tied to concepts from linked data and the semantic web, ideas that emerged to better link scattered pieces of information across the web. This approach created an infrastructure where data islands could connect — facilitating everything from more insightful AI to improved personal data management.
In this article, we will explore how these ideas have evolved into tools for modeling AI’s semantic memory and look at how knowledge graphs can serve as a flexible foundation for encoding rich data contexts. We’ll specifically discuss three major paradigms: RDF (Resource Description Framework), property graphs, and a third way of modeling entities as graphs of graphs. Let’s get started.
## Intro to RDF
The Resource Description Framework (RDF) has been one of the fundamental standards for linked data and knowledge graphs. RDF allows data to be modeled as triples: subject, predicate, and object. Essentially, you can think of it as a structured way to describe relationships: “X has a Y called Z.” For instance, “Berlin has a population of 3.5 million.” This modeling approach is quite flexible because RDF uses unique identifiers — usually URIs — to point to data entities, making linking straightforward and coherent.
RDFS, or RDF Schema, extends RDF to provide a basic vocabulary to structure the data even more. This lets us describe not only individual nodes but also relationships among types of data entities, like defining a class hierarchy or setting properties. For example, you could say that “Berlin” is an instance of a “City” and that cities are types of “Geographical Entities.” This kind of organization helps establish semantic meaning within the graph.
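To make the triple model concrete, here is a minimal in-memory sketch in Python; real RDF stores use URIs and a proper library, and the facts below are simplified versions of the examples above.

```python
# A set of (subject, predicate, object) triples, echoing the examples above.
triples = {
    ("Berlin", "population", "3.5 million"),
    ("Berlin", "instance_of", "City"),
    ("City", "subclass_of", "GeographicalEntity"),
}

def query(s=None, p=None, o=None):
    """Match triples against an (s, p, o) pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(query(s="Berlin"))        # everything asserted about Berlin
print(query(p="subclass_of"))   # the class hierarchy
```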
## RDF and Advanced Topics
## Lists and Sets in RDF
RDF also provides tools to model more complex data structures such as lists and sets, enabling the grouping of nodes. This extension makes it easier to model more natural, human-like knowledge, for example, describing attributes of an entity that may have multiple values. By adding RDF Schema and OWL (Web Ontology Language), you gain even more expressive power — being able to define logical rules or even derive new relationships from existing data.
## Graph of Graphs
A significant feature of RDF is the ability to form complex nested structures, often referred to as graphs of graphs. This allows you to create “named graphs,” essentially subgraphs that can be independently referenced. For example, you could create a named graph for a particular dataset describing Berlin and another for a different geographical area. Then, you could connect them, allowing for more modular and reusable knowledge modeling.
## Property Graphs
While RDF provides a robust framework, it’s not always the easiest to work with due to its heavy reliance on linking everything explicitly. This is where property graphs come into play. Property graphs are less focused on linking everything through triples and allow more expressive properties directly within nodes and edges.
For example, instead of using triples to represent each detail, a property graph might let you store all properties about an entity (e.g., “Berlin”) directly in a single node. This makes property graphs more intuitive for many developers and engineers because they more closely resemble object-oriented structures: you have entities (nodes) that possess attributes (properties) and are connected to other entities through relationships (edges).
The significant benefit here is a condensed representation, which speeds up traversal and queries in some scenarios. However, this also introduces a trade-off: while property graphs are more straightforward to query and maintain, they lack some complex relationship modeling features RDF offers, particularly when connecting properties to each other.
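For contrast with the triple-based sketch earlier, here is the same kind of data in a property-graph shape, with properties stored directly on nodes and edges; the structure is an illustration, not any particular database's API.

```python
# Properties live directly on nodes and edges instead of separate triples.
nodes = {
    "berlin": {"label": "City", "population": 3_500_000, "country": "Germany"},
    "germany": {"label": "Country"},
}
edges = [
    {"from": "berlin", "to": "germany", "type": "located_in", "since": 1871},
]

# One lookup returns the whole entity; no triple joins required.
print(nodes["berlin"]["population"])
```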
## Graph of Graphs and Subgraphs for Entity Modeling
A third approach — which takes elements from both RDF and property graphs — involves modeling entities using subgraphs or nested graphs. In this model, each entity can be represented as a graph. This allows for a detailed and flexible description of attributes without exploding every detail into individual triples or lumping them all together into properties.
For instance, consider a person entity with a complex employment history. Instead of representing every employment detail in one node (as in a property graph), or as several linked nodes (as in RDF), you can treat the employment history as a subgraph. This subgraph could then contain nodes for different jobs, each linked with specific properties and connections. This approach keeps the complexity where it belongs and provides better flexibility when new attributes or entities need to be added.
## Hypergraphs and Metagraphs
When discussing more advanced forms of graphs, we encounter hypergraphs and metagraphs. These take the idea of relationships to a new level. A hypergraph allows an edge to connect more than two nodes, which is extremely useful when modeling scenarios where relationships aren’t just pairwise. For example, a “Project” could connect multiple “People,” “Resources,” and “Outcomes,” all in a single edge. This way, hypergraphs help in reducing the complexity of modeling high-order relationships.
Metagraphs, on the other hand, enable nodes and edges to themselves be represented as graphs. This is an extremely powerful feature when we consider the needs of artificial intelligence, as it allows for the modeling of relationships between relationships, an essential aspect for any system that needs to capture not just facts, but their interdependencies and contexts.
## Balancing Structure and Properties
One of the recurring challenges when modeling knowledge is finding the balance between structure and properties. With RDF, you get high flexibility and standardization, but complexity can quickly escalate as you decompose everything into triples. Property graphs simplify the representation by using attributes but lose out on the depth of connection modeling. Meanwhile, the graph-of-graphs approach and hypergraphs offer advanced modeling capabilities at the cost of increased computational complexity.
So, how do you decide which model to use? It comes down to your use case. RDF and nested graphs are strong contenders if you need deep linkage and are working with highly variable data. For more straightforward, engineer-friendly modeling, property graphs shine. And when dealing with very complex multi-way relationships or meta-level knowledge, hypergraphs and metagraphs provide the necessary tools.
The key takeaway is that no single approach is perfect. Instead, it's all about the modeling goals: how do you want to query the graph, what relationships are meaningful, and how much complexity are you willing to manage?
## Conclusion
Modeling AI semantic memory using knowledge graphs is a challenging but rewarding process. The different approaches — RDF, property graphs, and advanced graph modeling techniques like nested graphs and hypergraphs — each offer unique strengths and weaknesses. Whether you are building a personal knowledge graph or scaling up to AI that integrates multiple streams of linked data, it’s essential to understand the trade-offs each approach brings.
In the end, the choice of representation comes down to the nature of your data and your specific needs for querying and maintaining semantic relationships. The world of knowledge graphs is vast, with many tools and frameworks to explore. Stay connected and keep experimenting to find the balance that works for your projects.
-
Let's talk about temporal semantics and **temporal, time-aware knowledge graphs**. We have different memory models for artificial intelligence agents. We all try to mimic, somehow, how the brain works, or at least how the brain's declarative memory works. We have the split between **episodic memory** and **semantic memory**. And we also have a lot of theories, right?
## Declarative Memory of the Human Brain
How is semantic memory formed? We know that the brain stores semantic memory in a way quite close to our concept of personal knowledge graphs: connected entities that form relations with each other. So far, so good. We also have several theories of how episodic memory and our experiences get transformed into semantic memory:
- hippocampus indexing and retrieval
- semanticization of episodic memories
- episodic-semantic shift theory
They all give a different perspective on how different parts of declarative memory cooperate.
We know that episodic memories get semanticized over time: you end up with semantic knowledge without the notion of time, while the episodic memory itself has probably just decayed.
But, you know, it’s still an open question:
> do we want to mimic an AI agent’s memory as a human brain memory, or do we want to create something different?
It's an open question to which we have no good answer. And if you go into neuroscience theory and check how episodic and semantic memory interact, you will still find a lot of competing theories, yeah?
Some say the hippocampus keeps indexes into the memories. Others say episodic memory gets semanticized. Others say a separate process digests episodes and experiences into semantics. But all of them agree that these are operationally two separate kinds of memory, and even two separate regions of the brain, and that semantic memory is, let's say, better protected.
So it's harder to forget semantic facts than episodes. And what I have been thinking about for a long time is exactly this: semantic memory.
## Temporal Semantics
Semantic memory is memory about facts, but somehow the time information gets mixed into the semantics. I have already described a lot of this, including how we can combine time with knowledge graphs and how people do it in practice.
There are multiple ways we could persist such information, but we all hit the same wall: time, and the semantics of time, are highly complex concepts.
## Time in a Semantic Context Is Not a Timestamp
What I mean is this: when you state a fact like "I was there at 15:40 on Monday," it is already ambiguous, because we don't know which Monday. You would need the exact date, but experiences are usually not recorded that way.
You do not record your memories like that unless you keep a journal. Usually you have no direct time references; you just say that you were there and there was some event, and so on.
Somehow we form a chain of events that connect to each other, and maybe, if we are lucky, that chain gets connected to some period of time. This means we cannot easily represent temporally aware information as just a timestamp or a validity interval.
To be sure, validity intervals for knowledge graphs (a simple quintuple with start and end dates) are a big topic, and they can solve many time-related cases. The idea is appealingly simple: you record the start and end dates and you are done. But it does not cover facts with relative time, or time information embedded in the fact itself; it handles many use cases but struggles with facts in an indirect temporal context. I like the simplicity of this idea, but the problem is that in most cases we simply don't have these timestamps. We don't know when a piece of information starts or stops being valid, and the model fits poorly with many events in our lives, especially processes, ongoing activities, and recurring events.
I'm thinking more in terms of the time of semantics, where the time model is a **hybrid clock** or some **global clock** that gives a partial ordering of events. That means you have a chain of experiences and a chain of facts, each with its own temporal context.
We could deduce time from this chain of events, but that is a big topic for research. What I actually want is not a separation into episodic and semantic memory; it's something in between.
## Blockchain of connected events and facts
I call it temporal-aware semantics, or time-aware knowledge graphs, where we encode a semantic fact together with its time component. I doubt that time should be a simple timestamp or the region between two timestamps. For me, it is more a chain of facts with a partial order, forming a blockchain-like database or a partially ordered acyclic graph of temporally connected facts. We could have some notion of time that is understandable to the agent: a model that allows us to order events, focus on what the agent knows, and build chains of events out of that time knowledge.
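One way to picture such a chain of partially ordered facts is the following Python sketch, where each fact records its predecessors and a logical counter stands in for a wall clock; this is a thought experiment, not a finished memory model.

```python
from dataclasses import dataclass, field
from itertools import count

_clock = count(1)  # a logical counter standing in for a wall clock

@dataclass
class Fact:
    statement: str
    after: tuple = ()  # facts this one is known to follow
    tick: int = field(default_factory=lambda: next(_clock))

moved = Fact("moved to Berlin")
met = Fact("met Anna at a conference", after=(moved,))
trip = Fact("took a trip with Anna", after=(met,))

def happens_before(a, b):
    """True if fact a precedes fact b along the recorded chain."""
    return a in b.after or any(happens_before(a, f) for f in b.after)

assert happens_before(moved, trip)  # ordered without any dates recorded
```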
## Time anchors
We may have particular anchor times in the chain that allow us to arrange more concrete times for the rest of the events. But it's still an open topic for research. Temporal semantics splits into a couple of domains. One domain is how to add time to knowledge graphs; we already have many different solutions, and I described them in my previous articles.
Another domain is the agent's memory and how the memory of an artificial intelligence treats time. This one is much more complex, because here we cannot operate with simple timestamps. We need a representation of time that is understandable both by the model and by the agent that will work with it. And that is a far bigger topic for research.
-
Fiduciary duty is the obligation to act in the client’s best interests at all times, prioritizing their needs above the advisor’s own, ensuring honesty, transparency, and avoiding conflicts of interest in all recommendations and actions.
This is something all advisors in the BFAN take very seriously; after all, we are legally required to do so. For the average advisor this is a fairly easy box to check. All you essentially have to do is have someone take a 5-minute risk assessment, fill out an investment policy statement, and then throw them in the proverbial 60/40 portfolio. You have thousands of investment options to choose from, and you can reasonably explain how your client is theoretically insulated from any move in the markets. From the traditional financial advisor's perspective, you could justify nearly anything by putting a client into this type of portfolio. All your bases were pretty much covered: return profile, regulatory, compliance, investment options, etc. It was just too easy. It became the household standard, and now a meme.
As almost every real bitcoiner knows, the 60/40 portfolio is moving into psyop territory, and many financial advisors get clowned on for defending this relic on bitcoin twitter. I’m going to specifically poke fun at the ‘40’ part of this portfolio.
The ‘40’ represents fixed income, defined as…
> An investment type that provides regular, set interest payments, such as bonds or treasury securities, and returns the principal at maturity. It’s generally considered a lower-risk asset class, used to generate stable income and preserve capital.
Historically, this part of the portfolio was meant to weather the volatility in the equity markets and represent the “safe” investments. Typically, some sort of bond.
First and foremost, the fixed income section is most commonly constructed with U.S. Debt. There are a couple main reasons for this. Most financial professionals believe the same fairy tale that U.S. Debt is “risk free” (lol). U.S. debt is also one of the largest and most liquid assets in the market which comes with a lot of benefits.
There are many brilliant bitcoiners in finance and economics who have sounded the alarm on the U.S. debt ticking time bomb. I highly recommend readers explore the work of Greg Foss, Lawrence Lepard, Lyn Alden, and Saifedean Ammous. My very high-level recap of their analysis:
- A bond is a contract in which Party A (the borrower) agrees to repay Party B (the lender) their principal plus interest over time.
- The U.S. government issues bonds (Treasury securities) to finance its operations after tax revenues have been exhausted.
- These are traditionally viewed as "risk-free" due to the government's historical reliability in repaying its debts and the strength of the U.S. economy.
- U.S. bonds are seen as safe because the government has control over the dollar (the world reserve asset) and, until recently (the last twenty-odd years), enjoyed broad confidence that it would always honor its debts.
- This perception has contributed to high global demand for U.S. debt, but that demand is quickly deteriorating.
- The current debt situation raises concerns about sustainability.
- The U.S. has substantial obligations, and without sufficient productivity growth, increasing debt may lead to a cycle where borrowing to cover interest leads to more debt.
- This could result in more reliance on money creation (printing), which can drive inflation and further debt burdens.
In the words of Lyn Alden: "Nothing stops this train."
Those obligations are what make up the roughly 40% fixed income allocation in most portfolios. So essentially you are giving money to one of the worst capital allocators in the world (the U.S. government) and getting paid back with printed money.
As someone who takes their fiduciary responsibility seriously and understands the debt situation we just reviewed, I think it's borderline negligent to put someone into a classic 60% (equities) / 40% (fixed income) portfolio without serious scrutiny of the client's financial situation and the options available to them. I certainly have my qualms with equities at times, but overall they are more palatable than the fixed income portion of the portfolio. I don't like it either, but the money is broken, and the unit of account for nearly every equity or fixed income instrument (USD) is fraudulent. It's a papier-mâché façade that is quite literally propped up by the money printer.
To be as charitable as I can for a moment: it wasn't always this way. The U.S. dollar used to be sound money, we used to run government surpluses instead of mathematically certain deficits, the U.S. federal government didn't always have a money-printing addiction, and, pre-bitcoin, the 60/40 portfolio used to be a quality portfolio management strategy. Those times are gone.
### Now the fun part. How does bitcoin fix this?
Bitcoin fixes this indirectly. Investment criteria change with risk tolerance, age, goals, and so on, and a client may still have a need for "fixed income" in the most literal sense: low-risk yield. Now you may be thinking that yield is a bad word in bitcoin land. You're not wrong, so stay with me. Perpetual-motion-machine crypto yield is fake and is where many crypto scams originate. However, that doesn't mean yield in the classic finance sense does not exist in bitcoin; it very literally does. Fortunately for us bitcoiners, there are many other smart, driven, and enterprising bitcoiners who understand this problem and are doing something to address it. These individuals are pioneering new possibilities in bitcoin and finance, specifically when it comes to fixed income.
Here are some new developments –
Private Credit Funds – The Build Asset Management Secured Income Fund I is a private credit fund created by Build Asset Management. This fund primarily invests in bitcoin-backed, collateralized business loans originated by Unchained, with a secured structure involving a multi-signature, over-collateralized setup for risk management. Unchained originates loans and sells them to Build, which pools them into the fund, enabling investors to share in the interest income.
Dynamics
- Loan Terms: Unchained issues loans at interest rates around 14%, secured with a 2/3 multi-signature vault backed by a 40% loan-to-value (LTV) ratio; see the quick arithmetic sketch after this list.
- Fund Mechanics: Build buys these loans from Unchained, thus providing liquidity to Unchained for further loan originations, while Build manages interest payments to investors in the fund.
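For a sense of what those terms imply, here is a quick arithmetic sketch; the loan amount is an arbitrary example, not a term of the actual fund.

```python
# Illustrative arithmetic for a bitcoin-collateralized loan at 40% LTV and
# 14% interest (example loan size; not actual fund terms).
loan_usd = 100_000
ltv = 0.40
rate = 0.14

collateral_usd = loan_usd / ltv    # bitcoin collateral required: $250,000
annual_interest = loan_usd * rate  # interest paid per year: $14,000

print(f"collateral: ${collateral_usd:,.0f}, annual interest: ${annual_interest:,.0f}")
```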
Pros
- The fund offers a unique way to earn income via bitcoin-collateralized debt, with protection against rehypothecation and strong security measures, making it attractive for investors seeking exposure to fixed income with bitcoin.
Cons
- The fund is only available to accredited investors, which is a regulatory standard for private credit funds like this.
Corporate Bonds – MicroStrategy Inc. (MSTR), a business intelligence company, has leveraged its corporate structure to issue bonds specifically to acquire bitcoin as a reserve asset. This approach allows investors to indirectly gain exposure to bitcoin’s potential upside while receiving interest payments on their bond investments. Some other publicly traded companies have also adopted this strategy, but for the sake of this article we will focus on MSTR as they are the biggest and most vocal issuer.
Dynamics
- Issuance: MicroStrategy has issued senior secured notes in multiple offerings, with terms allowing the company to use the proceeds to purchase bitcoin.
- Interest Rates: The bonds typically carry high-yield interest rates, averaging around 6-8% APR, depending on the specific issuance and market conditions at the time of issuance.
- Maturity: The bonds have varying maturities, with most structured for multi-year terms, offering investors medium-term exposure to bitcoin’s value trajectory through MicroStrategy’s holdings.
Pros
- Indirect Bitcoin exposure with income provides a unique opportunity for investors seeking income from bitcoin-backed debt.
- Bonds issued by MicroStrategy offer relatively high interest rates, appealing for fixed-income investors attracted to the higher risk/reward scenarios.
Cons
- There are credit risks tied to MicroStrategy’s financial health and bitcoin’s performance. A significant drop in bitcoin prices could strain the company’s ability to service debt, increasing credit risk.
- Availability: These bonds are primarily accessible to institutional investors and accredited investors, limiting availability for retail investors.
Interest Payable in Bitcoin – River has introduced an innovative product, bitcoin Interest on Cash, allowing clients to earn interest on their U.S. dollar deposits, with the interest paid in bitcoin.
Dynamics
- Interest Payment: Clients earn an annual interest rate of 3.8% on their cash deposits. The accrued interest is converted to Bitcoin daily and paid out monthly, enabling clients to accumulate Bitcoin over time.
- Security and Accessibility: Cash deposits are insured up to $250,000 through River’s banking partner, Lead Bank, a member of the FDIC. All Bitcoin holdings are maintained in full reserve custody, ensuring that client assets are not lent or leveraged.
Pros
- There are no hidden fees or minimum balance requirements, and clients can withdraw their cash at any time.
- The 3.8% interest rate provides a predictable income stream, akin to traditional fixed-income investments.
Cons
- While the interest rate is fixed, the value of the Bitcoin received as interest can fluctuate, introducing potential variability in the investment’s overall return.
- Interest rate payments are on the lower side
Admittedly, this is a very small list, however, these types of investments are growing more numerous and meaningful. The reality is the existing options aren’t numerous enough to service every client that has a need for fixed income exposure. I challenge advisors to explore innovative options for fixed income exposure outside of sovereign debt, as that is most certainly a road to nowhere. It is my wholehearted belief and call to action that we need more options to help clients across the risk and capital allocation spectrum access a sound money standard.
Additional Resources
- [River: The future of saving is here: Earn 3.8% on cash. Paid in Bitcoin.](https://blog.river.com/bitcoin-interest-on-cash/)
- [Onramp: Bitcoin, The Emergent Asset Class](https://onrampbitcoin.docsend.com/view/j4wje7kgvw357tt9)
- [MicroStrategy: MicroStrategy Announces Pricing of Offering of Convertible Senior Notes](https://www.microstrategy.com/press/microstrategy-announces-pricing-of-offering-of-convertible-senior-notes_09-18-2024)
---
*Bitcoin and Fixed Income was Written By Wyatt O’Rourke. If you enjoyed this article then support his writing, directly, by donating to his lightning wallet: ultrahusky3@primal.net*
-
![](https://blossom.primal.net/e306357a7e53c4e40458cf6fa5625917dc8deaa4d1012823caa5a0eefb39e53c.jpg)
It was another historic week for both bitcoin and the Ten31 portfolio, as the world’s oldest, largest, most battle-tested cryptocurrency climbed to new all-time highs each day to close out the week just shy of the $100,000 mark. Along the way, bitcoin continued to accumulate institutional and regulatory wins, including the much-anticipated approval and launch of spot bitcoin ETF options and the appointment of several additional pro-bitcoin Presidential cabinet officials. The timing for this momentum was poetic, as this week marked the second anniversary of the pico-bottom of the 2022 bear market, a level that bitcoin has now hurdled to the tune of more than 6x despite the litany of bitcoin obituaries published at the time. The entirety of 2024 and especially the past month have further cemented our view that bitcoin is rapidly gaining a sense of legitimacy among institutions, fiduciaries, and governments, and we remain optimistic that this trend is set to accelerate even more into 2025.
Several Ten31 portfolio companies made exciting announcements this week that should serve to further entrench bitcoin’s institutional adoption. AnchorWatch, a first of its kind bitcoin insurance provider offering 1:1 coverage with its innovative use of bitcoin’s native properties, announced it has been designated a Lloyd’s of London Coverholder, giving the company unique, blue-chip status as it begins to write bitcoin insurance policies of up to $100 million per policy starting next month. Meanwhile, Battery Finance Founder and CEO Andrew Hohns appeared on CNBC to delve into the launch of Battery’s pioneering private credit strategy which fuses bitcoin and conventional tangible assets in a dual-collateralized structure that offers a compelling risk/return profile to both lenders and borrowers. Both companies are clearing a path for substantially greater bitcoin adoption in massive, untapped pools of capital, and Ten31 is proud to have served as lead investor for AnchorWatch’s Seed round and as exclusive capital partner for Battery.
As the world’s largest investor focused entirely on bitcoin, Ten31 has deployed nearly $150 million across two funds into more than 30 of the most promising and innovative companies in the ecosystem like AnchorWatch and Battery, and we expect 2025 to be the best year yet for both bitcoin and our portfolio. Ten31 will hold a first close for its third fund at the end of this year, and investors in that close will benefit from attractive incentives and a strong initial portfolio. Visit ten31.vc/funds to learn more and get in touch to discuss participating.
**Portfolio Company Spotlight**
[Primal](http://primal.net/) is a first of its kind application for the Nostr protocol that combines a client, caching service, analytics tools, and more to address several unmet needs in the nascent Nostr ecosystem. Through the combination of its sleek client application and its caching service (built on a completely open source stack), Primal seeks to offer an end-user experience as smooth and easy as that of legacy social media platforms like Twitter and eventually many other applications, unlocking the vast potential of Nostr for the next billion people. Primal also offers an integrated wallet (powered by [Strike BLACK](https://x.com/Strike/status/1755335823023558819)) that substantially reduces onboarding and UX frictions for both Nostr and the lightning network while highlighting bitcoin’s unique power as internet-native, open-source money.
### **Selected Portfolio News**
AnchorWatch announced it has achieved Lloyd's of London Coverholder status, allowing the company to provide unique 1:1 bitcoin insurance offerings starting in [December](https://x.com/AnchorWatch/status/1858622945763131577).
Battery Finance Founder and CEO Andrew Hohns appeared on CNBC to delve into the company’s unique bitcoin-backed [private credit strategy](https://www.youtube.com/watch?v=26bOawTzT5U).
Primal launched version 2.0, a landmark update that adds a feed marketplace, robust advanced search capabilities, premium-tier offerings, and many [more new features](https://primal.net/e/note1kaeajwh275kdwd6s0c2ksvj9f83t0k7usf9qj8fha2ac7m456juqpac43m).
Debifi launched its new iOS app for Apple users seeking non-custodial [bitcoin-collateralized loans](https://x.com/debificom/status/1858897785044500642).
### **Media**
Strike Founder and CEO Jack Mallers [joined Bloomberg TV](https://www.youtube.com/watch?v=i4z-2v_0H1k) to discuss the strong volumes the company has seen over the past year and the potential for a US bitcoin strategic reserve.
Primal Founder and CEO Miljan Braticevic [joined](https://www.youtube.com/watch?v=kqR_IQfKic8) The Bitcoin Podcast to discuss the rollout of Primal 2.0 and the future of Nostr.
Ten31 Managing Partner Marty Bent [appeared on](https://www.youtube.com/watch?v=_WwZDEtVxOE&t=1556s) BlazeTV to discuss recent changes in the regulatory environment for bitcoin.
Zaprite published a customer [testimonial video](https://x.com/ZapriteApp/status/1859357150809587928) highlighting the popularity of its offerings across the bitcoin ecosystem.
### **Market Updates**
Continuing its recent momentum, bitcoin reached another new all-time high this week, clocking in just below $100,000 on Friday. Bitcoin has now reached a market cap of [nearly $2 trillion](https://companiesmarketcap.com/assets-by-market-cap/), putting it within 3% of the market caps of Amazon and Google.
After receiving SEC and CFTC approval over the past month, long-awaited options on spot bitcoin ETFs were fully [approved](https://finance.yahoo.com/news/bitcoin-etf-options-set-hit-082230483.html) and launched this week. These options should help further expand bitcoin’s institutional [liquidity profile](https://x.com/kellyjgreer/status/1824168136637288912), with potentially significant [implications](https://x.com/dgt10011/status/1837278352823972147) for price action over time.
The new derivatives showed strong performance out of the gate, with volumes on options for BlackRock’s IBIT reaching [nearly $2 billion](https://www.coindesk.com/markets/2024/11/20/bitcoin-etf-options-introduction-marks-milestone-despite-position-limits/) on just the first day of trading despite [surprisingly tight](https://x.com/dgt10011/status/1858729192105414837) position limits for the vehicles.
Meanwhile, the underlying spot bitcoin ETF complex had yet another banner week, pulling in [$3.4 billion](https://farside.co.uk/btc/) in net inflows.
New reports [suggested](https://archive.is/LMr4o) President-elect Donald Trump’s social media company is in advanced talks to acquire crypto trading platform Bakkt, potentially the latest indication of the incoming administration’s stance toward the broader “crypto” ecosystem.
On the macro front, US housing starts [declined M/M again](https://finance.yahoo.com/news/us-single-family-housing-starts-134759234.html) in October on persistently high mortgage rates and weather impacts. The metric remains well below pre-COVID levels.
Pockets of the US commercial real estate market remain challenged, as the CEO of large Florida developer Related indicated that [developers need further rate cuts](https://www.bloomberg.com/news/articles/2024-11-19/miami-developer-says-real-estate-market-needs-rate-cuts-badly) “badly” to maintain project viability.
US Manufacturing PMI [increased slightly](https://www.fxstreet.com/news/sp-global-pmis-set-to-signal-us-economy-continued-to-expand-in-november-202411220900) M/M, but has now been in contraction territory (<50) for well over two years.
The latest iteration of the University of Michigan’s popular consumer sentiment survey [ticked up](https://archive.is/fY5j6) following this month’s election results, though so did five-year inflation expectations, which now sit comfortably north of 3%.
### **Regulatory Update**
After weeks of speculation, the incoming Trump administration appointed hedge fund manager [Scott Bessent](https://www.cnbc.com/amp/2024/11/22/donald-trump-chooses-hedge-fund-executive-scott-bessent-for-treasury-secretary.html) to head up the US Treasury. Like many of Trump’s cabinet selections so far, Bessent has been a [public advocate](https://x.com/EleanorTerrett/status/1856204133901963512) for bitcoin.
Trump also [appointed](https://www.axios.com/2024/11/19/trump-commerce-secretary-howard-lutnick) Cantor Fitzgerald CEO Howard Lutnick – another outspoken [bitcoin bull](https://www.coindesk.com/policy/2024/09/04/tradfi-companies-want-to-transact-in-bitcoin-says-cantor-fitzgerald-ceo/) – as Secretary of the Commerce Department.
Meanwhile, the Trump team is reportedly considering creating a new [“crypto czar”](https://archive.is/jPQHF) role to sit within the administration. While it’s unclear at this point what that role would entail, one report indicated that the administration’s broader “crypto council” is expected to move forward with plans for a [strategic bitcoin reserve](https://archive.is/ZtiOk).
Various government lawyers suggested this week that the Trump administration is likely to be [less aggressive](https://archive.is/Uggnn) in seeking adversarial enforcement actions against bitcoin and “crypto” in general, as regulatory bodies appear poised to shift resources and focus elsewhere.
Other updates from the regulatory apparatus were also directionally positive for bitcoin, most notably FDIC Chairman Martin Gruenberg’s confirmation that he [plans to resign](https://www.politico.com/news/2024/11/19/fdics-gruenberg-says-he-will-resign-jan-19-00190373) from his post at the end of President Biden’s term.
Many critics have alleged Gruenberg was an architect of [“Operation Chokepoint 2.0,”](https://x.com/GOPMajorityWhip/status/1858927571666096628) which has created banking headwinds for bitcoin companies over the past several years, so a change of leadership at the department is likely yet another positive for the space.
SEC Chairman Gary Gensler also officially announced he plans to resign at the start of the new administration. Gensler has been the target of much ire from the broader “crypto” space, though we expect many projects outside bitcoin may continue to struggle with questions around the [Howey Test](https://www.investopedia.com/terms/h/howey-test.asp).
Overseas, a Chinese court ruled that it is [not illegal](https://www.benzinga.com/24/11/42103633/chinese-court-affirms-cryptocurrency-ownership-as-legal-as-bitcoin-breaks-97k) for individuals to hold cryptocurrency, even though the country is still ostensibly [enforcing a ban](https://www.bbc.com/news/technology-58678907) on crypto transactions.
### **Noteworthy**
The incoming CEO of Charles Schwab – which administers over $9 trillion in client assets – [suggested](https://x.com/matthew_sigel/status/1859700668887597331) the platform is preparing to “get into” spot bitcoin offerings and that he “feels silly” for having waited this long. As this attitude becomes more common among traditional finance players, we continue to believe that the number of acquirers coming to market for bitcoin infrastructure capabilities will far outstrip the number of available high quality assets.
BlackRock’s 2025 Thematic Outlook notes a [“renewed sense of optimism”](https://www.ishares.com/us/insights/2025-thematic-outlook#rate-cuts) on bitcoin among the asset manager’s client base due to macro tailwinds and the improving regulatory environment. Elsewhere, BlackRock’s head of digital assets [indicated](https://www.youtube.com/watch?v=TE7cAw7oIeA) the firm does not view bitcoin as a “risk-on” asset.
MicroStrategy, which was a sub-$1 billion market cap company less than five years ago, briefly breached a [$100 billion equity value](https://finance.yahoo.com/news/microstrategy-breaks-top-100-u-191842879.html) this week as it continues to aggressively acquire bitcoin. The company now holds nearly 350,000 bitcoin on its balance sheet.
Notably, Allianz SE, Germany’s largest insurer, [spoke for 25%](https://bitbo.io/news/allianz-buys-microstrategy-notes/) of MicroStrategy’s latest $3 billion convertible note offering this week, suggesting [growing appetite](https://x.com/Rob1Ham/status/1860053859181199649) for bitcoin proxy exposure among more restricted pools of capital.
The [ongoing meltdown](https://www.cnbc.com/2024/11/22/synapse-bankruptcy-thousands-of-americans-see-their-savings-vanish.html) of fintech middleware provider Synapse has left tens of thousands of customers with nearly 100% deposit haircuts as hundreds of millions in funds remain missing, the latest unfortunate case study in the fragility of much of the US’s legacy banking stack.
### **Travel**
- [BitcoinMENA](https://bitcoin2024.b.tc/mena), Dec 9-10
- [Nashville BitDevs](https://www.meetup.com/bitcoinpark/events/302533726/?eventOrigin=group_upcoming_events), Dec 10
- [Austin BitDevs](https://www.meetup.com/austin-bitcoin-developers/events/303476169/?eventOrigin=group_upcoming_events), Dec 19
- [Nashville Energy and Mining Summit](https://www.meetup.com/bitcoinpark/events/304092624/?eventOrigin=group_events_list), Jan 30
-
Original: https://techreport.com/crypto-news/brazil-central-bank-ban-monero-stablecoins/
Brazil's Central Bank Will Ban Monero and Algorithmic Stablecoins in the Country
===================================================================================
Brazil proposes crypto regulations banning Monero and algorithmic stablecoins and enforcing strict compliance for exchanges.
* * *
**KEY TAKEAWAYS**
* The Central Bank of Brazil has proposed **regulations prohibiting privacy-centric cryptocurrencies** like Monero.
* The regulations **categorize exchanges into intermediaries, custodians, and brokers**, each with specific capital requirements and compliance standards.
* While the proposed rules apply to cryptocurrencies, certain digital assets like non-fungible tokens **(NFTs) are still ‘deregulated’ in Brazil**.
![Brazil's Central Bank will ban Monero and algorithmic stablecoins in the country](https://techreport.com/wp-content/uploads/2024/11/brazil-central-bank-ban-monero-stablecoins.jpg)
In a Notice of Participation announcement, the Brazilian Central Bank (BCB) outlines **regulations for virtual asset service providers (VASPs)** operating in the country.
**_In the document, the Brazilian regulator specifies that privacy-focused coins, such as Monero, must be excluded from all digital asset companies that intend to operate in Brazil._**
Let’s unpack what effect these regulations will have.
Brazil’s Crackdown on Crypto Fraud
----------------------------------
If the BCB’s current rule is approved, **exchanges dealing with coins that provide anonymity must delist these currencies** or prevent Brazilians from accessing and operating these assets.
The Central Bank argues that currencies like Monero make it difficult, or even impossible, to identify users, which creates problems in complying with international AML obligations and counter-terrorist-financing policies.
According to the Central Bank of Brazil, the bans aim to **prevent criminals from using digital assets to launder money**. In Brazil, organized criminal syndicates such as the Primeiro Comando da Capital (PCC) and Comando Vermelho have been increasingly using digital assets for money laundering and foreign remittances.
> … restriction on the supply of virtual assets that contain characteristics of fragility, insecurity or risks that favor fraud or crime, such as virtual assets designed to favor money laundering and terrorist financing practices by facilitating anonymity or difficulty identification of the holder.
>
> – [Notice of Participation](https://www.gov.br/participamaisbrasil/edital-de-participacao-social-n-109-2024-proposta-de-regulamentacao-do-)
The Central Bank has also determined that **removing algorithmic stablecoins is essential to guarantee the safety of users' funds** and to avoid events like the collapse of Terraform Labs' ecosystem, which wiped out billions of dollars of investors' money.
The Central Bank also wants to **control all digital assets traded by companies in Brazil**. According to the current proposal, the [national regulator](https://techreport.com/cryptocurrency/learning/crypto-regulations-global-view/) will have the **power to ask platforms to remove certain listed assets** if it considers that they do not meet local regulations.
However, the regulations will not include [NFTs](https://techreport.com/statistics/crypto/nft-awareness-adoption-statistics/), real-world asset (RWA) tokens, RWA tokens classified as securities, and tokenized movable or real estate assets. These assets are still ‘deregulated’ in Brazil.
Monero: What Is It and Why Is Brazil Banning It?
------------------------------------------------
Monero ($XMR) is a cryptocurrency that uses a protocol called CryptoNote. Launched in 2014, it ‘erases’ transaction data, preventing the sender and recipient addresses from being publicly known. The Monero network is based on a proof-of-work (PoW) consensus mechanism, which incentivizes miners to add blocks to the blockchain.
Like Brazil, **other nations are banning Monero** in search of regulatory compliance. Recently, Dubai's new digital asset rules prohibited activities related to anonymity-enhancing cryptocurrencies such as $XMR.
Furthermore, exchanges such as **Binance have already announced they will delist Monero** on their global platforms due to its anonymity features. Kraken did the same, removing Monero for its European users to comply with [MiCA regulations](https://techreport.com/crypto-news/eu-mica-rules-existential-threat-or-crypto-clarity/).
Data from Chainalysis shows that Brazil is the **seventh-largest Bitcoin market in the world**.
![Brazil is the 7th largest Bitcoin market in the world](https://techreport.com/wp-content/uploads/2024/11/Screenshot-2024-11-19-171029.png)
In Latin America, **Brazil is the largest market for digital assets**. Globally, it leads in the innovation of RWA tokens, with several companies already trading this type of asset.
In Closing
----------
Following other nations, Brazil’s regulatory proposals aim to combat illicit activities such as money laundering and terrorism financing.
Will the BCB’s move safeguard people’s digital assets while also stimulating growth and innovation in the crypto ecosystem? Only time will tell.
Cassio Gusson is a journalist passionate about technology, cryptocurrencies, and the nuances of human nature. With a career spanning roles as Senior Crypto Journalist at CriptoFacil and Head of News at CoinTelegraph, he offers exclusive insights on South America’s crypto landscape. A graduate in Communication from Faccamp and a post-graduate in Globalization and Culture from FESPSP, Cassio explores the intersection of governance, decentralization, and the evolution of global systems.
-
Now testing the old, reliable front end.
Stay tuned, more later.
Keeping this as a template long-form note for future debugging, as I've come across a few NIP-33 post-edit issues.
-
## Chef's notes
This simple, easy, no-bake dessert will surely be the hit at your next family gathering. You can keep it a secret, or share with the crowd that this is a healthy alternative to a normal pie. I think everyone will be amazed at how good it really is.
## Details
- ⏲️ Prep time: 30 minutes
- 🍳 Cook time: 0 minutes (no bake)
- 🍽️ Servings: 8
## Ingredients
- 1/3 cup of Heavy Cream- 0g sugar, 5.5g carbohydrates
- 3/4 cup of Half and Half- 6g sugar, 3g carbohydrates
- 4oz Sugar Free Cool Whip (1/2 small container) - 0g sugar, 37.5g carbohydrates
- 1.5oz box (small box) of Sugar Free Instant Chocolate Pudding- 0g sugar, 32g carbohydrates
- 1 Pecan Pie Crust- 24g sugar, 72g carbohydrates
## Directions
1. The whole pie has 30g of sugar and 150g of carbohydrates, so if you cut it into 8 equal slices, each slice comes to 3.75g of sugar and 18.75g of carbohydrates. If you skip the crust, that drops to 0.75g of sugar and 9.75g of carbohydrates per slice (a quick script below re-checks this math). Depending on your goals, you could also use only heavy whipping cream and no half-and-half to further reduce the sugar.
2. Mix all the wet ingredients and the instant pudding until thoroughly combined and the color is consistent. The heavy whipping cream makes the mixture thicken the more you mix it, so I'd recommend an electric mixer. Once you're satisfied with the consistency, fold in the Cool Whip until the mixture is a consistent "chocolate" color throughout. Then spoon the mixture into the pie crust, smooth the top to your liking, and refrigerate for one hour before serving.
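For anyone who wants to re-check the macro math above, or adapt it to a different slice count or an ingredient swap, here's a minimal sketch (figures taken straight from the ingredient list):

```python
# Per-slice sugar/carb math for the pie above.
# Each entry: ingredient -> (grams of sugar, grams of carbohydrates)
ingredients = {
    "heavy cream": (0.0, 5.5),
    "half and half": (6.0, 3.0),
    "sugar-free cool whip": (0.0, 37.5),
    "sf instant chocolate pudding": (0.0, 32.0),
    "pecan pie crust": (24.0, 72.0),
}

slices = 8
total_sugar = sum(s for s, _ in ingredients.values())   # 30.0 g
total_carbs = sum(c for _, c in ingredients.values())   # 150.0 g

print(f"Per slice: {total_sugar / slices:.2f} g sugar, {total_carbs / slices:.2f} g carbs")

# Skipping the crust removes its sugar and carbs entirely.
crust_sugar, crust_carbs = ingredients["pecan pie crust"]
print(f"No crust:  {(total_sugar - crust_sugar) / slices:.2f} g sugar, "
      f"{(total_carbs - crust_carbs) / slices:.2f} g carbs")
```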
-
Let me tell you a beautiful story. Last night, during the speakers' dinner at Monerotopia, the waitress was collecting tiny tips in Mexican pesos. I asked her, "Do you really want to earn tips seriously?" I then showed her how to set up a Cake Wallet, and she started collecting tips in Monero, reaching 0.9 XMR. Of course, she wanted to cash out to fiat immediately, but it solved a real problem for her: making more money. That amount was something she would never have earned in a single workday. We kept talking, and I promised to give her Zoom workshops. What can I say? I love people, and that's why I'm a natural orange-piller.
-
Weekends are the perfect time to unwind, explore, or spend time doing what we love. How would you spend your ideal weekend? Would it be all about relaxation, or would you be out and about?
For me, an ideal weekend would start with a slow Saturday morning, a good book, and coffee. I'd spend the afternoon exploring local trails and hunting down snacks, and then, hopefully, wind down with a slow Sunday night.
originally posted at https://stacker.news/items/760492
-
## You have no idea
I regularly read comments from people, on here, wondering how it's possible to marry -- or even simply be friends! -- with someone who doesn't agree with you on politics. I see this sentiment expressed quite often, usually in the context of Bitcoin, or whatever _pig is currently being chased through the village_, as they say around here.
![pig racing](https://i.pinimg.com/564x/a2/d5/8a/a2d58ac249846854345f727e41984e6c.jpg)
It seems rather sensible, but I don't think it's as hard, as people make it out to be. Further, I think it's a dangerous precondition to set, for your interpersonal relationships, because the political field is constantly in flux. If you determine who you will love, by their opinions, do you stop loving them if their opinions change, or if the opinions they have become irrelevant and a new set of opinions are needed -- and their new ones don't match your new ones? We could see this happen to relationships en masse, during the Covid Era, and I think it happens every day, in a slow grind toward the disintegration of interpersonal discourse.
I suspect many people do stop loving, at that point, as they never really loved the other person for their own sake, they loved the other person because they thought the other person was exactly like they are. But no two people are alike, and the longer you are in a relationship with someone else, the more the initial giddiness wears off and the trials and tribulations add up, the more you notice how very different you actually are. This is the point, where best friends and romantic couples say, _We just grew apart._
But you were always apart. You were always two different people. You just didn't notice, until now.
![separation](https://i.pinimg.com/564x/c3/05/a6/c305a6a95e809b0356ecb651c72f78b9.jpg)
I've also always been surprised at how many same-party relationships disintegrate because of some disagreement over some particular detail of some particular topic that they generally agree on. To me, it seems like an irrelevant side-topic, but _they can't stand to be with this person_... and they stomp off. So, I tend to think that it's less that opinions need to align with each other, and rather that opinions need to align with the level of interpersonal tolerance each person can bring into the relationship.
## I was raised by relaxed revolutionaries
Maybe I see things this way because my parents come from two diverging political, cultural, national, and ethnic backgrounds, and are prone to disagreeing about a lot of "important" (to people outside their marriage) things, but still have one of the healthiest, most-fruitful, and most long-running marriages of anyone I know, from that generation. My parents, you see, aren't united by their opinions. They're united by their relationship, which is something _outside_ of opinions. Beyond opinions. Relationships are what turn two different people into one, cohesive unit, so that they slowly grow together. Eventually, even their faces merge, and their biological clocks tick to the same rhythm. They eventually become one entity that contains differing opinions about the same topics.
It's like magic, but it's the result of a mindset, not a worldview.
Or, as I like to quip:
> The best way to stay married, is to not get divorced.
![elderly couple](https://i.pinimg.com/564x/f7/0f/d2/f70fd2963312236c60cac61ec2324ce8.jpg)
My parents simply determined early on, that they would stay together, and whenever they would find that they disagreed on something that _didn't directly pertain to their day-to-day existence with each other_ they would just agree-to-disagree about that, or roll their eyes, and move on. You do you. Live and let live.
My parents have some of the most strongly held personal opinions of any people I've ever met, but they're also incredibly tolerant and can get along with nearly anyone, so their friends are a confusing hodgepodge of _people we liked and found interesting enough to keep around_. Which makes their house parties really fun, and highly unusual, in this day and age of mutual-damnation across the aisle.
![party time](https://i.pinimg.com/564x/4e/aa/2b/4eaa2bb199aa7e5f36a0dbc2f0e4f217.jpg)
The things that did affect them, directly, like which school the children should attend or which country they should live in, etc. were things they'd sit down and discuss, and somehow one opinion would emerge, and they'd again... move on.
And that's how my husband and I also live our lives, and it's been working surprisingly well. No topics are off-limits to discussion (so long as you don't drone on for too long), nobody has to give up deeply held beliefs, or stop agitating for the political decisions they prefer.
You see, we didn't like that the other always had the same opinion. We liked that the other always held their opinions strongly. That they were passionate about their opinions. That they were willing to voice their opinions; sacrifice to promote their opinions. And that they didn't let anyone browbeat or cow them, for their opinions, not even their best friends or their spouse. But that they were open to listening to the other side, and trying to wrap their mind around the possibility that they _might just be wrong about something_.
![listening](https://i.pinimg.com/564x/69/ec/1b/69ec1b66fc58802de4d04bfb5f0f8dc6.jpg)
We married each other because we knew: this person really cares, this person has thought this through, and they're in it, to win it. What "it" is, is mostly irrelevant, so long as it doesn't entail torturing small animals in the basement, or raising the children on a diet of Mountain Dew and porn, or something.
Live and let live. At least, it's never boring. At least, there's always something to ~~argue~~ talk about. At least, we never think... we've just grown apart.
-
Last week, an investigation by Reuters revealed that Chinese researchers have been using open-source AI tools to build nefarious-sounding models that may have some military application.
The reporting purports that adversaries in the Chinese Communist Party and its military wing are taking advantage of the liberal software licensing of American innovations in the AI space, which could someday have capabilities to presumably harm the United States.
> In a June paper reviewed by[ Reuters](https://www.reuters.com/technology/artificial-intelligence/chinese-researchers-develop-ai-model-military-use-back-metas-llama-2024-11-01/), six Chinese researchers from three institutions, including two under the People’s Liberation Army’s (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta’s Llama as a base for what it calls “ChatBIT”.
>
> The researchers used an earlier Llama 13B large language model (LLM) from Meta, incorporating their own parameters to construct a military-focused AI tool to gather and process intelligence, and offer accurate and reliable information for operational decision-making.
While I’m doubtful that today’s existing chatbot-like tools will be the ultimate battlefield for a new geopolitical war (cue up the computer-simulated war from the *Star Trek* episode “[A Taste of Armageddon](https://en.wikipedia.org/wiki/A_Taste_of_Armageddon)“), this recent exposé requires us to revisit why large language models are released as open-source code in the first place.
Added to that, should it matter that an adversary is having a poke around and may ultimately use them for some purpose we may not like, whether that be China, Russia, North Korea, or Iran?
The number of open-source AI LLMs continues to grow each day, with projects like Vicuna, LLaMA, BLOOM, Falcon, and Mistral available for download. In fact, there are over [one million open-source LLMs](https://huggingface.co/models) available as of this writing. With some decent hardware, any global citizen can download these codebases and run them on their own computer.
With regard to this specific story, we could assume it to be a selective leak by a competitor of Meta which created the LLaMA model, intended to harm its reputation among those with cybersecurity and national security credentials. There are [potentially](https://bigthink.com/business/the-trillion-dollar-ai-race-to-create-digital-god/) trillions of dollars on the line.
Or it could be the revelation of something more sinister happening in the military-sponsored labs of Chinese hackers who have already been caught attacking American[ infrastructure](https://www.nbcnews.com/tech/security/chinese-hackers-cisa-cyber-5-years-us-infrastructure-attack-rcna137706),[ data](https://www.cnn.com/2024/10/05/politics/chinese-hackers-us-telecoms/index.html), and yes, [your credit history](https://thespectator.com/topic/chinese-communist-party-credit-history-equifax/)?
**As consumer advocates who believe in the necessity of liberal democracies to safeguard our liberties against authoritarianism, we should absolutely remain skeptical when it comes to the communist regime in Beijing. We’ve written as much[ many times](https://consumerchoicecenter.org/made-in-china-sold-in-china/).**
At the same time, however, we should not subrogate our own critical thinking and principles because it suits a convenient narrative.
Consumers of all stripes deserve technological freedom, and innovators should be free to provide that to us. And open-source software has provided the very foundations for all of this.
## **Open-source matters**
When we discuss open-source software and code, what we’re really talking about is the ability for people other than the creators to use it.
The various [licensing schemes](https://opensource.org/licenses) – ranging from GNU General Public License (GPL) to the MIT License and various public domain classifications – determine whether other people can use the code, edit it to their liking, and run it on their machine. Some licenses even allow you to monetize the modifications you’ve made.
While many different types of software will be fully licensed and made proprietary, restricting or even penalizing those who attempt to use it on their own, many developers have created software intended to be released to the public. This allows multiple contributors to add to the codebase and to make changes to improve it for public benefit.
Open-source software matters because anyone, anywhere can download and run the code on their own. They can also modify it, edit it, and tailor it to their specific need. The code is intended to be shared and built upon not because of some altruistic belief, but rather to make it accessible for everyone and create a broad base. This is how we create standards for technologies that provide the ground floor for further tinkering to deliver value to consumers.
Open-source libraries create the building blocks that decrease the hassle and cost of building a new web platform, smartphone, or even a computer language. They distribute common code that can be built upon, assuring interoperability and setting standards for all of our devices and technologies to talk to each other.
I am myself a proponent of open-source software. The server I run in my home has dozens of dockerized applications sourced directly from open-source contributors on GitHub and DockerHub. When there are versions or adaptations that I don’t like, I can pick and choose which I prefer. I can even make comments or add edits if I’ve found a better way for them to run.
Whether you know it or not, many of you rely on open-source foundations every day: your MacBook's operating system is built on an open-source Unix core, and countless web tools you use have active repositories forked or modified by open-source contributors online. This code is auditable by everyone and can be scrutinized or reviewed by whoever wants to (even AI bots).
This is the same software that runs your airlines, powers the farms that deliver your food, and supports the entire global monetary system. The code of the first decentralized cryptocurrency Bitcoin is also [open-source](https://github.com/bitcoin), which has allowed [thousands](https://bitcoinmagazine.com/business/bitcoin-is-money-for-enemies) of copycat protocols that have revolutionized how we view money.
You know what else is open-source and available for everyone to use, modify, and build upon?
PHP, Mozilla Firefox, LibreOffice, MySQL, Python, Git, Docker, and WordPress. All protocols and languages that power the web. Friend or foe alike, anyone can download these pieces of software and run them how they see fit.
Open-source code is speech, and it is knowledge.
We build upon it to make information and technology accessible. Attempts to curb open-source, therefore, amount to restricting speech and knowledge.
## **Open-source is for your friends, and enemies**
In the context of Artificial Intelligence, many different developers and companies have chosen to take their large language models and make them available via an open-source license.
At this very moment, you can click on over to[ Hugging Face](https://huggingface.co/), download an AI model, and build a chatbot or scripting machine suited to your needs. All for free (as long as you have the power and bandwidth).
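To make "download an AI model" concrete, here is a minimal sketch using the `transformers` library; the model name is just one tiny, ungated example, and any larger openly licensed LLM works through the same interface, given the hardware:

```python
# A minimal sketch: pull an open-weights model from the Hugging Face Hub
# and generate text locally. distilgpt2 is used here only because it is
# small and ungated; larger open models (Mistral, Llama, Falcon, ...)
# go through the exact same interface. Assumes `pip install transformers torch`.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

result = generator("Open-source code matters because", max_new_tokens=40)
print(result[0]["generated_text"])
```

That is the entire barrier to entry: friend or foe, anyone with the download and the hardware runs the same weights.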
Thousands of companies in the AI sector are doing this at this very moment, discovering ways of building on top of open-source models to develop new apps, tools, and services to offer to companies and individuals. It’s how many different applications are coming to life and thousands more jobs are being created.
We know this can be useful to friends, but what about enemies?
As the AI wars heat up between liberal democracies like the US, the UK, and (sluggishly) the European Union, we know that authoritarian adversaries like the CCP and Russia are building their own applications.
The fear that China will use open-source US models to create some kind of military application is a clear and present danger for many political and national security researchers, as well as politicians.
A bipartisan group of US House lawmakers want to put [export controls](https://www.reuters.com/technology/us-lawmakers-unveil-bill-make-it-easier-restrict-exports-ai-models-2024-05-10/) on AI models, as well as block foreign access to US cloud servers that may be hosting AI software.
If this seems familiar, we should also remember that the US government once classified cryptography and encryption as “munitions” that could not be exported to other countries (see[ The Crypto Wars](https://en.wikipedia.org/wiki/Export_of_cryptography_from_the_United_States)). Many of the arguments we hear today were invoked by some of the same people as back then.
Now, encryption protocols are the gold standard for many different banking and web services, messaging, and all kinds of electronic communication. We expect our friends to use it, and our foes as well. Because code is knowledge and speech, we know how to evaluate it and respond if we need to.
Regardless of who uses open-source AI, this is how we should view it today. These are merely tools that people will use for good or ill. It’s up to governments to determine how best to stop illiberal or nefarious uses that harm us, rather than try to outlaw or restrict building of free and open software in the first place.
## **Limiting open-source threatens our own advancement**
If we set out to restrict and limit our ability to create and share open-source code, no matter who uses it, that would be tantamount to imposing censorship. There must be another way.
If there is a “[Hundred Year Marathon](https://www.amazon.com/Hundred-Year-Marathon-Strategy-Replace-Superpower/dp/1250081343)” between the United States and liberal democracies on one side and autocracies like the Chinese Communist Party on the other, this is not something that will be won or lost based on software licenses. We need as much competition as possible.
The Chinese military has been building up its capabilities with [trillions of dollars’](https://www.economist.com/china/2024/11/04/in-some-areas-of-military-strength-china-has-surpassed-america) worth of investments that span far beyond AI chatbots and skip logic protocols.
The [theft](https://www.technologyreview.com/2023/06/20/1075088/chinese-amazon-seller-counterfeit-lawsuit/) of intellectual property at factories in Shenzhen, or in US courts by [third-party litigation funding](https://nationalinterest.org/blog/techland/litigation-finance-exposes-our-judicial-system-foreign-exploitation-210207) coming from China, is very real and will have serious economic consequences. It may even change the balance of power if our economies and countries turn to war footing.
But these are separate issues from the ability of free people to create and share open-source code, which we can all benefit from. In fact, if we want to preserve our way of life and continue to add to global productivity and growth, we must defend open-source.
If liberal democracies want to compete with our global adversaries, it will not be done by reducing the freedoms of citizens in our own countries.
*Originally published on the website of the [Consumer Choice Center](https://consumerchoicecenter.org/open-source-is-for-everyone-even-your-adversaries/).*
-
> ### Third-party API collection:
---
Disclaimer:
The OpenAI API keys recommended here are provided by third-party resellers, so we take no responsibility for their validity or security. You assume the risk of purchasing and using these API keys yourself.
| Provider | Notes | Proxy address | Link |
| --- | --- | --- | --- |
| AiHubMix | Uses OpenAI's enterprise endpoint; all models site-wide are priced at 86% of the official rate (GPT-4 included) | https://aihubmix.com/v1 | [Website](https://aihubmix.com?aff=mPS7) |
| OpenAI-HK | OpenAI's official API billing is based on the token length of each request and response. Each model is priced differently, per 1,000 tokens consumed; 1,000 tokens is roughly 750 English words (about 400 Chinese characters) | https://api.openai-hk.com/ | [Website](https://openai-hk.com/?i=45878) |
| CloseAI | The largest commercial-grade OpenAI proxy platform in China and the first professional OpenAI relay service there. It targets enterprise needs, providing a stable, high-quality relay of the official OpenAI API, and serves over a hundred companies and several research institutions | https://api.openai-proxy.org | [Website](https://www.closeai-asia.com/) |
| OpenAI-SB | Requires Telegram to obtain an API key | https://api.openai-sb.com | [Website](https://www.openai-sb.com/) |
`Continuously updated...`
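These relays generally work by swapping the client's base URL for the provider's proxy address while keeping OpenAI's wire format. A minimal sketch with the official `openai` Python package (v1.x), using AiHubMix's address from the table; the key placeholder is whatever the reseller issues you, and the model name assumes the reseller mirrors OpenAI's catalog:

```python
from openai import OpenAI

# Same OpenAI wire format, different base URL: the reseller's relay
# forwards requests to OpenAI and bills against its own key pool.
client = OpenAI(
    base_url="https://aihubmix.com/v1",    # proxy address from the table above
    api_key="sk-your-reseller-key-here",   # issued by the reseller, not by OpenAI
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumes the reseller exposes OpenAI's model names
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```

The caveat from the disclaimer above applies here too: the relay sees your traffic, so don't send it anything sensitive.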
---
### Promotion:
If you can't reach OpenAI, you can buy a VPN from `低调云` (Didiao Cloud).
Website: https://didiaocloud.xyz
Invite code: `w9AjVJit`
Prices start as low as ¥1.
-
> I believe that five years from now, access to artificial intelligence will be akin to what access to the Internet represents today. It will be the greatest differentiator between the haves and have nots. Unequal access to artificial intelligence will exacerbate societal inequalities and limit opportunities for those without access to it.
Back in April, the AI Index Steering Committee at the Institute for Human-Centered AI from Stanford University released [The AI Index 2024 Annual Report](https://aiindex.stanford.edu/report/).
Out of the extensive report (502 pages), I chose to focus on the chapter dedicated to Public Opinion. People involved with AI live in a bubble. We all know and understand AI and therefore assume that everyone else does. But, is that really the case once you step out of your regular circles in Seattle or Silicon Valley and hit Main Street?
# Two-thirds of global respondents have a good understanding of what AI is
The exact number is 67%. My gut feeling is that this number is way too high to be realistic. At the same time, 63% of respondents are aware of ChatGPT, so maybe people are conflating AI with ChatGPT?
If so, there is so much more that they won't see coming.
This number is important because you need to read every other question and response in the survey through the lens of a respondent who believes they have a good understanding of what AI is.
# A majority are nervous about AI products and services
52% of global respondents are nervous about products and services that use AI. Leading the pack are Australians at 69%, while the least worried are the Japanese at 23%. The U.S. is near the top at 63%.
Japan is truly an outlier, with most countries falling between 40% and 60%.
# Personal data is the clear victim
Exactly half of the respondents believe that AI companies will protect their personal data; the other half believe they won't.
# Expected benefits
Again, a majority of people (57%) think that AI will change how they do their jobs. As for the impact on their lives, the top hitters are getting things done faster (54%) and more entertainment options (51%).
The last one is a head scratcher for me. Are people looking forward to AI-generated movies?
![image](https://i.nostr.build/GUh5M4GXumaJVGZA.jpg)
# Concerns
Remember the 57% who think AI will change how they do their jobs? Well, it looks like 37% of them expect to lose theirs. Whether or not that comes to pass, that is a very high number of people with a direct incentive to oppose AI.
Other key concerns include:
- Misuse for nefarious purposes: 49%
- Violation of citizens' privacy: 45%
# Conclusion
This is the first time I've come across this report, and I will make sure to follow future annual reports to see how these trends evolve.
**Overall, people are worried about AI. There are many things that could go wrong and people perceive that both jobs and privacy are on the line.**
---
Full citation: *Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark, “The AI Index 2024 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024.*
The AI Index 2024 Annual Report by Stanford University is licensed under [Attribution-NoDerivatives 4.0 International](https://creativecommons.org/licenses/by-nd/4.0/?ref=chooser-v1).
-
[![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148821549-YAKIHONNES3.jpeg)](https://stock.adobe.com/stock-photo/id/1010191703)
**Hello everyone on Nostr**, including my **watchers** and **followers** from DeviantArt and other art platforms.
Since the beginning of 2024, I have been using AI to generate fanart of anime girls, with exclusive content for those who especially enjoy my work.
I post all of my work on DeviantArt and have been gradually, steadily building a follower base there. Everything keeps growing at its own pace; personally, I regard it as one of my online business portfolios.
**On September 16, 2024**, a follower sent me a private message saying they really admired my work and wanted to buy some pieces, but as NFTs, offering a very high price per piece. After that, the buyer and I continued the conversation over email.
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148088676-YAKIHONNES3.PNG)
### Here is a short summary of the negotiation
(From this point on I will call the buyer a "scammer", because the cards have been laid on the table: he is a fraudster.)
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148348755-YAKIHONNES3.jpg)
- The first scammer picked the pieces he wanted to buy and offered a very high price, but only through a specific NFT marketplace website of his choosing, running on ERC20. I went to look at the site and it felt off: sellers had to sign up with an email address before they could link a wallet such as MetaMask, and once linked, the wallet could not be changed. At the time I was using a wallet not linked to my hardware wallet; I tried switching wallets back and forth and it was impossible, and even after logging out, the same wallet address remained. That was one oddity already. The site's minting fee was **0.15 - 0.2 ETH** ... converted into Thai baht, that is outrageously expensive.
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148387032-YAKIHONNES3.jpg)
- The first scammer tried to coax and sweet-talk me: come on, he would snap up my work the moment it was minted, just tell him when it was done and he would buy it right away; once it sold I would get the gas money back plus a profit, so there was really nothing to lose. Luckily, I had no spare funds to buy ETH at the time, so I negotiated with him as follows:
1. I proposed sending him a low-resolution version of my work first, in exchange for him transferring the ETH for the minting fee; once I had the ETH, I would upscale the work and email it to him. A show of mutual good faith ... he refused.
2. I proposed that he buy from my Buy Me a Coffee shop instead, paying in USD ... he refused.
3. I proposed selling via a PPV Lightning invoice, which I can issue as a creator on Creatr ... he refused.
4. I told him he would have to wait for my payday, then. He said OK.
The following week, a second scammer contacted me with a nearly identical approach but a different website, offering a much higher price than the first. The second site was even worse: you had to sign up with an email and could not link MetaMask at all. After signing up you were handed an empty wallet, into which I would first have to transfer ETH to cover the **0.2 ETH** minting fee.
I told the second scammer he would have to wait, because I was in the middle of a deal with the first buyer and was waiting for money to buy ETH as working capital. He asked me to send him the first website to look at. Not long after, he warned me that the first site was a scam where you cannot withdraw your money, and sent screenshots of conversations with victims of the first site who could not withdraw. Not content with that, he also bluffed about OpenSea, claiming customers there could sell work but could not withdraw the proceeds.
**"You can't withdraw from OpenSea" is exactly what set my alarm bells ringing**, because on OpenSea users connect their wallets directly to the marketplace. Trades happen and money flows straight in and out of each person's own wallet; OpenSea only collects a platform fee and never holds customers' funds. On top of that, gas fees this year are much lower than in the 2020 bull-run cycle, currently around 0.0001 ETH (still pricier than BTC, mind you).
I took the matter to พี่บิท, but an admin talked with me instead. The admin said no one in the community had ever raised this issue before; I was the first to ask. But their view matched my hypothesis: it was probably a scam. At the same time, I asked the Thai NFT community page about it and got clear confirmation that it was a scam, and that more than a few people had been duped. Once I knew what I was dealing with, I decided to wage a little psychological war on both scammers, to see whether they really were fraudsters.
So on September 30, I wound up both scammers by minting the very works they had offered to buy, on OpenSea,
and sent them a message saying:
minted it for you, but I really didn't have enough money, sorry, so I minted it on OpenSea instead. Poor family, this was as far as I could get. Hurry up and buy it; plenty of people are eyeing my work. And I'm not even charging a royalty fee, so you can resell it without sharing any profit with me.
That was all it took for the psychological warfare to begin, but he was cornered and had to swallow his own words.
The best exchange:
Him: Hey, I waited for this. I told my teammates we'd have the piece on Monday, September 30. They saw your work and it really is beautiful, so they loaded up a full 9.3 ETH just for this (+ a screenshot showing the balance).
Me: Oh really ... then show me the wallet address with that transaction in it.
Him: 2 ETH is $5,000, you know.
Me: So what? Show me the wallet address holding that 9.3 ETH. Didn't you say the money was ready and waiting? Show me when it was deposited ... just the address, mind you. Don't get cheeky and send me a seed phrase.
Him: (sends the same 9.3 ETH screenshot again)
Me: A screenshot means nothing; it can be photoshopped in seconds. Show me the transaction hash. Didn't you say you had 9.3 ETH set aside and were shaking with eagerness to buy my work? Either show me the wallet address, or lend me 0.15 ETH for the minting first, then buy the piece for 2 ETH and I'll pay the 0.15 ETH back. Are you buying or not?
Him: What do you want the address for?
Me: We're done here. This is tiresome. I'm not selling to you anymore.
Him: 2 ETH = 5,000 USD, you know.
Me: So what.
I wrote this article to warn all my friends, in case any of you are building an online digital art business and happen to get as "lucky" as I did.
-----------
### Why I am confident this is a scam, and what the scammers stand to gain
[![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148837871-YAKIHONNES3.jpeg)](https://stock.adobe.com/stock-photo/id/1010196295)
First, consider OpenSea, the NFT marketplace with the highest trading volume. It does not hold buyers' or sellers' money; funds move directly between the buyer's and seller's wallets, and the site collects only a fee, one that is much cheaper than it was in 2020. So why would anyone go list their work on another NFT site whose fees are a hundred times higher ... what would be the point?
I believe scammers fleece artists by exploiting their greed and inexperience. The moment an artist transfers ETH into a wallet on such a site, or pays the minting fee, that money lands straight in the scammer's pocket, and the tricks are guaranteed to continue: withdrawals fail, purchases fail, more money must be sent to "unlock the smart contract", and so on. These lowlifes play on human greed, dangling absurdly high offers as bait ... and you can hardly blame the targets, because in the NFT world some images with no artistic merit whatsoever have sold for 100 - 150 ETH. An artist trying to establish themselves might well feel that a buyer at 2 - 4 ETH per piece is plenty (honestly, it is shockingly more than plenty).
In the world of BTC, you don't have to trust anyone: money can be sent and the books settled with no trust required.
In the world of ETH, **"code is law"**: the smart contract is already written, so go read it; it is not that hard to understand. Choosing instead to trust another person's promises makes no sense.
When I told this story to the art community, the reactions were mixed, some positive and some not. Some people insisted firmly that this sort of thing could never touch them, because they are determined to keep their art entirely out of the digital currency world. I respect that view, but wouldn't it be better to keep our eyes open and stay abreast of the technology, especially digital currencies and blockchain? Get conned once here and you can lose everything far more easily than with fiat money.
I wanted to tell this story, and I would appreciate you sharing it with people you know so they can be on their guard.
## Note
- Both cyber-security illustrations here are my own work, made by me and sold on Adobe Stock.
- My other account, "HikariHarmony" npub1exdtszhpw3ep643p9z8pahkw8zw00xa9pesf0u4txyyfqvthwapqwh48sw, is gradually bringing my work from the outside world onto Nostr. I intend to create art there, so friends who enjoy my work won't have to go looking anywhere else.
My work:
- Anime girl fanarts : [HikariHarmony](https://linktr.ee/hikariharmonypatreon)
- [HikariHarmony on Nostr](https://shorturl.at/I8Nu4)
- General art : [KeshikiRakuen](https://linktr.ee/keshikirakuen)
- KeshikiRakuen may become my third Nostr account, if I can manage it.
-
[![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148821549-YAKIHONNES3.jpeg)](https://stock.adobe.com/stock-photo/id/1010191703)
**Hello everyone on Nostr**, and all my **watchers** and **followers** from DeviantArt, as well as those from other art platforms.
I have been creating and sharing AI-generated anime girl fanart since the beginning of 2024 and have been running member-exclusive content on Patreon.
I also publish showcases of my artwork on DeviantArt, where I've been organically building an audience over time. I consider it one of my online art businesses, and everything is growing slowly but steadily.
**On September 16**, I received a DM from someone expressing interest in purchasing my art in NFT format and offering a very high price for each piece. We later continued the conversation via email.
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148088676-YAKIHONNES3.PNG)
### Here’s a brief overview of what happened
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148348755-YAKIHONNES3.jpg)
- The first scammer selected the art they wanted to buy and offered a high price for each piece.
They provided a URL to an NFT marketplace site running on the Ethereum (ETH) mainnet or ERC20. The site appeared suspicious, requiring email sign-up and linking a MetaMask wallet. However, I couldn't change the wallet address later.
The minting gas fees were quite expensive, ranging from **0.15 to 0.2 ETH**
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148387032-YAKIHONNES3.jpg)
- The scammers tried to convince me that the high profits would easily cover the minting gas fees, so I had nothing to lose.
Luckily, I didn’t have spare funds to purchase ETH for the gas fees at the time, so I tried negotiating with them as follows:
1. I offered to send them a lower-quality version of my art via email in exchange for the minting gas fees, but they refused.
2. I offered them the option to pay in USD through Buy Me a Coffee shop here, but they refused.
3. I offered them the option to pay via a Bitcoin Lightning Network invoice, but they refused.
4. I asked them to wait until I could secure the funds, and they agreed to wait.
The following week, a second scammer approached me with a similar offer, this time at an even higher price and through a different NFT marketplace website.
This second site also required email registration, and after navigating to the dashboard, it asked for a minting fee of **0.2 ETH**. However, the site provided a wallet address for me instead of connecting a MetaMask wallet.
I told the second scammer that I was waiting to make a profit from the first sale, and they asked me to show them the first marketplace. They then warned me that the first site was a scam and even sent screenshots of victims, including one claiming that OpenSea doesn't pay sellers out.
**This raised a red flag**, and I began suspecting I might be getting scammed. On OpenSea, funds go directly to users' wallets after transactions, and OpenSea charges a much lower platform fee compared to the previous crypto bull run in 2020. Minting fees on OpenSea are also significantly cheaper, around 0.0001 ETH per transaction.
I also consulted with Thai NFT artist communities and the ex-chairman of the Thai Digital Asset Association. According to them, no one had reported similar issues, but they agreed it seemed like a scam.
After confirming my suspicions through my own research and consulting the Thai crypto community, I decided to test the scammers' intentions by doing the following:
I minted the artwork they were interested in, set the price they offered, and listed it for sale on OpenSea. I then messaged them, letting them know the art was available and ready to purchase, with no royalty fees if they wanted to resell it.
They became upset and angry, insisting I mint the art on their chosen platform, claiming they had already funded their wallet to support me. When I asked for proof of their wallet address and transactions, they couldn't provide any evidence that they had enough funds.
Here is my warning to all artists in the DeviantArt community and on other platforms:
If you find yourself in a similar situation, be aware that scammers may be targeting you.
-----------
### My Perspective: Why I Believe This Is a Scam and What the Scammers Gain
[![image](https://yakihonne.s3.ap-east-1.amazonaws.com/8947a94537bdcd2e62d0b40db57636ece30345a0f63c806b530a5f1f9bfcf626/files/1729148837871-YAKIHONNES3.jpeg)](https://stock.adobe.com/stock-photo/id/1010196295)
From my experience with BTC and crypto since 2017, here's why I believe this situation is a scam, and what the scammers aim to achieve:
First, looking at OpenSea, the largest NFT marketplace on Ethereum: it does not hold users' funds. Instead, funds from transactions go directly to users' wallets. OpenSea's platform fees are also much lower now than during the 2020 crypto bull run. This alone raises suspicion about the legitimacy of other marketplaces requiring significantly higher fees.
I believe the scammers' tactic is to lure artists into paying these exorbitant minting fees, which go directly into the scammers' wallets. They convince the artists by promising to purchase the art at a higher price, making it seem like there's no risk involved. In reality, the artist has already lost by paying the minting fee, and no purchase is ever made.
In the world of Bitcoin (BTC), the principles are "trust no one" and "trustless finality of transactions". In other words, transactions are secure and final without needing trust in a third party.
In the world of Ethereum (ETH), the philosophy is "Code is law" where everything is governed by smart contracts deployed on the blockchain. These contracts are transparent, and even basic code can be read and understood. Promises made by people don’t override what the code says.
I also discussed this issue with art communities. Some people have told me in no uncertain terms that they want nothing to do with crypto as part of their art process. I completely respect that stance.
However, I believe it's wise to keep your eyes open, have some skin in the game, and not fall into scammers’ traps. Understanding the basics of crypto and NFTs can help protect you from these kinds of schemes.
If you found this article helpful, please share it with your fellow artists.
Until next time
Take care
## Note
- Both cybersecurity images are mine: I created them, and they were approved by Adobe Stock for sale.
- I'm working hard to bring all my digital art to Nostr and build my sats business here under my other npub, "HikariHarmony": npub1exdtszhpw3ep643p9z8pahkw8zw00xa9pesf0u4txyyfqvthwapqwh48sw
Links to my full galleries:
- Anime girl fanarts : [HikariHarmony](https://linktr.ee/hikariharmonypatreon)
- [HikariHarmony on Nostr](https://shorturl.at/I8Nu4)
- General art : [KeshikiRakuen](https://linktr.ee/keshikirakuen)
-
Hey folks, today we're diving into an exciting and emerging topic: personal artificial intelligence (PAI) and its connection to sovereignty, privacy, and ethics. With the rapid advancements in AI, there's a growing interest in the development of personal AI agents that can work on behalf of the user, acting autonomously and providing tailored services. However, as with any new technology, there are several critical factors that shape the future of PAI. Today, we'll explore three key pillars: privacy and ownership, explainability, and bias.
<iframe width="560" height="315" src="https://www.youtube.com/embed/fehgwnSUcqQ?si=nPK7UOFr19BT5ifm" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
### 1. Privacy and Ownership: Foundations of Personal AI
At the heart of personal AI, much like self-sovereign identity (SSI), is the concept of ownership. For personal AI to be truly effective and valuable, users must own not only their data but also the computational power that drives these systems. This autonomy is essential for creating systems that respect the user's privacy and operate independently of large corporations.
In this context, privacy is more than just a feature—it's a fundamental right. Users should feel safe discussing sensitive topics with their AI, knowing that their data won’t be repurposed or misused by big tech companies. This level of control and data ownership ensures that users remain the sole beneficiaries of their information and computational resources, making privacy one of the core pillars of PAI.
### 2. Bias and Fairness: The Ethical Dilemma of LLMs
Most of today’s AI systems, including personal AI, rely heavily on large language models (LLMs). These models are trained on vast datasets that represent snapshots of the internet, but this introduces a critical ethical challenge: bias. The datasets used for training LLMs can be full of biases, misinformation, and viewpoints that may not align with a user’s personal values.
This leads to one of the major issues in AI ethics for personal AI—how do we ensure fairness and minimize bias in these systems? The training data that LLMs use can introduce perspectives that are not only unrepresentative but potentially harmful or unfair. As users of personal AI, we need systems that are free from such biases and can be tailored to our individual needs and ethical frameworks.
Unfortunately, training models that are truly unbiased and fair requires vast computational resources and significant investment. While large tech companies have the financial means to develop and train these models, individual users or smaller organizations typically do not. This limitation means that users often have to rely on pre-trained models, which may not fully align with their personal ethics or preferences. While fine-tuning models with personalized datasets can help, it's not a perfect solution, and bias remains a significant challenge.
### 3. Explainability: The Need for Transparency
One of the most frustrating aspects of modern AI is the lack of explainability. Many LLMs operate as "black boxes," meaning that while they provide answers or make decisions, it's often unclear how they arrived at those conclusions. For personal AI to be effective and trustworthy, it must be transparent. Users need to understand how the AI processes information, what data it relies on, and the reasoning behind its conclusions.
Explainability becomes even more critical when AI is used for complex decision-making, especially in areas that impact other people. If an AI is making recommendations, judgments, or decisions, it’s crucial for users to be able to trace the reasoning process behind those actions. Without this transparency, users may end up relying on AI systems that provide flawed or biased outcomes, potentially causing harm.
This lack of transparency is a major hurdle for personal AI development. Current LLMs, as mentioned earlier, are often opaque, making it difficult for users to trust their outputs fully. The explainability of AI systems will need to be improved significantly to ensure that personal AI can be trusted for important tasks.
### Addressing the Ethical Landscape of Personal AI
As personal AI systems evolve, they will increasingly shape the ethical landscape of AI. We’ve already touched on the three core pillars—privacy and ownership, bias and fairness, and explainability. But there's more to consider, especially when looking at the broader implications of personal AI development.
Most current AI models, particularly those from big tech companies like Facebook, Google, or OpenAI, are closed systems. This means they are aligned with the goals and ethical frameworks of those companies, which may not always serve the best interests of individual users. Open models, such as Meta's LLaMA, offer more flexibility and control, allowing users to customize and refine the AI to better meet their personal needs. However, the challenge remains in training these models without significant financial and technical resources.
There’s also the temptation to use uncensored models that aren’t aligned with the values of large corporations, as they provide more freedom and flexibility. But in reality, models that are entirely unfiltered may introduce harmful or unethical content. It’s often better to work with aligned models that have had some of the more problematic biases removed, even if this limits some aspects of the system’s freedom.
The future of personal AI will undoubtedly involve a deeper exploration of these ethical questions. As AI becomes more integrated into our daily lives, the need for privacy, fairness, and transparency will only grow. And while we may not yet be able to train personal AI models from scratch, we can continue to shape and refine these systems through curated datasets and ongoing development.
### Conclusion
In conclusion, personal AI represents an exciting new frontier, but one that must be navigated with care. Privacy, ownership, bias, and explainability are all essential pillars that will define the future of these systems. As we continue to develop personal AI, we must remain vigilant about the ethical challenges they pose, ensuring that they serve the best interests of users while remaining transparent, fair, and aligned with individual values.
If you have any thoughts or questions on this topic, feel free to reach out—I’d love to continue the conversation!
-
In the modern world of AI, managing vast amounts of data while keeping it relevant and accessible is a significant challenge, especially when dealing with large language models (LLMs) and vector databases. One approach that has gained prominence in recent years is integrating vector search with metadata, especially in retrieval-augmented generation (RAG) pipelines. Vector search and metadata enable faster and more accurate data retrieval; however, pre- and post-search filtering of the results plays a crucial role in ensuring data relevance.
<iframe width="560" height="315" src="https://www.youtube.com/embed/BkNqu51et9U?si=lne0jWxdrZPxSgd1" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
## The Vector Search and Metadata Challenge
In a typical vector search, you create embeddings from chunks of text, such as a PDF document. These embeddings allow the system to search for similar items and retrieve them based on relevance. The challenge, however, arises when you need to combine vector search results with structured metadata. For example, you may have timestamped text-based content and want to retrieve the most relevant content within a specific date range. This is where metadata becomes critical in refining search results.
Unfortunately, most vector databases treat metadata as a secondary feature, isolating it from the primary vector search process. As a result, handling queries that combine vectors and metadata can become a challenge, particularly when the search needs to account for a dynamic range of filters, such as dates or other structured data.
## LibSQL and Vector Search Metadata
LibSQL is a general-purpose, SQLite-based database that adds vector capabilities to regular data. Vectors are stored as blob columns of ordinary tables, which makes vector embeddings and metadata first-class citizens and naturally builds deep integration between these data points.
```
create table if not exists conversation (
id varchar(36) primary key not null,
startDate real,
endDate real,
summary text,
vectorSummary F32_BLOB(512)
);
```
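To populate the table, an embedding is passed through the vector() conversion function, which parses a JSON-style array string into the binary blob format. A minimal sketch, using a toy 3-dimensional table so the literal stays readable (the conversation table above would carry all 512 components; vector_extract() for reading a blob back as text is an assumption based on libSQL's documented conversion functions):
```
create table if not exists demo (
id integer primary key,
embedding F32_BLOB(3)
);

-- vector() parses the textual array into the F32 blob format
insert into demo (id, embedding) values (1, vector('[0.1, 0.2, 0.3]'));

-- vector_extract() converts the blob back into a readable string
select id, vector_extract(embedding) from demo;
```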
This solves the challenge of combining metadata with vector search and eliminates the impedance mismatch between vector data and regular structured data points living in the same storage.
As you can see, you can access the vector data and the start date in the same query:
```
select c.id, c.startDate, c.endDate, c.summary,
       vector_distance_cos(c.vectorSummary, vector(${vector})) as distance
from conversation c
where 1=1
${startDate ? `and c.startDate >= ${startDate.getTime()}` : ''}
${endDate ? `and c.endDate <= ${endDate.getTime()}` : ''}
-- the select alias can't be used in where, so the distance filter repeats the expression
${distance ? `and vector_distance_cos(c.vectorSummary, vector(${vector})) <= ${distance}` : ''}
order by distance
limit ${top};
```
**vector\_distance\_cos**, aliased as distance, gives us a primitive vector search: it does a full scan and calculates the distance for every row. We could optimize it with a CTE, limiting the search and distance calculations to a much smaller subset of the data.
This approach is computation-intensive and can fail on large amounts of data.
LibSQL offers a much more efficient vector search based on its FlashDiskANN vector index.
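Before that index can be queried, it has to exist. A minimal sketch, assuming libSQL's libsql_vector_idx() index function, of creating the idx_conversation_vectorSummary index referenced below:
```
create index if not exists idx_conversation_vectorSummary
on conversation (libsql_vector_idx(vectorSummary));
```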
```
vector_top_k('idx_conversation_vectorSummary', ${vector} , ${top}) i
```
**vector\_top\_k** is a table function that returns the top matches from the newly created vector search index. Note that only the vector can be passed as a function parameter; any other columns have to be used outside the table function. So, to combine a vector index with other columns, we need to apply some strategies.
Now we face the classic problem of integrating vector search results with metadata queries.
## Post-Filtering: A Common Approach
The most widely adopted method in these pipelines is **post-filtering**. In this approach, the system first retrieves data based on vector similarity and then applies metadata filters. For example, imagine you're conducting a vector search to retrieve conversations relevant to a specific question, but you also want to ensure those conversations occurred in the past week.
![](https://miro.medium.com/v2/resize:fit:1400/1*WFR7oJIwYflxiUivSrm-5g.png)
Post-filtering allows the system to retrieve the most relevant vector-based results and subsequently filter out any that don’t meet the metadata criteria, such as date range. This method is efficient when vector similarity is the primary factor driving the search, and metadata is only applied as a secondary filter.
```
const sqlQuery = `
select c.id, c.startDate, c.endDate, c.summary,
       vector_distance_cos(c.vectorSummary, vector(${vector})) as distance
from vector_top_k('idx_conversation_vectorSummary', ${vector}, ${top}) i
inner join conversation c on i.id = c.rowid
where 1=1
${startDate ? `and c.startDate >= ${startDate.getTime()}` : ''}
${endDate ? `and c.endDate <= ${endDate.getTime()}` : ''}
${distance ? `and vector_distance_cos(c.vectorSummary, vector(${vector})) <= ${distance}` : ''}
order by distance
limit ${top};
`;
```
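For completeness, here is a minimal sketch of how such a template-built query could be executed with the @libsql/client package (the local file URL is an assumption; interpolating values straight into SQL is fine for a sketch, but bound parameters are safer in real code):
```
import { createClient } from "@libsql/client";

const client = createClient({ url: "file:local.db" });

// `sqlQuery` is the template-built string from above; `vector` must expand
// to a quoted JSON-style array literal such as '[0.1, 0.2, ...]'.
const result = await client.execute(sqlQuery); // top-level await in an ES module
for (const row of result.rows) {
  console.log(row.id, row.distance);
}
```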
However, there are limitations. The initial vector search may return too few rows, omitting relevant data before the metadata filter is even applied. If the search window is narrow enough, this leads to incomplete results.
One working strategy is to make the top value passed to vector\_top\_k much bigger. Be careful, though: the function's default maximum number of results is around 200 rows.
## Pre-Filtering: A More Complex Approach
Pre-filtering is a more intricate approach but can be more effective in some instances. In pre-filtering, metadata is used as the primary filter before vector search takes place. This means that only data that meets the metadata criteria is passed into the vector search process, limiting the scope of the search right from the beginning.
While this approach can significantly reduce the amount of irrelevant data in the final results, it comes with its own challenges. For example, pre-filtering requires a deeper understanding of the data structure and may necessitate denormalizing the data or creating separate pre-filtered tables. This can be resource-intensive and, in some cases, impractical for dynamic metadata like date ranges.
![](https://miro.medium.com/v2/resize:fit:1400/1*TioR1ETZ2S_FxvW6rZx9nw.png)
In certain use cases, pre-filtering might outperform post-filtering. For instance, when the metadata (e.g., specific date ranges) is the most important filter, pre-filtering ensures the search is conducted only on the most relevant data.
## Pre-filtering with distance-based filtering
So we come back to an old concept: we pre-filter the data instead of using a vector index.
```
WITH FilteredDates AS (
SELECT
c.id,
c.startDate,
c.endDate,
c.summary,
c.vectorSummary
FROM
conversation c
WHERE
1=1
${startDate ? `AND c.startDate >= ${startDate.getTime()}` : ''}
${endDate ? `AND c.endDate <= ${endDate.getTime()}` : ''}
),
DistanceCalculation AS (
SELECT
fd.id,
fd.startDate,
fd.endDate,
fd.summary,
fd.vectorSummary,
vector_distance_cos(fd.vectorSummary, vector(${vector})) AS distance
FROM
FilteredDates fd
)
SELECT
dc.id,
dc.startDate,
dc.endDate,
dc.summary,
dc.distance
FROM
DistanceCalculation dc
WHERE
1=1
${distance ? `AND dc.distance <= ${distance}` : ''}
ORDER BY
dc.distance
LIMIT ${top};
```
This makes sense when the filter produces a small data set, so the distance calculation runs over far fewer rows.
![](https://miro.medium.com/v2/resize:fit:1400/1*NewbEonzeEgS7qPOZRgXwg.png)
The advantage of this approach is that you have full control over the data and get all matching results, without the omissions that are typical of large approximate index searches.
## Choosing Between Pre and Post-Filtering
Both pre-filtering and post-filtering have advantages and disadvantages. Post-filtering is simpler to implement, especially when vector similarity is the primary search factor, but it can lead to incomplete results. Pre-filtering, on the other hand, can yield more accurate results but requires more complex data handling and optimization.
In practice, many systems combine both strategies, depending on the query. For example, they might start with a broad pre-filtering based on metadata (like date ranges) and then apply a more targeted vector search with post-filtering to refine the results further.
## **Conclusion**
Vector search with metadata filtering offers a powerful approach for handling large-scale data retrieval in LLMs and RAG pipelines. Whether you choose pre-filtering or post-filtering—or a combination of both—depends on your application's specific requirements. As vector databases continue to evolve, future innovations that combine these two approaches more seamlessly will help improve data relevance and retrieval efficiency further.
-
# Nostr: a quick introduction, attempt #2
Nostr doesn't subscribe to any ideals of "free speech", as these belong to the realm of politics and assume a big, powerful government that enforces a common rule upon everybody else.
Nostr instead is much simpler, it simply says that servers are private property and establishes a generalized framework for people to connect to all these servers, creating a true free market in the process. In other words, Nostr is the public road that each market participant can use to build their own store or visit others and use their services.
(Of course, a road is never truly public; in normal cases it's run by the government. In this case it relies on the prior existence of the internet, with all its quirks and chaos plus a hand of government control, but none of that matters for this explanation.)
More concretely, Nostr is just a set of definitions of the formats of the data that can be passed between participants and their expected order: messages between _clients_ (the program that runs on a user's computer) and _relays_ (the program that runs on a publicly accessible computer, a "server", generally with a domain name associated) over a type of TCP connection (WebSocket), with cryptographic signatures. This is what is called a "protocol" in this context, and upon that simple base multiple kinds of sub-protocols can be added, like a protocol for "public-square-style microblogging", "semi-closed group chat" or, I don't know, "recipe sharing and feedback".
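As a concrete taste of those formats: in the base protocol (NIP-01) everything on the wire is a JSON array, so a client publishes events and subscribes with filters roughly like this (the values here are placeholders):
```
["EVENT", {"id": "...", "pubkey": "...", "created_at": 1724494504,
           "kind": 1, "tags": [], "content": "hello", "sig": "..."}]

["REQ", "my-subscription", {"kinds": [1], "limit": 10}]

["CLOSE", "my-subscription"]
```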
-
Preston Pysh posted this event this morning:
![Nostr Image](https://laantungir.github.io/img_repo/7467e2bdc452235aacca83aa96334499c04934c51597c5213870f72ce027216f.png "BH2024")
Behind the scenes, the nostr event looks like this:
```
Event = {
"id":"a6fa7e1a73ce70c6fb01584a0519fd29788e59d9980402584e7a0af92cf0474a",
"pubkey":"85080d3bad70ccdcd7f74c29a44f55bb85cbcd3dd0cbb957da1d215bdb931204",
"created_at":1724494504,
"kind":1,
"tags":[
[
"p",
"6c237d8b3b120251c38c230c06d9e48f0d3017657c5b65c8c36112eb15c52aeb",
"",
"mention"
],
[
"p",
"77ec966fcd64f901152cad5dc7731c7c831fe22e02e3ae99ff14637e5a48ef9c",
"",
"mention"
],
[
"p",
"c1fc7771f5fa418fd3ac49221a18f19b42ccb7a663da8f04cbbf6c08c80d20b1",
"",
"mention"
],
[
"p",
"50d94fc2d8580c682b071a542f8b1e31a200b0508bab95a33bef0855df281d63",
"",
"mention"
],
[
"p",
"20d88bae0c38e6407279e6a83350a931e714f0135e013ea4a1b14f936b7fead5",
"",
"mention"
],
[
"p",
"273e7880d38d39a7fb238efcf8957a1b5b27e819127a8483e975416a0a90f8d2",
"",
"mention"
],
[
"t",
"BH2024"
]
],
"content":"Awesome Freedom Panel with...",
"sig":"2b64e461cd9f5a7aa8abbcbcfd953536f10a334b631a352cd4124e8e187c71aad08be9aefb6a68e5c060e676d06b61c553e821286ea42489f9e7e7107a1bf79a"
}
```
In nostr, all events have this form, so once you become familiar with the nostr event structure, things become pretty easy.
Look at the "tags" key. There are six "p" tags (pubkey) and one "t" tag (hashtag).
The p tags are public keys of people that are mentioned in the note. The t tags are for hashtags in the note.
When working with Nostr, it is common to have to extract certain tags. Here are some examples of how to do that with JavaScript array methods:
### Find the first "p" tag element:
```
Event.tags.find(item => item[0] === 'p')
[
'p',
'6c237d8b3b120251c38c230c06d9e48f0d3017657c5b65c8c36112eb15c52aeb',
'',
'mention'
]
```
### Same, but just return the pubkey:
```
Event.tags.find(item => item[0] === 'p')[1]
'6c237d8b3b120251c38c230c06d9e48f0d3017657c5b65c8c36112eb15c52aeb'
```
### Filter the array so I only get "p" tags:
```
Event.tags.filter(item => item[0] === 'p')
[
[
'p',
'6c237d8b3b120251c38c230c06d9e48f0d3017657c5b65c8c36112eb15c52aeb',
'',
'mention'
],
[
'p',
'77ec966fcd64f901152cad5dc7731c7c831fe22e02e3ae99ff14637e5a48ef9c',
'',
'mention'
],
[
'p',
'c1fc7771f5fa418fd3ac49221a18f19b42ccb7a663da8f04cbbf6c08c80d20b1',
'',
'mention'
],
[
'p',
'50d94fc2d8580c682b071a542f8b1e31a200b0508bab95a33bef0855df281d63',
'',
'mention'
],
[
'p',
'20d88bae0c38e6407279e6a83350a931e714f0135e013ea4a1b14f936b7fead5',
'',
'mention'
],
[
'p',
'273e7880d38d39a7fb238efcf8957a1b5b27e819127a8483e975416a0a90f8d2',
'',
'mention'
]
]
```
### Return an array with only the pubkeys in the "p" tags:
```
Event.tags.filter(item => item[0] === 'p').map(item => item[1])
[
'6c237d8b3b120251c38c230c06d9e48f0d3017657c5b65c8c36112eb15c52aeb',
'77ec966fcd64f901152cad5dc7731c7c831fe22e02e3ae99ff14637e5a48ef9c',
'c1fc7771f5fa418fd3ac49221a18f19b42ccb7a663da8f04cbbf6c08c80d20b1',
'50d94fc2d8580c682b071a542f8b1e31a200b0508bab95a33bef0855df281d63',
'20d88bae0c38e6407279e6a83350a931e714f0135e013ea4a1b14f936b7fead5',
'273e7880d38d39a7fb238efcf8957a1b5b27e819127a8483e975416a0a90f8d2'
]
```
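The same array methods work for any tag type; for example, hashtags:
### Return the hashtags from the "t" tags:
```
Event.tags.filter(item => item[0] === 't').map(item => item[1])
[
  'BH2024'
]
```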
-
After months of development I am excited to officially announce the first version of DVMDash (v0.1). DVMDash is a monitoring and debugging tool for all Data Vending Machine (DVM) activity on Nostr. The website is live at [https://dvmdash.live](https://dvmdash.live) and the code is available on [Github](https://github.com/dtdannen/dvmdash).
Data Vending Machines ([NIP-90](https://github.com/nostr-protocol/nips/blob/master/90.md)) offload computationally expensive tasks from relays and clients in a decentralized, free-market manner. They are especially useful for AI tools, algorithmic processing of user’s feeds, and many other use cases.
The long term goal of DVMDash is to become 1) a place to easily see what’s happening in the DVM ecosystem with metrics and graphs, and 2) provide real-time tools to help developers monitor, debug, and improve their DVMs.
DVMDash aims to enable users to answer these types of questions at a glance:
* What’s the most popular DVM right now?
* How much money is being paid to image generation DVMs?
* Is any DVM down at the moment? When was the last time that DVM completed a task?
* Have any DVMs failed to deliver after accepting payment? Did they refund that payment?
* How long does it take this DVM to respond?
* For task X, what’s the average amount of time it takes for a DVM to complete the task?
* … and more
For developers working with DVMs there is now a visual, graph based tool that shows DVM-chain activity. DVMs have already started calling other DVMs to assist with work. Soon, we will have humans in the loop monitoring DVM activity, or completing tasks themselves. The activity trace of which DVM is being called as part of a sub-task from another DVM will become complicated, especially because these decisions will be made at run-time and are not known ahead of time. Building a tool to help users and developers understand where a DVM is in this activity trace, whether it’s gotten stuck or is just taking a long time, will be invaluable. *For now, the website only shows 1 step of a dvm chain from a user's request.*
One of the main designs for the site is that it is highly _clickable_, meaning whenever you see a DVM, Kind, User, or Event ID, you can click it and open that up in a new page to inspect it.
Another aspect of this website is that it should be fast. If you submit a DVM request, you should see it in DVMDash within seconds, as well as events from DVMs interacting with your request. I have attempted to obtain DVM events from relays as quickly as possible and compute metrics over them within seconds.
This project makes use of a NoSQL database and a graph database, currently MongoDB and Neo4j, both of which have free community versions that can be run locally.
Finally, I’m grateful to nostr:npub10pensatlcfwktnvjjw2dtem38n6rvw8g6fv73h84cuacxn4c28eqyfn34f for supporting this project.
## Features in v0.1:
### Global Network Metrics:
This page shows the following metrics:
- **DVM Requests:** Number of unencrypted DVM requests (kind 5000-5999)
- **DVM Results:** Number of unencrypted DVM results (kind 6000-6999)
- **DVM Request Kinds Seen:** Number of unique kinds in the Kind range 5000-5999 (except for known non-DVM kinds 5666 and 5969)
- **DVM Result Kinds Seen:** Number of unique kinds in the Kind range 6000-6999 (except for known non-DVM kinds 6666 and 6969)
- **DVM Pub Keys Seen:** Number of unique pub keys that have written a kind 6000-6999 (except for known non-DVM kinds) or have published a kind 31990 event that specifies a ‘k’ tag value between 5000-5999
- **DVM Profiles (NIP-89) Seen**: Number of kind 31990 events that have a ‘k’ tag value for a kind in the range 5000-5999
- **Most Popular DVM**: The DVM that has produced the most result events (kind 6000-6999)
- **Most Popular Kind**: The Kind in range 5000-5999 that has the most requests by users.
- **24 hr DVM Requests**: Number of kind 5000-5999 events created in the last 24 hrs
- **24 hr DVM Results**: Number of kind 6000-6999 events created in the last 24 hours
- **1 week DVM Requests**: Number of kind 5000-5999 events created in the last week
- **1 week DVM Results**: Number of kind 6000-6999 events created in the last week
- **Unique Users of DVMs**: Number of unique pubkeys of kind 5000-5999 events
- **Total Sats Paid to DVMs**:
- This is an estimate.
- This value is likely a lower bound as it does not take into consideration subscriptions paid to DVMs
- This is calculated by counting the values of all invoices where:
- A DVM published a kind 7000 event requesting payment and containing an invoice
- The DVM later provided a DVM Result for the same job for which it requested payment.
- The assumption is that the invoice was paid, otherwise the DVM would not have done the work
- Note that because there are multiple ways to pay a DVM such as lightning invoices, ecash, and subscriptions, there is no guaranteed way to know whether a DVM has been paid. Additionally, there is no way to know that a DVM completed the job because some DVMs may not publish a final result event and instead send the user a DM or take some other kind of action.
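A rough sketch of that estimate in JavaScript (the event shapes follow NIP-90's "e" and "amount" tag conventions; the function and variable names are illustrative, not DVMDash's actual code):
```
// paymentRequests: kind 7000 events carrying an "amount" tag (millisats)
// results: kind 6000-6999 events; both tag the original job request with "e"
function estimateSatsPaid(paymentRequests, results) {
  const completedJobs = new Set(
    results.map(ev => ev.tags.find(t => t[0] === 'e')?.[1]).filter(Boolean)
  );
  return paymentRequests
    .filter(ev => completedJobs.has(ev.tags.find(t => t[0] === 'e')?.[1]))
    .reduce((sum, ev) => {
      const msats = Number(ev.tags.find(t => t[0] === 'amount')?.[1] ?? 0);
      return sum + msats / 1000; // millisats -> sats
    }, 0);
}
```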
### Recent Requests:
This page shows the most recent 3 events per kind, sorted by creation date. You should always be able to find the last 3 events of every DVM kind here.
### DVM Browser:
This page will either show a profile of a specific DVM, or when no DVM is given in the url, it will show a table of all DVMs with some high level stats. Users can click on a DVM in the table to load the DVM specific page.
### Kind Browser:
This page will either show data on a specific kind including all DVMs that have performed jobs of that kind, or when no kind is given, it will show a table summarizing activity across all Kinds.
### Debug:
This page shows the graph based visualization of all events, users, and DVMs involved in a single job as well as a table of all events in order from oldest to newest. When no event is given, this page shows the 200 most recent events where the user can click on an event in order to debug that job. The graph-based visualization allows the user to zoom in and out and move around the graph, as well as double click on any node in the graph (except invoices) to open up that event, user, or dvm in a new page.
### Playground:
This page is currently under development and may not work at the moment. If it does work, in the current state you can login with NIP-07 extension and broadcast a 5050 event with some text and then the page will show you events from DVMs. This page will be used to interact with DVMs live. A current good alternative to this feature, for some but not all kinds, is [https://vendata.io/](https://vendata.io/).
## Looking to the Future
I originally built DVMDash out of fear of missing out (FOMO); I wanted to build AI systems composed of DVMs, but my day job was taking up a lot of my time. I needed to know when someone was performing a new task or launching a new AI or Nostr tool!
I have a long list of DVMs and Agents I hope to build and I needed DVMDash to help me do it; I hope it helps you achieve your goals with Nostr, DVMs, and even AI. To this end, I wish for this tool to be useful to others, so if you would like a feature, please [submit a git issue here](https://github.com/dtdannen/dvmdash/issues/new) or _note_ me on Nostr!
### Immediate Next Steps:
- Refactoring code and removing code that is no longer used
- Improve documentation to run the project locally
- Adding a metric for number of encrypted requests
- Adding a metric for number of encrypted results
### Long Term Goals:
- Add more metrics based on community feedback
- Add plots showing metrics over time
- Add support for showing a multi-dvm chain in the graph based visualizer
- Add a real-time mode where the pages will auto update (currently the user must refresh the page)
- ... Add support for user requested features!
## Acknowledgements
There are some fantastic people working in the DVM space right now. Thank you to nostr:npub1drvpzev3syqt0kjrls50050uzf25gehpz9vgdw08hvex7e0vgfeq0eseet for making python bindings for nostr_sdk and for the recent asyncio upgrades! Thank you to nostr:npub1nxa4tywfz9nqp7z9zp7nr7d4nchhclsf58lcqt5y782rmf2hefjquaa6q8 for answering lots of questions about DVMs and for making the nostrdvm library. Thank you to nostr:npub1l2vyh47mk2p0qlsku7hg0vn29faehy9hy34ygaclpn66ukqp3afqutajft for making the original DVM NIP and [vendata.io](https://vendata.io/) which I use all the time for testing!
P.S. I rushed to get this out in time for Nostriga 2024; code refactoring will be coming :)
-
Discover how avatars and pseudonymous identities shape the governance of crypto communities and decentralized autonomous organizations (DAOs). In this talk we focus on how decentralized decision-making works in practice, on creating and managing avatar profiles, and on their role in online reputation systems. You will learn how to build an effective pseudonymous profile, get involved in various crypto projects, and earn cryptocurrency through your activity. We will also look at examples of successful projects and at strategies that will help you find your bearings, and succeed, in the dynamic world of decentralized communities.
-
Jan Kolčák comes from central Slovakia and performs under the stage name Deepologic. He has been making music for more than 10 years. He started out as a DJ who loved mixing club music in the deep-tech and afrohouse styles, but he kept feeling the pull to create his own tracks, so he began studying electronic music production. Eventually he released his first EP, called "Rezonancie". Learning is a lifelong process for him, so he keeps honing his skills in sound design and composition to make sure his tracks hold up both at home and in the club.
In 2023 he founded his own label, EarsDeep Records, where he gives up-and-coming producers a chance. The label is backed by established names of the Slovak alternative electronic scene. His priorities are freedom and refusing to be pigeonholed. As a classic deep-house track puts it: "We are all equal in the house of deep." Hand in hand with that freedom goes a love of new technologies, Bitcoin, and the ability to keep perspective, distance, and anonymity in the digital world.
He currently produces his own music, DJs, and runs a podcast where he publishes his mixed sets. At the Lunarpunk festival he will play a DJ set built from his own productions as well as tracks close to his heart.
[Podcast](https://fountain.fm/show/eYFu6V2SUlN4vC5qBKFk)
[Bandcamp](https://earsdeep.bandcamp.com/)
[Punk Nostr website](https://earsdeep-records.npub.pro/) or nprofile1qythwumn8ghj7un9d3shjtnwdaehgu3wvfskuep0qy88wumn8ghj7mn0wvhxcmmv9uq3xamnwvaz7tmsw4e8qmr9wpskwtn9wvhsz9thwden5te0wfjkccte9ejxzmt4wvhxjme0qyg8wumn8ghj7mn0wd68ytnddakj7qghwaehxw309aex2mrp0yh8qunfd4skctnwv46z7qpqguvns4ld8k2f3sugel055w7eq8zeewq7mp6w2stpnt6j75z60z3swy7h05
-
This workshop is for everyone who struggles to explain Bitcoin to their family, friends, partner, or colleagues. When the other side raises objections, we usually launch a counterattack and try to pull out our best arguments. In this workshop I will teach you a new approach to handling objections, and you will get to try it out in practice. The know-how applies not only to communicating Bitcoin but also to improving relationships, raising children, and a better personal life in general.
-
If you have been around Bitcoin for a few years, you may feel you already understand everything and that nothing can surprise you. You know what a wallet is, what a seed is, what an address is, maybe even what sha256 is. Are you sure? This talk will try to prove you wrong. 🙂
-
Fighting cancer with the metabolic method means turning the body's metabolism against the cancer: managing blood sugar and ketone levels through diet and exercise, timing different types of workouts, and combining conventional oncological treatment with fasting early on. I will cover which vitamins and supplements I take, and which I avoid, following the advice of Miriam, my dietician from the USA who specializes in cancer.
They say that what we don't measure, we don't manage... I measured, a lot and for a long time... there will be charts... and some fun too, I hope... 😉
-
Prediction markets offer a practical way to glimpse the future without relying on traditional, often inaccurate methods such as reading coffee grounds. In this presentation we will dive into the history and evolution of prediction markets, describe the impact they have had, and still have, on the availability and quality of information for the general public, and show how they are changing the market for that information. We will also look at how these markets give ordinary people access to reliable forecasts and how they can contribute to better decision-making in many areas of life.
-
We all notice the AI hype around us: almost every app now offers some "AI feature", AI startups are raising hundreds of millions, and Europe, as usual, is hard at work regulating and protecting us from the dangers of artificial intelligence. Yet the fruits of pairing artificial intelligence with humans are slowly showing, with many people reporting significant productivity gains at work and in creative pursuits (even though many hardcore creatives would gladly burn anyone at the stake for so much as uttering the abbreviation "AI"). In the first half of the talk we will look at the many ways AI can help us, whether at work or in everyday life.
Artificial neurons are practically jumping out of our breakfast cereal by now, but how they reach us varies a great deal, above all in whether companies ship them as closed or open-source models. In the second half of the talk we will look at the boom in open AI models and how we can put them to use.
-
What do you get when you cross the game SNAKE from the old Nokia 3310 with Bitcoin? The game [Chain Duel](https://www.youtube.com/watch?v=5hCI2MzxOzE)!
It is one of the best implementations of Lightning Network functionality and gaming in the Bitcoin world.
You can try it with friends [at this link](https://game.chainduel.net/). You will find the basic rules on the site, but we recommend getting to know them [by playing directly](https://game.chainduel.net/gamemenu).
Chain Duel has been winning crowds of fans at Bitcoin conferences around the world, and we are bringing it to the Lunarpunk festival as well.
A 1v1 multiplayer game decided by skill rather than luck will hook you. Come measure your strength against other bitcoiners and win various prizes on top of the satoshis themselves.
Come take part in the first official Chain Duel tournament in Slovakia!
To take part in the tournament, [advance registration is required](https://docs.google.com/forms/d/e/1FAIpQLScq96a-zM2i9FCkd3W3haNVcdKFTbPkXObNDh4vJwbmADsb0w/viewform).
-
What has a nomadic family, on the run from control for three years now, learned about control itself? What is freedom, really? Can it coexist with fear? With conflict? Let's forget about taxes, the police, and the state for a moment and look at freedom beyond the borders of social ideologies. Instead of hunting for more answers, let's find out whether new questions are still hiding somewhere. It might get a little esoteric.
Karel has been living a minimalist life in a camper van with his wife, two children, and one dog for over three years. On the road they started the YouTube channel "[Karel od Martiny](https://www.youtube.com/@KarelodMartiny)" about freedom, nomadism, anarchy, parenting, drugs, and other perfectly normal things.
You can also find him [on nostr](nostr:npub1y2se87uxc7fa0aenfqfx5hl9t2u2fjt4sp0tctlcr0efpauqtalqxfvr89).
-
## Welcome to Nostr!
**Introduction**
Is this your first time here on Nostr? Welcome! Nostr is a quirky acronym for "Notes and Other Stuff Transmitted by Relays", with a single goal: resisting censorship. It is an alternative to traditional social networks, communications, blogging, streaming, podcasting, and eventually even email (still in development), with decentralized features that put you, the user, in charge. You will never be pestered by an ad, captured by a centralized entity, or monetized by an algorithm.
Allow me to be your host! I'm Onigiri! I am exploring the world of Nostr, a decentralized communications protocol, and I write about the tools and the amazing developers of Nostr who bring this realm to life.
![](https://image.nostr.build/130e25ce8e83136e69732b6b37e503541fbd82b7598d58ba64e504e19402d297.jpg)
## Welcome to Nostr Wonderland
You are about to enter another digital world that will blow your mind with all the decentralized applications, clients, and sites you can use. You will never look at communications or social networks the same way again, all thanks to Nostr's cryptographic design, inspired by blockchain technology. When creating an account on Nostr, every user receives a key pair: one private and one public. These are the keys to your own kingdom. Whatever you write, sing, record, or create, it all belongs to you.
![](https://image.nostr.build/6f262291ddd72e45350d360f6e45fc9dd38740074de000ff4e48924bb838bf9c.jpg)
### A Pair of Gold and Silver Keys
A friend and I call this "identity by encryption", because your identity is encrypted. You can share your silver key, your "npub", with other users to connect and follow each other, and use your gold key, your "nsec", to log into your account and unlock many applications. Keep the gold key safe at all times. There is no longer any reason to be caged by the terms of social platforms ever again.
Onigirl
```
npub18jvyjwpmm65g8v9azmlvu8knd5m7xlxau08y8vt75n53jtkpz2ys6mqqu3
```
### Don't have a client yet? Pick the best option.
Find the app that is right for you! Use your gold key, "nsec", to access these wonderful tools. You can also visit this page to see all the applications. Before pasting your gold key into lots of applications, consider a "signer" for web3 sites. Please see the following image for more details, and consult the legend as well.
![](https://image.nostr.build/0db9d3efe15c3a02d40c3a5e65e7f6a64d50c6738dfe83c37330194d1b85059f.jpg)
### Get a Signer extension from the Chrome Web Store
A signer is a web browser extension. Nos2x and NostrConnect are widely accepted extensions for accessing Nostr. They simplify the process of logging into "web 3" sites: instead of copying and pasting your gold key, "nsec", every time, you keep it stored in the extension and grant it permission to access Nostr.
![](https://image.nostr.build/ff751710a0e3c808355a8d03721b633da7a8926a16f7e91b6e5d451fecc5d887.jpg)
### 👉⚡⚡Get a Bitcoin Lightning wallet to send/receive Zaps⚡⚡ (This is optional)
![](https://image.nostr.build/cde06896707080ed92fff2cce1c4dd50b0095f371b8d2762fce7eeebf3696cd4.jpg)
Here on Nostr we use Bitcoin's Lightning network (an L2). You will need a Lightning wallet to send and receive satoshis, the smallest denomination of a bitcoin (0.00000001 BTC). "Zaps" are a kind of micropayment on Nostr: if you like a user's content, it is customary to leave them a tip in the form of a "zap". For example, if you like this content, you can zap me some satoshis to reward my work. But you have only just arrived, so you don't have a wallet yet. Don't worry, I can help with that!
"[Stacker.News](https://stacker.news/r/Hamstr)" is a platform where users can earn sats for publishing articles and interacting with others.
![](https://image.nostr.build/91198aba183890b479629ee346ba1aa3c9565ab863c665d91d24b0260917a131.jpg)
Stacker.News is the easiest place to get a Bitcoin Lightning wallet address.
1. Log in with your signer extension - Nos2x or NostrConnect - then click on your profile, the string of letters and numbers in the top right corner. You will see something like this:
![](https://image.nostr.build/9d03196a588fc2936538803ae27a2cf057b1efaf4eff5558b9612b9fabc7bd31.png)
2. Click "edit" and choose a name you like. You can change it later if you wish.
![](https://image.nostr.build/34e9b0088dd5c12d4b9893e7818f0bee59f918699103ec0c45c01c384eb94164.jpg)
3. Click "save".
4. Write a bio; the SN community is very welcoming and will send you some satoshis to welcome you.
5. Your new Bitcoin Lightning wallet address will appear like this:
![](https://image.nostr.build/8a3f373cedcebd967aef3ba0fba1504446f3e2b4bb497f7405a05ef3e145aca2.png)
**^^Don't send "zaps" to this address; it is purely for educational purposes.**
6. You can now put your **new** Bitcoin Lightning wallet address into any client or app of your choice. To do this, go to your **profile page** and, under the wallet address field "**Lightning Address**", enter your new address, hit **"save"**, and that's it. Congratulations!
👉✨Over time, you may want to move to self-custody options, and perhaps even consider self-hosting your own LN node for better privacy. The good news is that stacker.news is also moving away from being a custodial wallet.
⭐NIP-05 DNS identity⭐
As on Twitter, a checkmark shows that you are from the same garden, "a human", and not an outlier like a weed or a "bot", but without the nefarious ways of big tech. In Nostr wonderland, it lets you map your silver key, "npub", to a DNS identifier. Once verified, you can shout it out and announce your new Nostr residence to share.
✨There are plenty of options, but if you have followed the steps above, this part becomes extremely easy.
👉✅Click your **"Profile"**, then **"Settings"**, scroll to the bottom, paste your *silver key*, **"npub"**, and click **"Save"**, and that's it! Use your Stacker.news lightning wallet as your NIP-05. Congratulations!!! You are now verified! Give it a few hours, and when you use your **"main"** client you should see a checkmark.
### Nostr, the server informant
![](https://image.nostr.build/2cf428a02cfc94365150112e12e541ff338390faf0718ed65957098123ca5016.jpg)
Instead of using a single instance or a centralized server, Nostr is built so that multiple databases exchange messages through "relays". Relays, which are neutral and non-discriminatory, store and broadcast public messages on the Nostr network. They transmit messages to all the other clients connected to them, securing communications across the decentralized network.
### My friends on Nostr welcome you!
Welcome to the party. Care for some tea?🍵
![](https://image.nostr.build/03c85e38f0b8a5ed0721281ae23aca4f7d217bef5b12c8d8c2c127c6bb3189f6.jpg)
### There is a lot more!
This is just the tip of the iceberg. Follow me as I keep exploring new lands and the developers, the knights who empower this ecosystem. Find me here for more content like this, and share it with other Nostr users. Meet the knights fighting for freedomTech (freedom technology) on Nostr and the projects they contribute to in order to make it a reality.💋
Onigirl
@npub18jvyjwpmm65g8v9azmlvu8knd5m7xlxau08y8vt75n53jtkpz2ys6mqqu3
----
🧡😻This guide has been carefully translated by miggymofongo
You can follow her here:
@npub1ajt9gp0prf4xrp4j07j9rghlcyukahncs0fw5ywr977jccued9nqrcc0cs
her [website](https://miguelalmodo.com/)
-
# test
## hello world
**haha**
lol
```js
const y = (m * x) + b;
```
-
Was the Trump ‘assassination attempt’ a staged event that he was in on? Was it an inside job by the secret service seeking to eliminate the number one enemy of the deep state? Or was Thomas Crooks simply competent (and lucky) enough to pull off a lone-wolf attack on the 45th president of the USA?
Whatever the truth is, which will surely come out over time, the ‘shooting’ has dramatically altered the course of American politics.
The most obvious narrative shift has been Trump’s new-found God-like status among the American right. The image of Trump holding his fist in the air, right ear bloodied, while urging Americans to “fight”, symbolises this. His long-time friend and UFC chairman Dana White said in the aftermath of the ‘shooting’: “He [Trump] is one of the toughest, most resilient human beings that I have ever met in my entire life….This guy is the legitimate ultimate American badass of all-time!” In addition, a section of Trump supporters turned up to the recent Republican National Convention [wearing fake bandages](https://www.bbc.co.uk/news/videos/cldy39vpv4qo) on their ears in a humorous yet sincere show of solidarity with their leader. These are just two examples among many of the relentless outpouring of adulation that Trump is receiving. He has become a martyr without having to die.
The Trump fist raise image and resulting strongman narrative is not what I will focus on, but it does provide crucial context for what I am going to say. Instead, I will look at the farcical performance of the secret service, which did not go unnoticed by social media users. Specifically, that of the female agents, and the resulting backlash against the perceived failures of DEI (diversity, equity and inclusion) within the US government. I will also look at why this narrative is being pushed by powerful players of the alternative-media industrial complex.
First, [footage emerged](https://www.youtube.com/watch?v=tAtu1YU-PHo) of one female agent who struggled to holster her firearm while looking completely disorientated as Trump, who could still have been in danger, fled the scene in a blacked-out SUV. It was reminiscent of Fredo Corleone's bumbling efforts to save his father Vito, who faced a failed assassination attempt of his own in The Godfather Part II.
Second, there is the [ridiculous still image](https://www.google.com/search?sca_esv=5b98f86993edd4ca&sca_upv=1&rlz=1C5CHFA_enGB1069GB1069&q=trump+female+secret+service&udm=2&fbs=AEQNm0Aa4sjWe7Rqy32pFwRj0UkWd8nbOJfsBGGB5IQQO6L3J_86uWOeqwdnV0yaSF-x2joQcoZ-0Q2Udkt2zEybT7HdcghX_cULItgDQ-ic0tx97HU0om4eiEoFQ7LkCUAIN0k5ckfuXbaYID2cdV_OmGsEy_vSEauNj1_Mmv2J6NjBnVEvjRAhAzO6zw58Qt0lVtZUf36m&sa=X&ved=2ahUKEwj8_NWx5riHAxX5bEEAHROMDHIQtKgLegQICxAB&biw=1600&bih=781&dpr=1.8#vhid=k3BFXz5a0rU7wM&vssid=mosaic) of the female secret service agent standing in front of Trump by the podium in order to provide cover for the former president amid a possible active shooter situation. The image is ludicrous because the female agent is not nearly as tall as Trump, and so his head, including his ear that was just ‘clipped’ by a 'bullet', remains completely exposed.
Third, the US secret service director Kimberly Cheatle, a woman, is [ultimately responsible](https://www.nytimes.com/2024/07/17/us/politics/kimberly-cheatle-secret-service.html) for the ‘near-assassination’ of Donald Trump and the seemingly unlimited ‘mistakes’ made by the agency on the day. For example, Cheatle did not station a secret service gunman on the roof used by Thomas Crooks because it was “too sloped”, an explanation that was shown to be comically bad in the immediate aftermath of the ‘shooting’, when photos emerged of cleaners standing on that same roof to clear Crooks’ ‘blood’ away.
The backlash against the female agents in question has been pretty relentless from the American right. But not just from anonymous MAGA social media users. Some of the key drivers of political narratives on the right have honed in on this issue and more generally against the perceived failings of DEI.
“There should not be any women in the Secret Service. These are supposed to be the very best, and none of the very best at this job are women,” [said right-leaning activist Matt Walsh](https://x.com/MattWalshBlog/status/1812492702493057338?lang=en) in a direct response the footage of the female secret service agents. Walsh works for The Daily Wire and rose to global prominence on the back of his documentary “What Is A Women?”. In other words, he is a key figure within the US culture war, who has a keen interest in discussions around gender.
Then there is Andrew Tate. One could have predicted where the former kickboxer would stand on this issue. [Tate posted a video to social media](https://www.youtube.com/watch?v=cpw5Sm46f-4&t=44s) in which he lambasted Kimberly Cheatle and the other female secret service agents in a visibly heated manner. “There’s not a female alive who’s ever going to jump in front of a bullet for anybody. She’s gonna piss her panties and hide.” Tate carries on in the same vein for the rest of the video, including a statement which, in my view, is a big clue about the narrative he is trying to push.
“Society as a whole will be better off if we return to our [gender] roles.”
Tate wants the secret service and the military to go back to being made up of men who are selected solely on competence. After years of military-aged males being 'disenfranchised' by the institutions they once felt proud to represent, the pendulum may be about to swing back.
At this point, it is worth mentioning that I believe both Matt Walsh and Andrew Tate are intelligence assets who push agendas to the wider public on behalf of the deep state, [as Miri AF explains here](https://miriaf.co.uk/the-most-important-question-about-andrew-tate-no-one-is-asking/). It is also worth considering that these women are akin to actors fulfilling their roles as incompetent, further driving Walsh and Tate's narrative. Their incompetence is not necessarily because they are women; they may simply be acting incapable because they are women. It sounds overly conspiratorial, but in my view it is plausible.
Okay, let’s continue down this ‘conspiracy theory’ rabbit hole for a little longer. It is widely understood that intelligence agencies use terror attacks to further their insidious agendas. A clear example of this is how George Bush’s government, in conjunction with the media and the intelligence apparatus, weaponised the fear brought about by 9/11 in order to invade Iraq.
So, what is the goal of Tate/Walsh and the deep state (and possibly Trump too) in pushing back against DEI and making the secret service/military a place that ‘respects’ male competence again? The same military whose commander-in-chief has God-like status, is the ‘ultimate American badass of all time’, and who ‘literally’ just ‘took a bullet’ for his country. The same military that is currently fighting escalating proxy wars against Russia and Iran, neither of which looks like ending anytime soon. Are American military-aged males being influenced to go and fight for their country in WW3 after years of being gaslighted?
At this point I want to state that I do not claim to know this for sure. I am stating a theory. I am asking questions that I believe need to be asked.
But I will leave you with this quote by a World War Two veteran that has been doing the rounds on social media recently.
“If president Trump was commander-in-chief I would go back to re-enlist today.” [Sgt. Bill Peril, 99, WW2 veteran. ](https://www.independent.co.uk/news/world/americas/us-politics/rnc-trump-world-war-veteran-b2581928.html)
-
# Bitcoin: A Peer-to-Peer Electronic Cash System
Satoshi Nakamoto
[satoshin@gmx.com](mailto:satoshin@gmx.com)
www.bitcoin.org
**Abstract.** A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution. Digital signatures provide part of the solution, but the main benefits are lost if a trusted third party is still required to prevent double-spending. We propose a solution to the double-spending problem using a peer-to-peer network. The network timestamps transactions by hashing them into an ongoing chain of hash-based proof-of-work, forming a record that cannot be changed without redoing the proof-of-work. The longest chain not only serves as proof of the sequence of events witnessed, but proof that it came from the largest pool of CPU power. As long as a majority of CPU power is controlled by nodes that are not cooperating to attack the network, they'll generate the longest chain and outpace attackers. The network itself requires minimal structure. Messages are broadcast on a best effort basis, and nodes can leave and rejoin the network at will, accepting the longest proof-of-work chain as proof of what happened while they were gone.
## 1. Introduction
Commerce on the Internet has come to rely almost exclusively on financial institutions serving as trusted third parties to process electronic payments. While the system works well enough for most transactions, it still suffers from the inherent weaknesses of the trust based model. Completely non-reversible transactions are not really possible, since financial institutions cannot avoid mediating disputes. The cost of mediation increases transaction costs, limiting the minimum practical transaction size and cutting off the possibility for small casual transactions, and there is a broader cost in the loss of ability to make non-reversible payments for non-reversible services. With the possibility of reversal, the need for trust spreads. Merchants must be wary of their customers, hassling them for more information than they would otherwise need. A certain percentage of fraud is accepted as unavoidable. These costs and payment uncertainties can be avoided in person by using physical currency, but no mechanism exists to make payments over a communications channel without a trusted party.
What is needed is an electronic payment system based on cryptographic proof instead of trust, allowing any two willing parties to transact directly with each other without the need for a trusted third party. Transactions that are computationally impractical to reverse would protect sellers from fraud, and routine escrow mechanisms could easily be implemented to protect buyers. In this paper, we propose a solution to the double-spending problem using a peer-to-peer distributed timestamp server to generate computational proof of the chronological order of transactions. The system is secure as long as honest nodes collectively control more CPU power than any cooperating group of attacker nodes.
## 2. Transactions
We define an electronic coin as a chain of digital signatures. Each owner transfers the coin to the next by digitally signing a hash of the previous transaction and the public key of the next owner and adding these to the end of the coin. A payee can verify the signatures to verify the chain of ownership.
```
┌─────────────────────┐ ┌─────────────────────┐ ┌─────────────────────┐
│ │ │ │ │ │
│ Transaction │ │ Transaction │ │ Transaction │
│ │ │ │ │ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │ Owner 1's │ │ │ │ Owner 2's │ │ │ │ Owner 3's │ │
│ │ Public Key │ │ │ │ Public Key │ │ │ │ Public Key │ │
│ └───────┬─────┘ │ │ └───────┬─────┘ │ │ └───────┬─────┘ │
│ │ . │ │ │ . │ │ │ │
──────┼─────────┐ │ . ├───────────────┼─────────┐ │ . ├──────────────┼─────────┐ │ │
│ │ │ . │ │ │ │ . │ │ │ │ │
│ ┌──▼─▼──┐ . │ │ ┌──▼─▼──┐ . │ │ ┌──▼─▼──┐ │
│ │ Hash │ . │ │ │ Hash │ . │ │ │ Hash │ │
│ └───┬───┘ . │ Verify │ └───┬───┘ . │ Verify │ └───┬───┘ │
│ │ ............................ │ ........................... │ │
│ │ │ │ │ │ │ │ │ │ │
│ ┌──────▼──────┐ │ │ ┌─▼────▼──────┐ │ │ ┌─▼────▼──────┐ │
│ │ Owner 0's │ │ Sign │ │ Owner 1's │ │ Sign │ │ Owner 2's │ │
│ │ Signature │ │ ...........─►│ Signature │ │ ...........─►│ Signature │ │
│ └─────────────┘ │ . │ └─────────────┘ │ . │ └─────────────┘ │
│ │ . │ │ . │ │
└─────────────────────┘ . └─────────────────────┘ . └─────────────────────┘
. .
┌─────────────┐ . ┌─────────────┐ . ┌─────────────┐
│ Owner 1's │........... │ Owner 2's │.......... │ Owner 3's │
│ Private Key │ │ Private Key │ │ Private Key │
└─────────────┘ └─────────────┘ └─────────────┘
```
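As an illustrative aside (not part of the paper itself), one link in this ownership chain can be pictured as a small record. The field sizes below are assumptions for a typical SHA-256/ECDSA setup, not something the paper specifies:
```c
#include <stdint.h>

/* Illustrative sketch only: one link in the chain of ownership.
   Sizes assume SHA-256 hashes, compressed public keys, and
   DER-encoded ECDSA signatures (an upper bound). */
typedef struct TxLink {
    uint8_t prev_tx_hash[32];    /* hash of the previous transaction     */
    uint8_t next_owner_pub[33];  /* public key of the owner being paid   */
    uint8_t owner_sig[72];       /* current owner's signature over the
                                    two fields above, closing the link   */
} TxLink;
```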
The problem of course is the payee can't verify that one of the owners did not double-spend the coin. A common solution is to introduce a trusted central authority, or mint, that checks every transaction for double spending. After each transaction, the coin must be returned to the mint to issue a new coin, and only coins issued directly from the mint are trusted not to be double-spent. The problem with this solution is that the fate of the entire money system depends on the company running the mint, with every transaction having to go through them, just like a bank.
We need a way for the payee to know that the previous owners did not sign any earlier transactions. For our purposes, the earliest transaction is the one that counts, so we don't care about later attempts to double-spend. The only way to confirm the absence of a transaction is to be aware of all transactions. In the mint based model, the mint was aware of all transactions and decided which arrived first. To accomplish this without a trusted party, transactions must be publicly announced [^1], and we need a system for participants to agree on a single history of the order in which they were received. The payee needs proof that at the time of each transaction, the majority of nodes agreed it was the first received.
## 3. Timestamp Server
The solution we propose begins with a timestamp server. A timestamp server works by taking a hash of a block of items to be timestamped and widely publishing the hash, such as in a newspaper or Usenet post [^2] [^3] [^4] [^5]. The timestamp proves that the data must have existed at the time, obviously, in order to get into the hash. Each timestamp includes the previous timestamp in its hash, forming a chain, with each additional timestamp reinforcing the ones before it.
```
┌──────┐ ┌──────┐
────────────►│ ├───────────────────────►│ ├───────────────────►
│ Hash │ │ Hash │
┌───►│ │ ┌───►│ │
│ └──────┘ │ └──────┘
│ │
┌┴──────────────────────────┐ ┌┴──────────────────────────┐
│ Block │ │ Block │
│ ┌─────┐ ┌─────┐ ┌─────┐ │ │ ┌─────┐ ┌─────┐ ┌─────┐ │
│ │Item │ │Item │ │... │ │ │ │Item │ │Item │ │... │ │
│ └─────┘ └─────┘ └─────┘ │ │ └─────┘ └─────┘ └─────┘ │
│ │ │ │
└───────────────────────────┘ └───────────────────────────┘
```
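A minimal sketch of that chaining rule, assuming a hypothetical `sha256()` helper (the paper does not prescribe a particular hash function here):
```c
#include <stdint.h>
#include <string.h>

/* Assumed helper, not defined here: writes the SHA-256 digest of
   `data` into `out`. */
void sha256(const uint8_t *data, size_t len, uint8_t out[32]);

/* Each timestamp hashes the previous timestamp together with the
   current block of items, so every new hash reinforces the chain. */
void timestamp_block(const uint8_t prev_hash[32],
                     const uint8_t *items, size_t items_len,
                     uint8_t out_hash[32])
{
    uint8_t buf[32 + items_len];      /* C99 variable-length array */
    memcpy(buf, prev_hash, 32);
    memcpy(buf + 32, items, items_len);
    sha256(buf, sizeof buf, out_hash);
}
```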
## 4. Proof-of-Work
To implement a distributed timestamp server on a peer-to-peer basis, we will need to use a proof-of-work system similar to Adam Back's Hashcash [^6], rather than newspaper or Usenet posts. The proof-of-work involves scanning for a value that when hashed, such as with SHA-256, the hash begins with a number of zero bits. The average work required is exponential in the number of zero bits required and can be verified by executing a single hash.
For our timestamp network, we implement the proof-of-work by incrementing a nonce in the block until a value is found that gives the block's hash the required zero bits. Once the CPU effort has been expended to make it satisfy the proof-of-work, the block cannot be changed without redoing the work. As later blocks are chained after it, the work to change the block would include redoing all the blocks after it.
```
┌────────────────────────────────────────┐ ┌────────────────────────────────────────┐
│ Block │ │ Block │
│ ┌──────────────────┐ ┌──────────────┐ │ │ ┌──────────────────┐ ┌──────────────┐ │
───────┼─►│ Prev Hash │ │ Nonce │ ├──────┼─►│ Prev Hash │ │ Nonce │ │
│ └──────────────────┘ └──────────────┘ │ │ └──────────────────┘ └──────────────┘ │
│ │ │ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Tx │ │ Tx │ │ ... │ │ │ │ Tx │ │ Tx │ │ ... │ │
│ └──────────┘ └──────────┘ └──────────┘ │ │ └──────────┘ └──────────┘ └──────────┘ │
│ │ │ │
└────────────────────────────────────────┘ └────────────────────────────────────────┘
```
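To make the nonce search concrete, here is a minimal sketch, simplified relative to real Bitcoin (which double-hashes the header and compares against a full 256-bit target). The `sha256()` helper and the header layout are assumptions:
```c
#include <stdint.h>
#include <string.h>

void sha256(const uint8_t *data, size_t len, uint8_t out[32]); /* assumed helper */

/* Returns 1 if hash `h` begins with at least `bits` zero bits. */
static int has_leading_zero_bits(const uint8_t h[32], int bits)
{
    for (int i = 0; i < bits; i++)
        if (h[i / 8] & (0x80 >> (i % 8)))
            return 0;
    return 1;
}

/* Increment the nonce until the header hashes to the required
   difficulty. We assume an 80-byte header whose last 4 bytes
   hold the nonce. */
uint32_t mine(uint8_t header[80], int difficulty_bits)
{
    uint32_t nonce = 0;
    uint8_t hash[32];
    for (;;) {
        memcpy(header + 76, &nonce, sizeof nonce);
        sha256(header, 80, hash);
        if (has_leading_zero_bits(hash, difficulty_bits))
            return nonce;   /* proof-of-work found */
        nonce++;
    }
}
```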
The proof-of-work also solves the problem of determining representation in majority decision making. If the majority were based on one-IP-address-one-vote, it could be subverted by anyone able to allocate many IPs. Proof-of-work is essentially one-CPU-one-vote. The majority decision is represented by the longest chain, which has the greatest proof-of-work effort invested in it. If a majority of CPU power is controlled by honest nodes, the honest chain will grow the fastest and outpace any competing chains. To modify a past block, an attacker would have to redo the proof-of-work of the block and all blocks after it and then catch up with and surpass the work of the honest nodes. We will show later that the probability of a slower attacker catching up diminishes exponentially as subsequent blocks are added.
To compensate for increasing hardware speed and varying interest in running nodes over time, the proof-of-work difficulty is determined by a moving average targeting an average number of blocks per hour. If they're generated too fast, the difficulty increases.
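The paper specifies only "a moving average targeting an average number of blocks per hour". As a hedged sketch of one possible retarget rule (Bitcoin's deployed rule uses a 2016-block window, which the paper does not mention), the direction of the adjustment looks like this:
```c
/* Sketch of a retarget step: if the window of blocks arrived
   faster than intended, difficulty rises proportionally. */
double retarget(double old_difficulty,
                double actual_window_seconds,
                double target_window_seconds)
{
    return old_difficulty * (target_window_seconds / actual_window_seconds);
}
```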
## 5. Network
The steps to run the network are as follows:
1. New transactions are broadcast to all nodes.
2. Each node collects new transactions into a block.
3. Each node works on finding a difficult proof-of-work for its block.
4. When a node finds a proof-of-work, it broadcasts the block to all nodes.
5. Nodes accept the block only if all transactions in it are valid and not already spent.
6. Nodes express their acceptance of the block by working on creating the next block in the chain, using the hash of the accepted block as the previous hash.
Nodes always consider the longest chain to be the correct one and will keep working on extending it. If two nodes broadcast different versions of the next block simultaneously, some nodes may receive one or the other first. In that case, they work on the first one they received, but save the other branch in case it becomes longer. The tie will be broken when the next proof-of-work is found and one branch becomes longer; the nodes that were working on the other branch will then switch to the longer one.
New transaction broadcasts do not necessarily need to reach all nodes. As long as they reach many nodes, they will get into a block before long. Block broadcasts are also tolerant of dropped messages. If a node does not receive a block, it will request it when it receives the next block and realizes it missed one.
## 6. Incentive
By convention, the first transaction in a block is a special transaction that starts a new coin owned by the creator of the block. This adds an incentive for nodes to support the network, and provides a way to initially distribute coins into circulation, since there is no central authority to issue them. The steady addition of a constant amount of new coins is analogous to gold miners expending resources to add gold to circulation. In our case, it is CPU time and electricity that is expended.
The incentive can also be funded with transaction fees. If the output value of a transaction is less than its input value, the difference is a transaction fee that is added to the incentive value of the block containing the transaction. Once a predetermined number of coins have entered circulation, the incentive can transition entirely to transaction fees and be completely inflation free.
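The fee rule is simple enough to state as code. A minimal sketch, with the integer value type as an assumption:
```c
/* The fee is whatever input value the outputs do not claim. */
typedef long long amount_t;   /* assumed smallest-unit integer type */

amount_t transaction_fee(const amount_t *in, int n_in,
                         const amount_t *out, int n_out)
{
    amount_t total_in = 0, total_out = 0;
    for (int i = 0; i < n_in;  i++) total_in  += in[i];
    for (int i = 0; i < n_out; i++) total_out += out[i];
    return total_in - total_out;  /* must be >= 0 for a valid transaction */
}
```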
The incentive may help encourage nodes to stay honest. If a greedy attacker is able to assemble more CPU power than all the honest nodes, he would have to choose between using it to defraud people by stealing back his payments, or using it to generate new coins. He ought to find it more profitable to play by the rules, such rules that favour him with more new coins than everyone else combined, than to undermine the system and the validity of his own wealth.
## 7. Reclaiming Disk Space
Once the latest transaction in a coin is buried under enough blocks, the spent transactions before it can be discarded to save disk space. To facilitate this without breaking the block's hash, transactions are hashed in a Merkle Tree [^7] [^2] [^5], with only the root included in the block's hash. Old blocks can then be compacted by stubbing off branches of the tree. The interior hashes do not need to be stored.
```
┌──────────────────────────────────────────┐ ┌──────────────────────────────────────────┐
│ │ │ │
│ Block ┌─────────────────────────────┐ │ │ Block ┌─────────────────────────────┐ │
│ │ Block Header (Block Hash) │ │ │ │ Block Header (Block Hash) │ │
│ │ ┌────────────┐ ┌─────────┐ │ │ │ │ ┌────────────┐ ┌─────────┐ │ │
│ │ │ Prev Hash │ │ Nonce │ │ │ │ │ │ Prev Hash │ │ Nonce │ │ │
│ │ └────────────┘ └─────────┘ │ │ │ │ └────────────┘ └─────────┘ │ │
│ │ │ │ │ │ │ │
│ │ ┌─────────────┐ │ │ │ │ ┌─────────────┐ │ │
│ │ │ Root Hash │ │ │ │ │ │ Root Hash │ │ │
│ │ └─────▲─▲─────┘ │ │ │ │ └─────▲─▲─────┘ │ │
│ │ │ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │ │ │ │
│ └───────────┼─┼───────────────┘ │ │ └───────────┼─┼───────────────┘ │
│ │ │ │ │ │ │ │
│ .......... │ │ .......... │ │ ┌────────┐ │ │ .......... │
│ . ─────┘ └─────. . │ │ │ ├────┘ └─────. . │
│ . Hash01 . . Hash23 . │ │ │ Hash01 │ . Hash23 . │
│ .▲.....▲.. .▲.....▲.. │ │ │ │ .▲.....▲.. │
│ │ │ │ │ │ │ └────────┘ │ │ │
│ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │ │
│ .....│.. ..│..... .....│.. ..│..... │ │ ┌────┴─┐ ..│..... │
│ . . . . . . . . │ │ │ │ . . │
│ .Hash0 . .Hash1 . .Hash2 . .Hash3 . │ │ │Hash2 │ .Hash3 . │
│ ...▲.... ...▲.... ...▲.... ...▲.... │ │ │ │ . . │
│ │ │ │ │ │ │ └──────┘ ...▲.... │
│ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │
│ ┌──┴───┐ ┌──┴───┐ ┌──┴───┐ ┌──┴───┐ │ │ ┌──┴───┐ │
│ │ Tx0 │ │ Tx1 │ │ Tx2 │ │ Tx3 │ │ │ │ Tx3 │ │
│ └──────┘ └──────┘ └──────┘ └──────┘ │ │ └──────┘ │
│ │ │ │
└──────────────────────────────────────────┘ └──────────────────────────────────────────┘
Transactions Hashed in a Merkle Tree After Pruning Tx0-2 from the Block
```
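A minimal sketch of computing the root, assuming for brevity a power-of-two leaf count and the same hypothetical `sha256()` helper (Bitcoin itself double-hashes and duplicates an odd leaf):
```c
#include <stdint.h>
#include <string.h>

void sha256(const uint8_t *data, size_t len, uint8_t out[32]); /* assumed helper */

/* Fold n leaf hashes pairwise until one root remains.
   The `hashes` array is reused as scratch space. */
void merkle_root(uint8_t hashes[][32], size_t n, uint8_t root[32])
{
    uint8_t buf[64];
    while (n > 1) {
        for (size_t i = 0; i < n; i += 2) {
            memcpy(buf,      hashes[i],     32);   /* left child  */
            memcpy(buf + 32, hashes[i + 1], 32);   /* right child */
            sha256(buf, 64, hashes[i / 2]);        /* parent node */
        }
        n /= 2;
    }
    memcpy(root, hashes[0], 32);
}
```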
A block header with no transactions would be about 80 bytes. If we suppose blocks are generated every 10 minutes, 80 bytes * 6 * 24 * 365 = 4.2MB per year. With computer systems typically selling with 2GB of RAM as of 2008, and Moore's Law predicting current growth of 1.2GB per year, storage should not be a problem even if the block headers must be kept in memory.
## 8. Simplified Payment Verification
It is possible to verify payments without running a full network node. A user only needs to keep a copy of the block headers of the longest proof-of-work chain, which he can get by querying network nodes until he's convinced he has the longest chain, and obtain the Merkle branch linking the transaction to the block it's timestamped in. He can't check the transaction for himself, but by linking it to a place in the chain, he can see that a network node has accepted it, and blocks added after it further confirm the network has accepted it.
```
Longest Proof-of-Work Chain
┌────────────────────────────────────────┐ ┌────────────────────────────────────────┐ ┌────────────────────────────────────────┐
│ Block Header │ │ Block Header │ │ Block Header │
│ ┌──────────────────┐ ┌──────────────┐ │ │ ┌──────────────────┐ ┌──────────────┐ │ │ ┌──────────────────┐ ┌──────────────┐ │
───────┼─►│ Prev Hash │ │ Nonce │ ├──────┼─►│ Prev Hash │ │ Nonce │ ├───────┼─►│ Prev Hash │ │ Nonce │ ├────────►
│ └──────────────────┘ └──────────────┘ │ │ └──────────────────┘ └──────────────┘ │ │ └──────────────────┘ └──────────────┘ │
│ │ │ │ │ │
│ ┌───────────────────┐ │ │ ┌────────────────────┐ │ │ ┌───────────────────┐ │
│ │ Merkle Root │ │ │ │ Merkle Root │ │ │ │ Merkle Root │ │
│ └───────────────────┘ │ │ └────────▲─▲─────────┘ │ │ └───────────────────┘ │
│ │ │ │ │ │ │ │
└────────────────────────────────────────┘ └─────────────┼─┼────────────────────────┘ └────────────────────────────────────────┘
│ │
│ │
┌────────┐ │ │ ..........
│ ├────┘ └─────. .
│ Hash01 │ . Hash23 .
│ │ .▲.....▲..
└────────┘ │ │
│ │
│ │ Merkle Branch for Tx3
│ │
┌─────┴─┐ ..│.....
│ │ . .
│ Hash2 │ .Hash3 .
│ │ . .
└───────┘ ...▲....
│
│
┌───┴───┐
│ Tx3 │
└───────┘
```
As such, the verification is reliable as long as honest nodes control the network, but is more vulnerable if the network is overpowered by an attacker. While network nodes can verify transactions for themselves, the simplified method can be fooled by an attacker's fabricated transactions for as long as the attacker can continue to overpower the network. One strategy to protect against this would be to accept alerts from network nodes when they detect an invalid block, prompting the user's software to download the full block and alerted transactions to confirm the inconsistency. Businesses that receive frequent payments will probably still want to run their own nodes for more independent security and quicker verification.
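The branch check itself is small. A minimal sketch, again assuming a `sha256()` helper and single (not double) hashing:
```c
#include <stdint.h>
#include <string.h>

void sha256(const uint8_t *data, size_t len, uint8_t out[32]); /* assumed helper */

/* Hash the transaction up the Merkle branch and compare the result
   with the root in the block header. is_right[i] says whether the
   i-th sibling sits to the right of the running hash. */
int verify_merkle_branch(const uint8_t tx_hash[32],
                         const uint8_t siblings[][32],
                         const int *is_right, size_t depth,
                         const uint8_t expected_root[32])
{
    uint8_t cur[32], buf[64];
    memcpy(cur, tx_hash, 32);
    for (size_t i = 0; i < depth; i++) {
        memcpy(buf,      is_right[i] ? cur         : siblings[i], 32);
        memcpy(buf + 32, is_right[i] ? siblings[i] : cur,         32);
        sha256(buf, 64, cur);
    }
    return memcmp(cur, expected_root, 32) == 0;
}
```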
## 9. Combining and Splitting Value
Although it would be possible to handle coins individually, it would be unwieldy to make a separate transaction for every cent in a transfer. To allow value to be split and combined, transactions contain multiple inputs and outputs. Normally there will be either a single input from a larger previous transaction or multiple inputs combining smaller amounts, and at most two outputs: one for the payment, and one returning the change, if any, back to the sender.
```
┌──────────────────────┐
│ Transaction │
│ │
│ ┌─────┐ ┌─────┐ │
─────┼──►│ in │ │ out │ ──┼─────►
│ └─────┘ └─────┘ │
│ │
│ │
│ ┌─────┐ ┌─────┐ │
─────┼──►│ in │ │ ... │ ──┼─────►
│ └─────┘ └─────┘ │
│ │
│ │
│ ┌─────┐ │
─────┼──►│... │ │
│ └─────┘ │
│ │
└──────────────────────┘
```
It should be noted that fan-out, where a transaction depends on several transactions, and those transactions depend on many more, is not a problem here. There is never the need to extract a complete standalone copy of a transaction's history.
## 10. Privacy
The traditional banking model achieves a level of privacy by limiting access to information to the parties involved and the trusted third party. The necessity to announce all transactions publicly precludes this method, but privacy can still be maintained by breaking the flow of information in another place: by keeping public keys anonymous. The public can see that someone is sending an amount to someone else, but without information linking the transaction to anyone. This is similar to the level of information released by stock exchanges, where the time and size of individual trades, the "tape", is made public, but without telling who the parties were.
```
Traditional Privacy Models │
┌─────────────┐ ┌──────────────┐ │ ┌────────┐
┌──────────────┐ ┌──────────────┐ │ Trusted │ │ │ │ │ │
│ Identities ├──┤ Transactions ├───►│ Third Party ├──►│ Counterparty │ │ │ Public │
└──────────────┘ └──────────────┘ │ │ │ │ │ │ │
└─────────────┘ └──────────────┘ │ └────────┘
│
New Privacy Model
┌────────┐
┌──────────────┐ │ ┌──────────────┐ │ │
│ Identities │ │ │ Transactions ├───►│ Public │
└──────────────┘ │ └──────────────┘ │ │
└────────┘
```
As an additional firewall, a new key pair should be used for each transaction to keep them from being linked to a common owner. Some linking is still unavoidable with multi-input transactions, which necessarily reveal that their inputs were owned by the same owner. The risk is that if the owner of a key is revealed, linking could reveal other transactions that belonged to the same owner.
## 11. Calculations
We consider the scenario of an attacker trying to generate an alternate chain faster than the honest chain. Even if this is accomplished, it does not throw the system open to arbitrary changes, such as creating value out of thin air or taking money that never belonged to the attacker. Nodes are not going to accept an invalid transaction as payment, and honest nodes will never accept a block containing them. An attacker can only try to change one of his own transactions to take back money he recently spent.
The race between the honest chain and an attacker chain can be characterized as a Binomial Random Walk. The success event is the honest chain being extended by one block, increasing its lead by +1, and the failure event is the attacker's chain being extended by one block, reducing the gap by -1.
The probability of an attacker catching up from a given deficit is analogous to a Gambler's Ruin problem. Suppose a gambler with unlimited credit starts at a deficit and plays potentially an infinite number of trials to try to reach breakeven. We can calculate the probability he ever reaches breakeven, or that an attacker ever catches up with the honest chain, as follows [^8]:
```plaintext
p = probability an honest node finds the next block
q = probability the attacker finds the next block
qz = probability the attacker will ever catch up from z blocks behind
```
$$
q_z =
\begin{cases}
1 & \text{if } p \leq q \\
\left(\frac{q}{p}\right)^z & \text{if } p > q
\end{cases}
$$
Given our assumption that p > q, the probability drops exponentially as the number of blocks the attacker has to catch up with increases. With the odds against him, if he doesn't make a lucky lunge forward early on, his chances become vanishingly small as he falls further behind.
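The case analysis above translates directly into code (a sketch mirroring the formula, before the Poisson weighting introduced below):
```c
#include <math.h>

/* Probability an attacker with share q of the hash power ever
   erases a deficit of z blocks, per the formula above. */
double catch_up_probability(double q, int z)
{
    double p = 1.0 - q;
    if (p <= q)
        return 1.0;          /* attacker majority: success is certain */
    return pow(q / p, z);    /* otherwise exponentially small in z    */
}
```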
We now consider how long the recipient of a new transaction needs to wait before being sufficiently certain the sender can't change the transaction. We assume the sender is an attacker who wants to make the recipient believe he paid him for a while, then switch it to pay back to himself after some time has passed. The receiver will be alerted when that happens, but the sender hopes it will be too late.
The receiver generates a new key pair and gives the public key to the sender shortly before signing. This prevents the sender from preparing a chain of blocks ahead of time by working on it continuously until he is lucky enough to get far enough ahead, then executing the transaction at that moment. Once the transaction is sent, the dishonest sender starts working in secret on a parallel chain containing an alternate version of his transaction.
The recipient waits until the transaction has been added to a block and z blocks have been linked after it. He doesn't know the exact amount of progress the attacker has made, but assuming the honest blocks took the average expected time per block, the attacker's potential progress will be a Poisson distribution with expected value:
$$
\lambda = z\frac{q}{p}
$$
To get the probability the attacker could still catch up now, we multiply the Poisson density for each amount of progress he could have made by the probability he could catch up from that point:
$$
\sum_{k=0}^{\infty} \frac{\lambda^k e^{-\lambda}}{k!} \cdot \left\{
\begin{array}{cl}
\left(\frac{q}{p}\right)^{(z-k)} & \text{if } k \leq z \\
1 & \text{if } k > z
\end{array}
\right.
$$
Rearranging to avoid summing the infinite tail of the distribution...
$$
1 - \sum_{k=0}^{z} \frac{\lambda^k e^{-\lambda}}{k!} \left(1-\left(\frac{q}{p}\right)^{(z-k)}\right)
$$
Converting to C code...
```c
#include <math.h>

/* Probability an attacker with hash-power share q ever catches up
   when the receiver waits for z confirmations: the Poisson-weighted
   sum of the catch-up probabilities derived above. */
double AttackerSuccessProbability(double q, int z)
{
    double p = 1.0 - q;
    double lambda = z * (q / p);   /* expected attacker progress */
    double sum = 1.0;
    int i, k;
    for (k = 0; k <= z; k++)
    {
        /* Poisson density: attacker has mined exactly k blocks */
        double poisson = exp(-lambda);
        for (i = 1; i <= k; i++)
            poisson *= lambda / i;
        /* subtract the cases where the attacker still fails */
        sum -= poisson * (1 - pow(q / p, z - k));
    }
    return sum;
}
```
Running some results, we can see the probability drop off exponentially with z.
```plaintext
q=0.1
z=0 P=1.0000000
z=1 P=0.2045873
z=2 P=0.0509779
z=3 P=0.0131722
z=4 P=0.0034552
z=5 P=0.0009137
z=6 P=0.0002428
z=7 P=0.0000647
z=8 P=0.0000173
z=9 P=0.0000046
z=10 P=0.0000012
q=0.3
z=0 P=1.0000000
z=5 P=0.1773523
z=10 P=0.0416605
z=15 P=0.0101008
z=20 P=0.0024804
z=25 P=0.0006132
z=30 P=0.0001522
z=35 P=0.0000379
z=40 P=0.0000095
z=45 P=0.0000024
z=50 P=0.0000006
```
Solving for P less than 0.1%...
```plaintext
P < 0.001
q=0.10 z=5
q=0.15 z=8
q=0.20 z=11
q=0.25 z=15
q=0.30 z=24
q=0.35 z=41
q=0.40 z=89
q=0.45 z=340
```
## 12. Conclusion
We have proposed a system for electronic transactions without relying on trust. We started with the usual framework of coins made from digital signatures, which provides strong control of ownership, but is incomplete without a way to prevent double-spending. To solve this, we proposed a peer-to-peer network using proof-of-work to record a public history of transactions that quickly becomes computationally impractical for an attacker to change if honest nodes control a majority of CPU power. The network is robust in its unstructured simplicity. Nodes work all at once with little coordination. They do not need to be identified, since messages are not routed to any particular place and only need to be delivered on a best effort basis. Nodes can leave and rejoin the network at will, accepting the proof-of-work chain as proof of what happened while they were gone. They vote with their CPU power, expressing their acceptance of valid blocks by working on extending them and rejecting invalid blocks by refusing to work on them. Any needed rules and incentives can be enforced with this consensus mechanism.
### References
---
[^1]: W. Dai, "b-money," http://www.weidai.com/bmoney.txt, 1998.
[^2]: H. Massias, X.S. Avila, and J.-J. Quisquater, "Design of a secure timestamping service with minimal trust requirements," In 20th Symposium on Information Theory in the Benelux, May 1999.
[^3]: S. Haber, W.S. Stornetta, "How to time-stamp a digital document," In Journal of Cryptology, vol 3, no 2, pages 99-111, 1991.
[^4]: D. Bayer, S. Haber, W.S. Stornetta, "Improving the efficiency and reliability of digital time-stamping," In Sequences II: Methods in Communication, Security and Computer Science, pages 329-334, 1993.
[^5]: S. Haber, W.S. Stornetta, "Secure names for bit-strings," In Proceedings of the 4th ACM Conference on Computer and Communications Security, pages 28-35, April 1997.
[^6]: A. Back, "Hashcash - a denial of service counter-measure," http://www.hashcash.org/papers/hashcash.pdf, 2002.
[^7]: R.C. Merkle, "Protocols for public key cryptosystems," In Proc. 1980 Symposium on Security and Privacy, IEEE Computer Society, pages 122-133, April 1980.
[^8]: W. Feller, "An introduction to probability theory and its applications," 1957.
-
Tickets for the Lunarpunk festival are already on sale [on our crowdfunding portal](https://pay.cypherpunk.today/apps/maY3hxKArQxMpdyh5yCtT6UWMJm/crowdfund). Two types of tickets are available - standard entry and special entry that includes the orange summer workshop.
Don't hesitate to secure your ticket; the sooner you do, the better the festival will be.
You can pay with Bitcoin - Lightning or on-chain. Your ticket is your e-mail address (we don't send confirmation e-mails; if the payment went through, you're in).
[Buy a ticket](https://pay.cypherpunk.today/apps/maY3hxKArQxMpdyh5yCtT6UWMJm/crowdfund)
-
<iframe width="560" height="315" src="https://www.youtube.com/embed/WoEsme3nvRg" frameborder="0" allowfullscreen></iframe>
-
As the topic says, is there such a thing as connecting to too many Nostr relays? At what number are you pushing too many, and what should the max be, if any?
originally posted at https://stacker.news/items/616313
-
*Yesterday's edition https://stacker.news/items/615090/r/Undisciplined*
Great conversation about default zaps two years ago. It seems we're destined to have this conversation with each wave of new stackers.
* * *
### July 21, 2023 📅
----
### 📝 `TOP POST`
**[When was/is/will be the GOLDEN age of Bitcoin?](https://stacker.news/items/212216/r/Undisciplined)**
*1818 sats \ 21 comments \ @shadowymartian \ ~bitcoin*
----
### 💬 `TOP COMMENT`
**https://stacker.news/items/212283/r/Undisciplined?commentId=212309**
#### Excerpt
> > HODL contracts and HODL invoices are the same right?
> No. "Hodl contracts" is the name of a python app I made that *uses* hodl invoices to emulate an oracle service on the lightning network. Hodl invoices allow a merchant -- Bob -- to programmatica […]
*1384 sats \ 2 replies \ @super_testnet*
From **[Who Wants To Build Sports Betting on Lightning?](https://stacker.news/items/212283/r/Undisciplined)** by @kr in ~bitcoin
----
### 🏆 `TOP STACKER`
2nd place **@nerd2ninja** (1st hiding, presumed @siggy47)
*1562 stacked \ 1354 spent \ 1 post \ 3 comments \ 0 referrals*
----
### 🗺️ `TOP TERRITORY`
**~bitcoin**
> everything bitcoin related
founded by @k00b on Tue May 02 2023
*25.4k stacked \ 0 revenue \ 28.5k spent \ 78 posts \ 119 comments*
https://imgprxy.stacker.news/fsFoWlgwKYsk5mxx2ijgqU8fg04I_2zA_D28t_grR74/rs:fit:960:540/aHR0cHM6Ly9tLnN0YWNrZXIubmV3cy8yMzc5Ng
### July 21, 2022 📅
----
### 📝 `TOP POST`
**[What's your default tipping rate? Why?](https://stacker.news/items/47824/r/Undisciplined)**
#### Excerpt
> Mine is 888 - I just changed it from the default 10.
I want to grow this ecosystem, and 10 sats won't do a whole lot. I also enjoy the number 8, infinity. […]
*1326 sats \ 64 comments \ @mikhael \ ~bitcoin*
----
### 💬 `TOP COMMENT`
**https://stacker.news/items/47824/r/Undisciplined?commentId=47845**
#### Excerpt
> i go with 99, hoping someone else gets the satisfaction of tipping 1 sat and seeing an even 100.
i’ve also been thinking about ways to get more people to tip, or at least to change their default tip amounts to something more significant... thoughts […]
*3126 sats \ 10 replies \ @kr*
From **[What's your default tipping rate? Why?](https://stacker.news/items/47824/r/Undisciplined)** by @mikhael in ~bitcoin
----
### 🏆 `TOP STACKER`
1st place **@k00b**
*39k stacked \ 47.4k spent \ 1 post \ 24 comments \ 0 referrals*
----
### 🗺️ `TOP TERRITORY`
**~bitcoin**
> everything bitcoin related
founded by @k00b on Tue May 02 2023
*145.3k stacked \ 0 revenue \ 182.8k spent \ 124 posts \ 335 comments*
https://imgprxy.stacker.news/fsFoWlgwKYsk5mxx2ijgqU8fg04I_2zA_D28t_grR74/rs:fit:960:540/aHR0cHM6Ly9tLnN0YWNrZXIubmV3cy8yMzc5Ng
### July 21, 2021 📅
----
### 📝 `TOP POST`
**[TheBword starts soon](https://stacker.news/items/446/r/Undisciplined)**
Link to https://www.thebword.org/c/track-1-demystifying-bitcoin
*14 sats \ 10 comments \ @notgeld \ ~bitcoin*
----
### 💬 `TOP COMMENT`
**https://stacker.news/items/438/r/Undisciplined?commentId=441**
#### Excerpt
> I sadly do not hack on main chain stuff yet. I've heard its hard to get a whole bitcoin on testnet.
*100 sats \ 0 replies \ @sha256*
From **[Testnet faucets. Please add yours ](https://stacker.news/items/438/r/Undisciplined)** by @notgeld in ~bitcoin
----
### 🏆 `TOP STACKER`
1st place **@sha256**
*0 stacked \ 2 spent \ 0 posts \ 1 comment \ 0 referrals*
----
### 🗺️ `TOP TERRITORY`
**~bitcoin**
> everything bitcoin related
founded by @k00b on Tue May 02 2023
*96 stacked \ 0 revenue \ 284 spent \ 17 posts \ 36 comments*
originally posted at https://stacker.news/items/616182
-
#### New resources for the MiniBolt guide have been released:
* 🆕 **Roadmap**: [LINK](https://github.com/orgs/minibolt-guide/projects/1)
* 🆕 **Network Map** (UC): https://bit.ly/minibolt_netmap
* 🆕 **Nostr community** (Reddit clone): https://satellite.earth/n/MiniBolt/npub1k9luehc8hg3c0upckdzzvusv66x3zt0eyw7290kclrpsndepz92sfcpp63
* 🆕 **Linktr FOSS** (UC) by [Gzuuus](nostr:npub1gzuushllat7pet0ccv9yuhygvc8ldeyhrgxuwg744dn5khnpk3gs3ea5ds): [LINK](https://linktr.minibolt.info)
* 🆕 **Donate webpage**: 🚾 [Clearnet LINK](https://donate.minibolt.info) || 🧅 [Onion LINK](http://3iqm7nidexns5p6wmgc23ibgiscm6rge7hwyeziviwgav4fl7xui4mqd.onion/apps/Li3AtEGDsqNmNddv6rX69taidm3/pos)
* 🆕 **Contact email**: [hello@minibolt.info](mailto:hello@minibolt.info)
-
What do exotic protocols like Nostr, Cashu, or Reticulum bring us? Encryption, signing, peer-to-peer communication, and new ways of distributing and rewarding content.
We'll show off cool apps, how the individual networks can be interconnected, and how they relate to one another.
-
Entrepreneurship is a language with "crystal clear" rules.
Instrumentalists see business statically and project this view onto society. That is why society often perceives us negatively. Real entrepreneurs, however, are "communicators".
Jozef Martiniak is the founder of AUSEKON - Institute of Austrian School of Economics.
-
How I try to practice LunarPunk without building optionality by "leaving" for another country. Not everyone is willing or able to change "place"; how, in that case, can you minimize interaction with the state? Not a how-to guide, more observations from everyday life.
-
# My suggestion for scaling up and staying humble
## The original protocol design was "good enough for now"
When Nostr was invented and the first implementations got under way, the *Original Devs* (ODs) were convinced this was going to be big... maybe... someday... hopefully.
But whatever they did at the moment should definitely scale up a bit and be a bit flexible, to attract innovators and keep up the momentum. So, they designed the protocol to be open and left the specifications a bit vague and very high-level, so that nobody was forced into a particular implementation in order to adhere to the spec. And they put the specs into a GitHub repository, managed by a committee of collaborators who were generally open to changes to the specs, including breaking changes.
That was smart. And it was "good enough for now"... back then. After all, Nostr (and the associated wiki and modular article specs) hadn't been invented, yet, so they couldn't write the protocol in the protocol before the protocol existed. They're good, but not _that_ good.
What they specifically wrote, into the [Nostr Protocol](https://github.com/nostr-protocol/nips) was:
> To promote interoperability, we standards (sic) that everybody can follow, and we need them to define a **single way of doing each thing** without ever hurting **backwards-compatibility**, and for that purpose there is no way around getting everybody to agree on the same thing and keep a centralized index of these standards...
>
> Standards may emerge in two ways: the first way is that someone starts doing something, then others copy it; the second way is that someone has an idea of a new standard that could benefit multiple clients and the protocol in general without breaking **backwards-compatibility** and the principle of having **a single way of doing things**, then they write that idea and submit it to this repository, other interested parties read it and give their feedback, then once most people reasonably agree we codify that in a NIP which client and relay developers that are interested in the feature can proceed to implement.
# I disagree with this statement.
I don't disagree with what they _meant_, or what they _wanted_, I disagree with what they specifically wrote.
Standards (defined as prose specifications) are not the only -- or even the best -- way to ensure interoperability or to check for backwards-compatibility. And, as they later note, basing a protocol on implementations is arguably worse (but faster) than using specifications: implementations take on a life of their own and are sometimes simply shoddy or buggy, their content eventually diverges from what ends up in the final standard, or multiple implementations soon cover the same spec "in theory" but not in practice, so that their events are incompatible.
And then the inevitable, heated discussion begins:
* Which implementation is the Real Standard™?
* Who controls the Real Standard™?
* How is the Real Standard™ spec supposed to be written?
* Does everything have to be in the same file type or markup language? If not, how can we ensure compatibility?
* What is realistic content for the data files?
* Is the Real Standard™ including all of the information needed to ensure interoperability, but not anything more, without reducing innovation and artificially forcing consensus by encouraging copy-paste or forking of product code?
## There is a third way: write the test first
We actually do not need standards to define a single way of doing each thing. A *test* is another way, and I think it is the best (i.e. the most-efficient and most-effective) way.
Specifically, I think we can borrow the simple behavior-driven design (BDD) language called Gherkin (or something similar), which is used to write dynamic specifications: i.e. implementations that test adherence to a set of rules, rather than an implementation that uses the rules to perform some task for an end user.
Gherkin simply allows you to create standard scenarios and test data and write the tests up in a way that can be performed manually or through automation. For example ( [source](https://www.pragmaticapi.com/blog/2013/01/21/bdd-atdd-for-your-agile-rest-api-part-2/) ):
![Gherkin example](https://res.cloudinary.com/jhrmn/image/upload/v1362658828/groovy_cucumber_twitter_lpg6yj.png)
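To make this concrete in text form, here is a minimal, hypothetical sketch of such a scenario for a plain text note. The wording of the steps and the relay URL are illustrative assumptions, not taken from any existing NIP or test suite:
```gherkin
Feature: Publishing a text note (illustrative sketch only)

  Scenario: A relay accepts a well-formed kind 1 event
    Given a relay listening at "wss://relay.example.com"
    And a client holding a valid keypair
    When the client signs and publishes a kind 1 event with content "hello nostr"
    Then the relay replies with an OK message accepting that event id
    And a later REQ for that event id returns the same event
```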
(For a concrete example of such a TDD Protocol for Nostr, please see the [nostr-voliere repo](https://github.com/schmijos/nostr-voliere/tree/main/features) from nostr:npub1axy65mspxl2j5sgweky6uk0h4klmp00vj7rtjxquxure2j6vlf5smh6ukq .)
## This really is better
This TDD Protocol design would have some downsides and some upsides, of course, like any change.
### Downsides
* You can't write a TDD spec by yourself unless you understand basic software functionality, how to define an acceptance test, and can formulate a use case or user story.
* The specs will be more informative and agnostic, but also longer and more detailed.
* Someone will have to propose concrete test data (i.e. a complete json event) and spec interlinking will be explicit, rather than writing "...", "etc.", or "sorta like in that other section/doc, but not really" all over the place.
* The specs will tend to focus on positive cases, and leave off error-handling or most edge-cases, so developers can't use them to replace unit tests or other verification of their product.
### Upsides
* The specs will be concrete and clear, in a type of pseudocode, while leaving the actual implementation of any feature up to the individual developer, who uses the spec.
* The specs will be orderly and uniquely-identifiable, and can have hierarchy and granularity (major and minor tests, optional tests, tests only under certain conditions, etc.)
* Deciding whether changes to the spec are breaking changes to the protocol would be simple to determine: Does the previous test still pass?
* Specs will always be final; they will simply be versioned and become more or less defined over time, as the tests are adjusted.
* Product developers will feel less like they "own" particular specs, since their implementation is actually what they own and the two remain permanently separate.
* Developers can create an implementation list, defining specific tests in specific versions, that they adhere to. This makes it more transparent what their product actually does, and lowers their own documentation burden.
* Rather than stalking the NIPs for changes, or worrying about what some other implementation someplace has built, developers can just pull the repo and try running the relevant tests.
* Each product developer can test the spec by trying to perform or automate/run it, and request changes to ensure testability, raising the quality of the spec review process.
This is already a lot to think about, so I'm just going to end this here.
Thank you for reading.
-
Well, folks, I’m starting to see the light at the end of this data tunnel. After a seemingly endless 12-hour format, my stubborn SSD finally decided to cooperate. With that hurdle cleared, I dove headfirst into the world of data migration.
![IMG_20240721_105939516_HDR.jpg](https://img.blurt.world/blurtimage/rubenstorm/92e8f26add798d624d933fdbafde692a43efd3d9.jpg)
Last night was a whirlwind of activity: changing to a new bed, data copying, and a valiant battle against a pesky mosquito net. I managed to keep my sanity intact until around 2 am, when sleep finally claimed me. But let's not forget the culinary highlight of the evening: *Wali Maharagwe* - a simple yet satisfying rice and bean dish that hit the spot perfectly. Check out this photo, it's a local favorite and incredibly affordable.
This morning, I woke up feeling not really refreshed and still tired, but ready to conquer the remaining data challenges. After a cold shower, quick breakfast and a chat with a fellow traveler, I’m back at it, transferring data from one SSD to another. Fingers crossed this one behaves itself!
One thing's for sure, the temperature has taken a drastic drop compared to the warmth of Mombasa. I'm grateful for my trusty fleece as I shiver my way through the day. It's a stark contrast to the 27 degrees Celsius I was enjoying just a short distance away.
So, while I continue my data wrangling, I hope you all are enjoying a relaxing Sunday. Stay tuned for more updates on my progress!
![IMG_20240720_192314289_HDR.jpg](https://img.blurt.world/blurtimage/rubenstorm/df05d8c9fe16dacf0a929345f894d44359547ccf.jpg)
### Online
[Profile Page](https://rubenstorm.de.cool/) | [Support Page](https://rubenstorm.12hp.de/)
> If you'd like to show your appreciation, you can send a tip via the Lightning Network to rubenstorm@sats.mobi. (QR code available in the image)
-
Mastodon is obviously an already established and semi-well known social network--a user-owned network of communities. But one of the biggest issues I always see the corporate social media denizens talk about is the loss of content when moving instances (sometimes, _you have_ to do this). And that's definitely something I can attest to. If I were still on mastodon.social, from my humble posting-beginnings in 2017, I'd have a post history longer than five football fields and probably plenty to cringe at.
The question remains: _Why haven't we been able to take this history with us_ ... if every instance you've ever been federated with is _keeping that data._
What do I mean by this?
Every single time you post on Mastodon, your words get pushed out to other instances (servers), and in order for people to be able to read what you said, a copy of it is downloaded _onto that server_. It does this very quickly, so quick that it seems as though it happens in real-time. And, for all intents and purposes, I'm not inclined to believe that it doesn't.
If you've ever had an account on an instance that isn't mastodon.social, and you have a posting history, and you remember your username: Go over to [mastodon.social](https://mastodon.social), create a burner account, and then search up that username.
Yeah, you see that? Your entire body of posting-work is still there, even if you deleted your account, even if the instance you were on is long dead.
_This is obviously not true for every server. Some choose to delete old content to make space for the new, but there are many Mastodon servers out there retaining everything ever posted, like mastodon.social._
But what point am I trying to get at here? As far as you're probably concerned, even if that data still exists, and your account, and instance does not, you can't access it, right? You can't reclaim everything you've said and done, and put it back where it belongs: Under _your ownership_.
Or, what if you could?
If you've been following my posting history on here in the past week, I've been [screwing around with Nostr](https://cmdr-nova.online/2024/07/11/nostr-the-strangest-and-clunkiest-twitter-replacement/), and [musing about Bluesky](https://highlighter.com/a/naddr1qvzqqqr4gupzpu9v5anc855x7htnx6cj3zy0thel9x0qwxtnatcru5sqj8wwxvrwqqk9w6re94pxcat9wd4hjt2hd9kxct2swfhkyctzd3uj6nn9wejhyt2zv5k5japdxpnn2ut8wquxpwtd). Two networks using a similar protocol, that I feel are getting the idea of what I'm thinking of here, only about 75% correct.
Bluesky's problem, is that the protocol is owned by ... the _owners_. The corporation. Your keys report to _their_ server, and in return, so does your content. A central server that has a lot of different keyholes.
Not really the optimal experience, because you can't create your own server, and build your own community, and _survive_ if the main Bluesky datacenter goes out, right? The only thing you can do is host your own software that essentially just holds keys and content, and, as far as I know, all of that will be worthless if the devs ever go offline with the main server.
So then there's Nostr, where all of your data lives everywhere on the network, and as long as you have your keys, you can use _any_ app that utilizes the protocol, and continually access everything you post, _forever_. You can't be banned, but you also can't block anyone. And any random stranger can take your _public key_ from your profile, and view your entire everything (as I've said before).
I would say this is also not really that optimal. The idea is great, but the execution trades privacy for extreme versatility.
This brings us back to Mastodon, which is still in active development, still receiving new features, and ... well, kind of also only has the whole thing about 75% correct.
There's that last little bit we're missing here.
The user should have the option of a community, a personal community _away_ from what's viewed to be the central Mastodon server **and** have their privacy, _but_, they should _also_ be able to retain their posts, and their bookmarks, and their likes, and their lists, and _everything else_.
How do we accomplish this?
Keys.
The entire fediverse (for the most part) _already has all of your content_. So, if every user had a key tying them to that content, no matter where you're located, be it mastodon.social, or mkultra.monster, then you could just go wherever you want?
That's the idea, at least.
Your content becomes immortal, immutable, and that's the one missing puzzle piece to ActivityPub. I am 100% sure of this, and I don't know what the developer stance is on the topic of keys and immutable data, but I fully believe this is something that should be on their roadmap. Yes, storage space is a concern ... but holding onto the text content of a post is much easier, and smaller in size, than holding onto larger pieces of data (like photos, and videos). If you scrolled back to five years ago and found something you posted, but the only thing you could see was the text, would you be all that disappointed?
I guess that part is subjective.
Mastodon is already set up nearly identically to Nostr, except users live _on the relays_. So just ... add user-generated keys to the mix, and I am 100% double-dippin' certain that most of the naysayers being spammed by engagement farming on Threads and Twitter will jump ship.
Mark my worms.
If it isn't. Well. They're making a huge mistake.
-
*Yesterday's edition https://stacker.news/items/613971/r/Undisciplined*
* * *
### July 20, 2023 📅
----
### 📝 `TOP POST`
**[Frostsnap - Easy, personalized, secure bitcoin multisig for everyone](https://stacker.news/items/211803/r/Undisciplined)**
Link to https://frostsnap.com/introducing-frostsnap.html
*6034 sats \ 19 comments \ @utxoclub \ ~bitcoin*
----
### 💬 `TOP COMMENT`
**https://stacker.news/items/211668/r/Undisciplined?commentId=211676**
#### Excerpt
> Check out @super_testnet's super simple Superstore: https://stacker.news/items/167945
*614 sats \ 0 replies \ @OneOneSeven*
From **[Making a Store](https://stacker.news/items/211668/r/Undisciplined)** by @Undergotten in ~nostr
----
### 🏆 `TOP STACKER`
2nd place **@utxoclub** (1st hiding, presumed @siggy47)
*4679 stacked \ 1826 spent \ 1 post \ 5 comments \ 0 referrals*
----
### 🗺️ `TOP TERRITORY`
**~bitcoin**
> everything bitcoin related
founded by @k00b on Tue May 02 2023
*48.4k stacked \ 0 revenue \ 55.2k spent \ 84 posts \ 166 comments*
https://imgprxy.stacker.news/fsFoWlgwKYsk5mxx2ijgqU8fg04I_2zA_D28t_grR74/rs:fit:960:540/aHR0cHM6Ly9tLnN0YWNrZXIubmV3cy8yMzc5Ng
### July 20, 2022 📅
----
### 📝 `TOP POST`
**[I'm John Cantrell, entrepreneur, spiral grantee, and creator of Sensei. AMA](https://stacker.news/items/47346/r/Undisciplined)**
#### Excerpt
> I'm @johncantrell97 and I've been working on and around Bitcoin for over a decade now. Most recently I've been working on Sensei as part of a Spiral grant.
*112.7k sats \ 54 comments \ @JohnCantrell97 \ ~bitcoin*
----
### 💬 `TOP COMMENT`
**https://stacker.news/items/47509/r/Undisciplined?commentId=47521**
#### Excerpt
> "As of the end of Q2, we have converted approximately 75% of our Bitcoin purchases into fiat currency."
> Tesla didn't HODL. Or deploy as liquidity on Lightning to earn yield.
> Some just have to learn the hard way 🤷♂️ […]
*329 sats \ 4 replies \ @holzapfelbaum*
From **[Tesla Sells 75% of Bitcoin Holdings](https://stacker.news/items/47509/r/Undisciplined)** by @kr in ~bitcoin
----
### 🏆 `TOP STACKER`
1st place **@k00b**
*11.9k stacked \ 108.1k spent \ 4 posts \ 19 comments \ 0 referrals*
----
### 🗺️ `TOP TERRITORY`
**~bitcoin**
> everything bitcoin related
founded by @k00b on Tue May 02 2023
*130.5k stacked \ 0 revenue \ 140k spent \ 113 posts \ 249 comments*
https://imgprxy.stacker.news/fsFoWlgwKYsk5mxx2ijgqU8fg04I_2zA_D28t_grR74/rs:fit:960:540/aHR0cHM6Ly9tLnN0YWNrZXIubmV3cy8yMzc5Ng
### July 20, 2021 📅
----
### 📝 `TOP POST`
**[The Bitcoin Miners banned from China are coming back online in force, bois!](https://stacker.news/items/424/r/Undisciplined)**
#### Excerpt
> https://bitinfocharts.com/comparison/bitcoin-hashrate.html
> Don't let your lettuce hands leave you with regrets. Stack those sats!
*7 sats \ 3 comments \ @03cdf4abc3 \ ~bitcoin*
----
### 💬 `TOP COMMENT`
**https://stacker.news/items/428/r/Undisciplined?commentId=429**
#### Excerpt
> We're supposed to believe he's acting in the interest of the people to support of a censorship resistant platform for the exchange of value while not doing anything of note to further the interests of the people regarding a censorship resistant platform for exchanging words that he just happens to be the CEO of. They deplatformed the president of the united states.
*1 sat \ 1 reply \ @03cdf4abc3*
From **[Forget Elon Musk, Jack Dorsey Is the Hero Bitcoin Deserves for the Coming Years.](https://stacker.news/items/428/r/Undisciplined)** by @shawnyeager in ~bitcoin
----
### 🏆 `TOP STACKER`
No top stacker
----
### 🗺️ `TOP TERRITORY`
**~bitcoin**
> everything bitcoin related
founded by @k00b on Tue May 02 2023
*43 stacked \ 0 revenue \ 71 spent \ 9 posts \ 14 comments*
originally posted at https://stacker.news/items/615090
-
Another day, another adventure in tech hell. Let’s start with last night. I was hoping to get some inspiration from an audio story before bed, but my brain decided to short-circuit. Woke up at 4 AM with headphones still on, no recollection of the story, and a general sense of disorientation.
![IMG_20240720_114559332_HDR.jpg](https://img.blurt.world/blurtimage/rubenstorm/d652231d1e0a1f728e047a7552ddccf4994c0e0e.jpg)
Fast forward to this morning, I managed to escape the dorm for a quick shower and some much-needed coffee. Feeling somewhat human again, I tackled the beast that is my external SSD. It’s a 1TB behemoth that decided to be extra dramatic this morning. I’ve been staring at a “12 hours to go” message for what feels like an eternity. Who knew formatting a drive could be such a suspenseful ordeal?
This drive has been giving me trouble lately. It's like it has a grudge or something, constantly going into read-only mode and refusing to save large files. I think my 25,000 travel photos might be a bit too much for it. It's not even that old! I switched from a heavy HDD to this thing for portability, and now it's giving me grief.
I’m hoping that once this formatting nightmare is over, I’ll have a happy, healthy SSD. I’ve decided to format it as exFAT, so at least it should be compatible with everything. But then I have two more drives to deal with. This is turning into a full-blown data relocation project. Why didn’t I do this when I was chilling in Kilifi?
Oh well, at least the weather is improving. Yesterday was the coldest day since I’ve been in Africa. Even my trusty fleece got some use.
Wish me luck on this digital odyssey!
-
This year a workshop on the theme of the "orange summer" with Juraj Bednár and Marianna Sádecká awaits you. You will learn how the experience of Bitcoin changes our perception, how to navigate today's world, and how to clear away the mental fog caused by fiat life.
The workshop requires an [extra ticket](https://pay.cypherpunk.today/apps/maY3hxKArQxMpdyh5yCtT6UWMJm/crowdfund) (you can also buy one on site).
For more information about the orange summer, we recommend listening to the [podcast on this topic](https://juraj.bednar.io/podcast/2024/04/13/oranzove-leto-stanme-sa-tvorcami-svojho-zivota-s-mariannou-sadeckou/) before the workshop.
-
It's been a while since I've scolded everyone for their miserliness. The recent archiving of some popular territories inspired me to make another attempt. Plus, there are quite a few new stackers who haven't been scolded, yet.
I'll keep this one short and sweet:
1. Territory owners need to make about 100 kilosats per month.
2. They earn their sats through posting fees.
3. OPs need to recoup their posting costs through zaps.
Therefore, when there's content you like, zap big and zap often.
If there's a lack of content you like, cowboy up and write it yourself. We just might make it worth your while.
Zapping more can pay for itself through the rewards system (see: https://stacker.news/items/287074/r/Undisciplined and https://stacker.news/items/473181/r/Undisciplined)
Also, zapping more will make Stacker News more aligned with what you want it to be: https://stacker.news/items/523858/r/Undisciplined.
originally posted at https://stacker.news/items/614066
-
*Yesterday's edition https://stacker.news/items/612715/r/Undisciplined*
I enjoyed the discussion in the 2022 AMA. I'd love to have some more stackers reporting more on the developing world. How is the bitcoin ecosystem developing? What problems is it solving for people?
* * *
### July 19, 2023 📅
----
### 📝 `TOP POST`
**[Bitcoin Optech Newsletter #260](https://stacker.news/items/211185/r/Undisciplined)**
Link to https://bitcoinops.org/en/newsletters/2023/07/19/
*2733 sats \ 0 comments \ @ssaurel \ ~bitcoin*
----
### 💬 `TOP COMMENT`
**https://stacker.news/items/211250/r/Undisciplined?commentId=211288**
#### Excerpt
> Been very busy recently replicating the Indranet architecture to the guy who's been paying me to design and build it. It's been great for me in terms of helping delineate the core principles of the design and to spot some possible design flaws, as we […]
*340 sats \ 0 replies \ @l0k18*
From **[What are you working on this week?](https://stacker.news/items/211250/r/Undisciplined)** by @sn in ~meta
----
### 🏆 `TOP STACKER`
2nd place **@nerd2ninja** (1st hiding, presumed @siggy47)
*1335 stacked \ 2158 spent \ 0 posts \ 6 comments \ 0 referrals*
----
### 🗺️ `TOP TERRITORY`
**~bitcoin**
> everything bitcoin related
founded by @k00b on Tue May 02 2023
*22.1k stacked \ 0 revenue \ 29.9k spent \ 110 posts \ 192 comments*
https://imgprxy.stacker.news/fsFoWlgwKYsk5mxx2ijgqU8fg04I_2zA_D28t_grR74/rs:fit:960:540/aHR0cHM6Ly9tLnN0YWNrZXIubmV3cy8yMzc5Ng
### July 19, 2022 📅
----
### 📝 `TOP POST`
**[I'm Alex Gladstein. CSO of the Human Rights Foundation. AMA](https://stacker.news/items/47002/r/Undisciplined)**
#### Excerpt
> I am also the author of Check Your Financial Privilege.
*93.1k sats \ 48 comments \ @gladstein \ ~bitcoin*
----
### 💬 `TOP COMMENT`
**https://stacker.news/items/47002/r/Undisciplined?commentId=47018**
#### Excerpt
> Bitcoin is likely playing a nascent role in cross-border payments between South Korea, China, and NK, but it's too early too tell
*470 sats \ 0 replies \ @gladstein*
From **[I'm Alex Gladstein. CSO of the Human Rights Foundation. AMA](https://stacker.news/items/47002/r/Undisciplined)** by @gladstein in ~bitcoin
----
### 🏆 `TOP STACKER`
1st place **@k00b**
*13.5k stacked \ 92.4k spent \ 3 posts \ 13 comments \ 0 referrals*
----
### 🗺️ `TOP TERRITORY`
**~bitcoin**
> everything bitcoin related
founded by @k00b on Tue May 02 2023
*126.4k stacked \ 0 revenue \ 132k spent \ 94 posts \ 300 comments*
https://imgprxy.stacker.news/fsFoWlgwKYsk5mxx2ijgqU8fg04I_2zA_D28t_grR74/rs:fit:960:540/aHR0cHM6Ly9tLnN0YWNrZXIubmV3cy8yMzc5Ng
### July 19, 2021 📅
----
### 📝 `TOP POST`
**[Dashboard: Bitcoin KPIs](https://stacker.news/items/408/r/Undisciplined)**
Link to https://bitcoinkpis.com/
*3 sats \ 3 comments \ @nathanael \ ~bitcoin*
----
### 💬 `TOP COMMENT`
**https://stacker.news/items/398/r/Undisciplined?commentId=409**
#### Excerpt
> nice - i added it [there](https://github.com/fiatjaf/awesome-lnurl/commit/e7f8dee052cff5f4a4434733e8a911ddcb9d55c6) - welcome
*2 sats \ 1 reply \ @nathanael*
From **[Stacker.News Community](https://stacker.news/items/398/r/Undisciplined)** by @gmd in ~bitcoin
----
### 🏆 `TOP STACKER`
No top stacker
----
### 🗺️ `TOP TERRITORY`
**~bitcoin**
> everything bitcoin related
founded by @k00b on Tue May 02 2023
*11 stacked \ 0 revenue \ 15 spent \ 1 post \ 7 comments*
originally posted at https://stacker.news/items/613971
-
Just started looking into powdered mushrooms and was wondering if stackers have used or use any for supplementation. I have previously tried Lion's mane and chaga; I haven't since, but I'm interested in playing around with them again. Cordyceps also sounds interesting, if anyone has any experience taking it.
originally posted at https://stacker.news/items/613694
-
![Tree of Relays](https://image.nostr.build/048be746460e247f7c2606c3342eea5c4c2e86d37b5666889b2ee29584f2210d.png)
Each relay selects a branch from the tree above and starts serving.
Big machines in the top layers can handle more. Smaller machines in the layers below are needed for decentralization and scalability.
Some top-layer machines can act in sync-only mode, efficiently distributing notes among the layers.
The relay, or its admin, posts a special event kind to advertise the relay:
```
{
  "pubkey": "...pubkey of admin or the relay itself..",
  "kind": 30202,
  "tags": [
    ["d", "..10"],
    ["ip4", "111.222.33.44:443", "primary"],
    ["ip6", "abc:def::443", "backup"]
  ],
  ...
}
```
The above example says this relay will handle note ids ending with the bits ..10, in which case it will handle about 1/4 of the network.
The primary way of reaching this relay is through IP 111.222.33.44; there is also a backup server.
Clients can accept this advertisement based on web of trust or the historical reliability of the npub. Other npubs can also measure the reliability of this relay and send reactions to this note; clients can then see those reactions and rank these services.
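To make the suffix matching concrete, here is a minimal sketch (assuming 256-bit note ids; the function name is illustrative):
```
# Does a note fall on this relay's advertised branch?
def matches_branch(note_id_hex, d_tag):
    suffix = d_tag.lstrip(".")                        # "..10" -> "10"
    id_bits = bin(int(note_id_hex, 16))[2:].zfill(256)
    return id_bits.endswith(suffix)

# A relay advertising ["d", "..10"] serves 1 / 2**2 = 1/4 of all note ids.
print(matches_branch("af" * 32, "..10"))              # False: this id ends in ..11
```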
Solves:
- Possible future DNS ban issues:
I don't know when or if DNS will be an issue for Nostr. The above design can help with the situation.
- Scalability:
If 1 million users join the network at the same time, the machines handling ".." (i.e. all of the traffic) may fail. But if clients are using relays on other layers, the load will be efficiently distributed across many machines. The failure of layers 0 and 1 will not stop the network.
Every layer can operate independently without the other layers (in theory).
- Traffic efficiency:
A client has to query many relays, depending on what it wants to do. It may choose to stay efficient (talk to the top layers) on mobile data, or it may choose to help decentralization over wifi. The notes that match the queries will not be repeated as many times as in the current design, because each relay will hold only a portion of the network.
- Storage efficiency:
Relay operators can keep just the part of the network they are responsible for on NVMe drives and the rest on hard drives. In case of a major failure, the hard drives will still hold a copy.
- Speed:
Since the notes will come from many different relays at the same time, there may be a slight speed increase.
- Decentralization:
If the top-layer relays collude and start banning, the other layers can still continue to serve notes.
- Backup relay:
In case a relay instance fails, users can find the backup server on the same note.
- Zero-downtime migration:
The ability to define a backup server allows zero-downtime migrations. An operator can set the primary to the new server and the backup to the old server, do the migration, and continue without interruption.
- Efficient sync among servers:
A relay has to sync with 3 servers: 1 above and 2 below. But it can do 6 or 9, depending on how much reliability it wants.
- Writing to log N relays:
Clients have to write to log N relays (i.e. 1 relay in each layer) to effectively distribute their notes to everyone and also to help with decentralization; a small sketch follows.
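A minimal sketch of that layer-by-layer selection (assuming layer 0 advertises ".." and layer d serves ids ending with the matching d bits, as in the example above):
```
# One write target per layer for a given note id.
def target_suffixes(note_id_hex, depth):
    id_bits = bin(int(note_id_hex, 16))[2:].zfill(256)
    return [".."] + [".." + id_bits[-d:] for d in range(1, depth + 1)]

# A client resolves each suffix to an advertised relay and writes once per layer.
print(target_suffixes("af" * 32, 3))   # ['..', '..1', '..11', '..111']
```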
-
These are my very preliminary RB rankings, my only research being the [RotoWire depth charts](https://www.rotowire.com/football/nfl-depth-charts/) and player notes.
**Tier 1
Christian McCaffrey**
He’s 28 years old, has 1,402 career carries including the postseason, and weighs only 205 pounds at 5-11. But McCaffrey is arguably the greatest running back of all time when you include the pass catching, and he’s in the league’s best system for backs. Health is really the only variable, and anyone can get hurt.
**Tier 2
Breece Hall, Bijan Robinson, Saquon Barkley, Jonathan Taylor**
These are the three-down backs with elite skill sets. Hall should be better another year removed from the ACL tear and playing with a better QB, Robinson should get the heavy usage he merits with Arthur Smith gone, Barkley finally gets a quality offensive line and QB, and Taylor plays next to a running QB and has little competition for carries.
**Tier 3
Jahmyr Gibbs, De’Von Achane, Kyren Williams, Travis Etienne**
These are the little guys, all of whom can run and catch passes, plus Etienne. Williams is the biggest risk as an early-down, high-workload guy, but he was also insanely productive when he played in Sean McVay’s lead-back-friendly system. Gibbs seems like peak Alvin Kamara, only faster, and Achane in the Miami offense was the most efficient back in league history by a mile. I originally had Etienne in Tier 4, but he doesn’t have much competition for the job and should be a three-down back.
**Tier 4
Isaiah Pacheco, Josh Jacobs, Joe Mixon, James Cook, David Montgomery, Derrick Henry, Kenneth Walker, Rachaad White, James Conner, Rhamondre Stevenson, Zack Moss, Javonte Williams, Devin Singletary**
This is a big tier, but these are all incomplete players guaranteed reps if healthy. Pacheco could be moved to Tier 3, but Patrick Mahomes doesn’t check down to the back all that often, and Pacheco rarely gets to 20 carries in a game. Jacobs is a beast but has a ton of mileage in a short span and fell off last year. Mixon is another old warhorse with a lot of mileage, albeit in a good situation. Cook suffers due to Josh Allen’s goal-line prowess and tendency to take off rather than check down. Montgomery will cede the third-down work; Henry is old and doesn’t catch many passes; Walker splits carries and gets hurt often; White is a ham-and-egger who happened to see a huge workload; Conner is 29 and always misses time; Stevenson is a workhorse on a bad team; Moss is an average back in a good spot; Williams is another year off the injury, but on a bad team; and Singletary is the clear lead back on a poor offense.
**Tier 5
Tony Pollard, D’Andre Swift, Jonathon Brooks, Alvin Kamara, Brian Robinson, Jaylen Warren, Austin Ekeler, Chuba Hubbard, Zamir White, Raheem Mostert, Aaron Jones, Najee Harris, Nick Chubb, Tyjae Spears, Trey Benson, Zach Charbonnet**
These are mostly timeshare backs, with the exceptions of Kamara, whose efficiency fell through the floor, and White, who I’m not convinced will lock down the job all year.
**Tier 6
Kenneth Gainwell, Rico Dowdle, Gus Edwards, Jerome Ford, Zeke Elliott, Roschon Johnson, Jaylen Wright, Bucky Irving, JK Dobbins, Antonio Gibson, Chase Brown, Elijah Mitchell, Jaleel McLaughlin, Tyler Allgeier, MarShawn Lloyd**
This is the backup tier, all of whom probably need an injury ahead of them to break out.
These rankings will change, but I like to get my preliminary ones down on paper before I get influenced by ADP and training camp hype.
-
[Bitvora](https://bitvora.com) is a unified Bitcoin & Lightning rail. A single API with no infrastructure overhead.
So you don’t have to go through this:
![Summary of Total Cost of Ownership (TCO) calculator for DIY Lightning](https://i.nostr.build/Sni1cFnE45Z5nWj0.png)
App engineers and product people, this is for you: two lines of code—bitcoin sent.
https://i.nostr.build/8Wp3jNhqHcml3pxl.gif
[Bitvora](https://bitvora.com) makes on-chain and Lightning payments simple, fast, and friction-free.
I’ll be at [The Bitcoin Conference](https://bitcoin2024.b.tc/2024) in Nashville next week, and I’d love to give you a sneak peek of the product. We’ll be in an invite-only beta.
**If you’ll be in town, DM me or leave a reply below to set up a meeting.** I can’t wait to show you what we've built.
-
What makes Nostr and Bitcoin so extremely exciting to me, as a designer, is that they offer an entirely new design space. Something we never had before. Something that we can, for the first time ever, call…
A Free Market.
And not just that, but one where everyone speaks the same language. One that acknowledges that Sovereignty doesn’t go that far without Interoperability.
Since this is literally a first, it seems terribly naïve to assume that we can just copy what works for Big Tech’s walled gardens and add some Freedom sauce on top. I’ve been sketching out things like “Twitter but with Zaps”, “Youtube but with many algo’s”, “Patreon but with Bitcoin”, etc... for long enough to realize the harsh limits of such an approach.
Instead I’ve found it more fruitful to start by looking at the characteristics of this new digital, interoperable free market and, only then, look for similar existing benchmarks that can serve as inspiration.
Because the convergence of the various Freedom Tech protocols (Nostr, Bitcoin, Cashu, …) is such a huge game changer for online monetization specifically, it seems even more naïve to just start copying Big Tech on that front, without proper examination.
If we did just play copycat, Monthly Subscriptions and Paywalls would be the obvious answer. This article dives into why they’re not.
## Free as in Competition
In a free market, it’s going to be hard for you to set a price if you don’t have something scarce to sell. Unlike in a Big Tech walled garden, there’s no one blocking competitors from entering the market. That includes competitors that offer your stuff!
If what you create can easily be copied, it will.
Yes, Content creators, FOSS devs, Designers, … that’s you.
Charging for your article, podcast, app or movie doesn’t end well when people can find a free upload right next to yours. Open protocols remove the friction of having to go to PirateBay for that sort of stuff. Luckily, they also remove the friction for people to value your content directly (#V4V Zaps) and for you to launch unique and scarce products related to what you do (Community, Merch, …).
Even if you have a scarce product to sell though, Nostr sets the bar lower than ever for others to start competing with your product. Every part of your product!
Interoperability breaks products and services down to their constituents and opens up every layer of the stack to competition. Currently most SaaS business models are basically very expensive monthly subscriptions for some hosting and computation. And guess what Nostr is extremely good at → discovering and switching providers of hosting and computation.
In a competitive context like that, you need a monetization model that lets you adapt fast.
If you choose to monetize with monthly subscriptions, you’ll be updating your subscription tiers every month, if not every week, just to remain competitive. If that’s the case, then what’s the point? Why not just have a cash price list?
## Free as in Volatile
The reality of a free market is that price-discovery never takes a break. This is especially true for a market that uses a money that is still being discovered as money by 99% of the world. A money that, just like Nostr, only saw the light of day because its creators saw total chaos ahead without it.
Bitcoin and Nostr do #fixthemoney and do #fixtheinternet, but all that fixing isn’t exactly taking us on a smooth ride.
Smooth rides are stable rides, stable prices, stable amounts of locked-in users, stable ties with the state, etc… They are everything Freedom Tech is taking us on an exciting journey away from.
Again, adaptability is key and your business model needs to reflect that. Monthly subscriptions don’t handle volatility well. It takes a lot of confidence, extremely loyal customers and liquid reserves to survive bumpy roads. That’s a lot of capital that probably has better uses. Better uses that Big Tech monopolies can ignore, but your competitors won’t.
## Free as in “Number Go Up”
Denominating your subscriptions in “stable” fiat currencies isn’t going to help either.
The mental cost for customers is only low as long as you don’t have to juggle more than 21 subscriptions. (Yes, I completely made up that number, but it’s somewhere around there, so bear with me.)
Given that Subscription Hell is already a thing in Fiat-land once you go beyond that number, just try to imagine the amount of subscriptions you’d be handling in Nostr-land. Your 1 Netflix subscription suddenly became 6, to different service providers, plus 20 to all your favorite creators. Then add the subscriptions for all other internet use cases you can think of. Then add some more for the ones that you can’t even imagine yet.
This is not an overstatement. It is very very unlikely that your service happens to be the only one that subscriptions would work for. If they appear to work for you, every competitor or similar use case will be copying that model in no time.
So if they work, there would be a loooot of subscriptions.
That’s also a looot of:
- recurring payments to forget about
- different time periods to oversee
- extra effort to go and unsubscribe or switch tiers
- users thinking “I didn’t even really use this!”
- users also thinking “What am I paying for exactly?”
In short, that’s asking for frustrated, disappointed and confused customers.
These subscriptions would then also need to be handled on top of all the #V4V Zaps and cash payments that are inevitably happening as well. Unlike Big Tech products you don’t get to just pick Subscriptions as the only option. You will have to be optimizing both the Wallet and Social UX/UI for all of these types of payments. Something I naïvely tried and quickly gave up on. It overcrowds interfaces, makes different applications less interoperable and creates a very confusing user experience.
And if you denominate everything in fiat, you add even more confusion, since the Zaps from others are denominated in Bitcoin. That suddenly makes them variable from the fiat denominator’s perspective. Other denominations work when you only have two parties (e.g. seller and buyer), not when you have groups of people zapping away at an article.
For #V4V to gain traction zappers need recognition. If it’s completely variable, and thus unclear, who the top zappers of a piece of content are or if the biggest patrons of a creator are only visible if you go to some specific subscriber dashboard, you’re creating confusion and diluting recognition.
Zaps are awesome, nearly universally implemented, very simple to design for and they might just be enough. We’re mostly still early. Creators can’t even reply to Zaps yet. The first prototypes for frictionless Cashu zaps are only just coming out. Let’s explore all that further before we start adding subscriptions to the UI, the users’ mental load and the app developers endless list of “things to implement”.
If the current zap-rate of daily Nostr users (myself included) tells me anything, it’s that sending around small payments all the time isn’t really the issue. The mental cost for the user happens mostly at the level of:
“How the f*ck do I keep track of all this?”
“How do I fit this in with my regular monthly expenses?”
Subscriptions are only one answer to this. If they happen to still somehow solve the above issues for some use cases, we still have to find solutions anyway for the micro-payments side of things.
My point is thus mainly: we might as well start there and, who knows, we don’t even need subscriptions. Or at least not as a standard to strive towards. No one is stopping you from using time-based pricing for things that are actually time-based or from creating micro-apps that let you set up recurring zaps and other fun stuff like that. Just don’t promise creators and merchants that they can base their business models on them.
If you’re talking to people that will only consider Freedom Tech if they can have the stability of a group of monthly subscribers, you’re probably not talking to the right people (yet!).
Freedom Tech = Responsibility Tech.
Both seller and buyers need tools that help them take that responsibility. Especially regarding payments, the currently available tools are far from optimal and have barely scratched the surface of what’s possible in the interoperable ocean of possibilities. I say this more as an invitation than as a complaint of course. We are — yes, I’m saying it again — sooooo early.
So how can we help buyers be responsible while “not having to think about every little payment”?
For rich npubs on a network denominated in #NGU technology the answer can be quite simple: they simply don’t have to really think about it.
In fact, neither do poor npubs, as long as they can set a spending budget and have great tools and transparency to help them manage it.
They need something a bit like their “lives left” in a video game. Or something that resembles envelope budgeting: like going to the farmers market with only a limited amount of cash.
## Talking about farmers markets
Let’s look at what currently comes close to an interoperable free market in meat-space: Restaurant & Farmers markets.
- No one has copyrights on pizza Margherita or chicken eggs
- Entering the market is relatively cheap (food trucks, farmer stands, …)
- Customers can pick and choose where they get “every layer of the stack” (drinks here, main dish there and ice cream over there)
![Market Prices](https://cdn.satellite.earth/63cb408d80f354fa8adbf5260e6d2c37aa81d3cf2a8d890f24508d44d19c752a.png)
Now look at how these businesses monetize:
- Cash price lists
- Discounts (for special occasions, loyal customers, ..)
- Bundles (lunch menus, vegetable baskets, …)
Both restaurant owners and farmers have been trying out monthly subscriptions since forever, mostly because it would benefit them. But in places with high competition this has had exactly zero success.
What did have success is services that bundle farmers market food items for you and deliver them to your doorstep on a weekly basis. That’s what lower mental cost looks like.
Ironically, these services often started out with the monthly subscription model and have had to switch because customers needed more flexibility. Now, in their apps, they let users pick what day they want what, at what price.
And these apps are not even Nostr yet.
## Talking about Nostr markets
Nostr markets are not exactly farmers markets of course. But that is mostly good news.
Nostr, Bitcoin and Cashu, in particular, remove a lot of the friction that cash still has in the physical world. They enable seamless micro-payments. That’s an innovation worth embracing, not something to hide away behind subscriptions.
Embracing micro-payments means that every little thing can have a real-time price. A price that, once paid, you never have to think about again. And some, if not most, micro-payments are so micro that you don’t really have to think about them in the first place.
Especially if we’re talking about recurring events (pun intended).
If a chat message in a certain Community costs 2 sats, after your third message your added mental cost will likely be close to zero. At that point, the price is mostly a matter of transparency, fairness and spam prevention. When the Admin suddenly changes this price to 1000 sats however, you will need to think twice about posting such a message. But again, that is a good thing. That’s exactly what the Admin is monetizing.
He is curating for high signal content and conversations around a certain interest. His price list is one of his top tools for doing so. You pay him for being able to publish in his unique community. You “Pay Per Use” of that Community as a broadcasting channel for your messages, articles, videos, … You know everyone else does too.
Using monthly subscriptions for such a community would just invite abuse and poor quality.
It would be like an all-you-can-eat restaurant where everyone has an infinite stomach size, you’re all at the same table and only the loudest screamers get heard.
So the Admin would put limits on what you specifically get for your subscription (100 messages per month, 210MB of hosting, etc etc…). The members would then demand flexibility or go elsewhere.
The admin would then provide different tiers.
Yet, most members would still need flexibility more than they need flat rate monthly pricing.
At the same time, the Admin’s “Pay Per Use” competitors will still be cheaper. They don’t have the overhead of handling the uncertainty that comes with providing stable pricing for several months. Trying their offer out is also way cheaper than immediately having to pay a subscription. The admin, on the other hand, cannot really offer free trials if he doesn’t have the locked-in users to pay for them.
In the end, just like restaurants, the Admin will switch to “Pay Per Use” and will use discounts and bundles to his advantage.
As long as users have great tools to keep an eye on their spending, this sort of outcome is a win-win for the whole ecosystem. What users tend to like most about a monthly subscription is the guarantee that they will not exceed XXX amount of money on that thing for the month. Nothing is stopping us from building tools that provide the same guarantee without the complications of handling monthly subscriptions.
Since most Bitcoin wallets are not daily-spending wallets and most Nostr projects aren’t monetizing yet, hardly any attention has been spent on tools like this. They all copied bank apps and display your total amount of money and a chronological feed of your transaction history. There are several problems with this:
1. You don’t want to openly showcase your total balance to everyone around you when you open the app
2. Your total balance shows “everything you have”. That is a terrible benchmark for determining “what you can sustainably spend”.
3. It’s also a terrible benchmark for determining “what you earned this month”
4. Micro-payments make a chronological feed of all your transactions completely unusable
5. Zaps make a lot of your transactions social. Zaps and eCash blur the line between money and message. And messages require interaction that transactions don’t.
I think we can do better.
So, let’s try!
## Cash Budgets
Just like the previously mentioned “lives” in a game or the cash in your wallet for a night out, the first thing users will want to see is “How much do I have left?”. Since most people organize their budgets per month we can more specifically turn this into “How much do I have left this month?”. This means we need to allow users to set a monthly budget in the first place. Once that budget is set for the month, it facilitates all the rest.
This budget is their subscription now. Their Nostr subscription.
An interoperable subscription they can interact with in any trusted app.
![I’m the subscription now!](https://cdn.satellite.earth/cf45e83c47835dbf5e718a3cdd6c3fd19de119b007e3171a77622ec1f4385343.png)
And the best part: They pick the price.
They’re taking their responsibility and lowering their mental cost with one action.
Now, you can start playing with wallet and home screen interfaces that show the user at a glance (a tiny sketch follows the list):
- What they’ve got left to spend for that month
- What they already spent
- What they earned (relative to that budget and/or a specific earnings goal)
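As a minimal sketch of the envelope idea (names and amounts are illustrative, not any wallet’s actual API):
```
# A monthly "cash budget" envelope for zaps and micro-payments.
from dataclasses import dataclass

@dataclass
class MonthlyBudget:
    budget: int          # sats the user decided to spend this month
    spent: int = 0
    earned: int = 0

    @property
    def left(self) -> int:
        """Answers the first question: how much do I have left this month?"""
        return self.budget - self.spent

    def zap(self, amount: int) -> bool:
        """Pay only if it fits the envelope; otherwise let the UI prompt."""
        if amount > self.left:
            return False   # over budget: ask the user instead of silently paying
        self.spent += amount
        return True

wallet = MonthlyBudget(budget=50_000)
wallet.zap(2)                          # a 2-sat community message
print(wallet.left, "sats left this month")
```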
I’m currently exploring this design space myself, and the extent to which Freedom Tech budgeting can be gamified in novel ways will make TikTok look boring.
![Baby UI steps](https://cdn.satellite.earth/6532d52485e54d079f55c0b872b70cb399522d031cef1b314f0fb0f621cd6100.png)
> Some baby UI steps in this direction
But that’s just the budget part of course. We still need non-intrusive ways to display all these little price-tags for things if we’re not hiding them away behind a subscription.
The good news is that movies, music, articles, posts, FOSS apps and all other types of information that can easily be copied don’t have a price tag. They just have people zapping them. People who could use a lot more context and recognition than they currently get (showing top zappers everywhere and letting creators reply to zaps being just a humble start).
For the stuff that does have a price tag, even the most obvious answer isn’t as bad as I thought it would be:
Just put it on the button.
The bigger the sum, the bigger the button.
![Bigger sum = Bigger button](https://cdn.satellite.earth/c6893502abae2183987dc01ec7b5b62e3bf4838a80b162c8643f8cb758653843.png)
eCash is what can make all of this work as a one-button action anyway, removing most of the friction. A budget, on the other hand, is what can remove most of the worrying. Color indications, permission prompts for higher amounts, etc etc… can all work in tandem for this.
With these examples I’m mostly trying to give you an idea of what is still left largely unexplored territory.
A territory that we will have to go and explore in any case.
A territory where Communities are easier to monetize than Content is.
A territory where I can count the current number of active designers on both hands.
A territory that can desperately use a Design Community.
You can guess twice what’s next on my todo-list….
-
*Yesterday's edition https://stacker.news/items/611566/r/Undisciplined*
@ek makes an appearance on the leaderboard and Warren Buffett makes an impressively foolish statement that will likely live on for eternity.
*Bookkeeping note: I've gotten more zaps the past few days, so I'm going to bump my zaps to top posts and comments up to 69. It would be nice to just be able to forward sats to more than 5 people.*
----
### July 18, 2023 📅
----
### 📝 `TOP POST`
**[Phoenix Beta Wallet Migration](https://stacker.news/items/211009/r/Undisciplined)**
#### Excerpt
> As you may already know since it was mentioned on SN [here](https://stacker.news/items/207349), Phoenix released a new blog post last week to announce the new Phoenix wallet: [a 3rd generation self-custodial Lightning wallet](https://acinq.co/blog/ph […]
*3519 sats \ 14 comments \ @ek \ ~bitcoin*
----
### 💬 `TOP COMMENT`
**https://stacker.news/items/210873/r/Undisciplined?commentId=210963**
#### Excerpt
> A mining node can only "censor" any block it mines. A miner with huge percent of hashrate could mine 6 blocks in a row somewhat regularly.
*2819 sats \ 0 replies \ @nullcount*
From **[Can Bitcoin miners that are routing nodes exploit their peers/channel partners?](https://stacker.news/items/210873/r/Undisciplined)** by @02ca839694 in ~bitcoin
----
### 🏆 `TOP STACKER`
2nd place **@ek** (1st hiding, presumed @siggy47)
*1637 stacked \ 2261 spent \ 1 post \ 6 comments \ 0 referrals*
----
### 🗺️ `TOP TERRITORY`
**~bitcoin**
> everything bitcoin related
founded by @k00b on Tue May 02 2023
*58.9k stacked \ 0 revenue \ 70.9k spent \ 94 posts \ 264 comments*
https://imgprxy.stacker.news/fsFoWlgwKYsk5mxx2ijgqU8fg04I_2zA_D28t_grR74/rs:fit:960:540/aHR0cHM6Ly9tLnN0YWNrZXIubmV3cy8yMzc5Ng
### July 18, 2022 📅
----
### 📝 `TOP POST`
**[What do I need this bitcoin for? 🔦](https://stacker.news/items/46555/r/Undisciplined)**
#### Excerpt
> Famous investor Warren Buffett recently said:
> #### *"If you offered me all the bitcoins in the world for $25, I wouldn't take it."*
*112.5k sats \ 29 comments \ @hynek \ ~bitcoin*
----
### 💬 `TOP COMMENT`
**https://stacker.news/items/46555/r/Undisciplined?commentId=46604**
#### Excerpt
> I am really starting to get tired of the bitcoin isn't an inflation hedge, lol it stops you from dilution IN THE BITCOIN network, it doesn't control whats going on in secondary markets, gosh everytime I have to explain that one it does my head in lol […]
*564 sats \ 2 replies \ @TheBTCManual*
From **[What do I need this bitcoin for? 🔦](https://stacker.news/items/46555/r/Undisciplined)** by @hynek in ~bitcoin
----
### 🏆 `TOP STACKER`
1st place **@cryptocoin**
*618 stacked \ 375 spent \ 5 posts \ 25 comments \ 0 referrals*
----
### 🗺️ `TOP TERRITORY`
**~bitcoin**
> everything bitcoin related
founded by @k00b on Tue May 02 2023
*119.4k stacked \ 0 revenue \ 124.9k spent \ 60 posts \ 194 comments*
https://imgprxy.stacker.news/fsFoWlgwKYsk5mxx2ijgqU8fg04I_2zA_D28t_grR74/rs:fit:960:540/aHR0cHM6Ly9tLnN0YWNrZXIubmV3cy8yMzc5Ng
### July 18, 2021 📅
----
### 📝 `TOP POST`
**[All-time Bitcoin price chart (fork of zorinaq's)](https://stacker.news/items/387/r/Undisciplined)**
Link to https://price.bublina.eu.org/
*546.3k sats \ 4 comments \ @spain \ ~bitcoin*
----
### 💬 `TOP COMMENT`
**https://stacker.news/items/325/r/Undisciplined?commentId=386**
#### Excerpt
> Sure. To answer such questions you definitely need JavaScript client-side analytics. But there may be some open source libraries for that? But surely it may be easier to use a ready-made 3rd party one, just wondering how much it can cost.
*1 sat \ 1 reply \ @spain*
From **[Opinions on analytics for Stacker News?](https://stacker.news/items/325/r/Undisciplined)** by @k00b in ~bitcoin
----
### 🏆 `TOP STACKER`
1st place **@spain**
*7 stacked \ 6 spent \ 1 post \ 2 comments \ 0 referrals*
----
### 🗺️ `TOP TERRITORY`
**~bitcoin**
> everything bitcoin related
founded by @k00b on Tue May 02 2023
*40 stacked \ 0 revenue \ 78 spent \ 6 posts \ 16 comments*
originally posted at https://stacker.news/items/612715
-
Hi all!
Lately I've been working on a small Python project to try approaching the pathfinding problem in Lightning as a min-cost network flow problem.
I know René Pickhardt already used this approach in order to prove the superiority of the zero base fee approach; in fact, I read his papers and got the gist of it.
Now I'm trying to model the problem mathematically, using the pyomo package in Python, but I came up with a question that I couldn't find an answer for:
**in a multi-path payment (MPP), can two HTLCs go through the same channel?**
More specifically, I'm interested in understanding if the following situation can happen or not:
```
nodes = {"S", "A", "B", "C", "E", "F", "G", "I", "D"}  # S source, D destination
```
With S being the source node (aka the payer), D being the destination node (aka the payee).
The scope of the question does not include considerations about base or rate fees.
Node S computes the optimal path that minimises the fees for a given amount and finds that the optimal choice is to execute an MPP along these paths:
```
B E
A -> -> -> G -> D
C F
h0 h1 h2 h3 h4
```
where the first two hops (h1 and h2) see two parallel payments going on; then at hop h3 the MPP converges at node G and proceeds with the last hop to the destination.
As far as I know, in classical MPP the preimage is the same for every parallel path (correct me if I'm wrong), so I'm wondering whether this is possible or not. Moreover, if there were another hop such that:
```
B E
A -> I -> -> -> G -> D
C F
h0 h1 h2 h3 h4 h5
```
would that be possible too? That is, two HTLCs flowing through the same channel for the first hop, then splitting and reconverging.
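For concreteness, a minimal min-cost flow model in pyomo might look like this (illustrative channel data and fee rates, zero base fees, not the actual project code), matching the first diagram above:
```
# Min-cost flow sketch: route `amount` from S to D at minimum proportional fee.
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, minimize)

nodes = ["S", "A", "B", "C", "E", "F", "G", "D"]
# arc: (capacity in sats, proportional fee rate) -- made-up numbers
arcs = {
    ("S", "A"): (100, 0),  ("A", "B"): (60, 100), ("A", "C"): (60, 200),
    ("B", "E"): (60, 100), ("C", "F"): (60, 100),
    ("E", "G"): (60, 100), ("F", "G"): (60, 100), ("G", "D"): (100, 50),
}
amount = 80  # more than any single branch can carry, forcing a split at A

m = ConcreteModel()
m.f = Var(arcs.keys(), domain=NonNegativeReals)

# Minimize total proportional fees.
m.cost = Objective(
    expr=sum(rate * m.f[a] for a, (_, rate) in arcs.items()), sense=minimize)

# Flow conservation: S emits the amount, D absorbs it, others pass it through.
def balance(m, n):
    inflow = sum(m.f[a] for a in arcs if a[1] == n)
    outflow = sum(m.f[a] for a in arcs if a[0] == n)
    if n == "S":
        return outflow - inflow == amount
    if n == "D":
        return inflow - outflow == amount
    return inflow == outflow
m.balance = Constraint(nodes, rule=balance)

# Channel capacity limits.
def capacity(m, u, v):
    return m.f[(u, v)] <= arcs[(u, v)][0]
m.capacity = Constraint(arcs.keys(), rule=capacity)

# from pyomo.environ import SolverFactory; SolverFactory("cbc").solve(m)
```
The LP happily splits the flow at A and reconverges it at G; whether the resulting HTLCs may actually share a channel is exactly the question above.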
Hope the question is clear enough, otherwise I'm open to clarifications.
Thanks!
originally posted at https://stacker.news/items/612642
-
**"ผลกระทบของน้ำยาบ้วนปากต่อจุลชีพในช่องปากและร่างกาย: มุมมองใหม่จากการวิจัยล่าสุด"**
ในช่วงหลายทศวรรษที่ผ่านมา น้ำยาบ้วนปากได้กลายเป็นส่วนหนึ่งของกิจวัตรการดูแลสุขภาพช่องปากของคนจำนวนมาก โดยมีจุดประสงค์หลักเพื่อลดกลิ่นปาก ป้องกันฟันผุ และรักษาสุขภาพเหงือก อย่างไรก็ตาม การวิจัยล่าสุดได้เผยให้เห็นถึงผลกระทบที่ไม่คาดคิดของการใช้น้ำยาบ้วนปากเป็นประจำ โดยเฉพาะอย่างยิ่งต่อระบบนิเวศของจุลชีพในช่องปากและร่างกาย
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/b34b440824d517ec4da6ac67f3197dbc9f03d82d70fdeb7f4b77909bacfb9667/files/1721273510342-YAKIHONNES3.webp)
A recent study published in the Journal of Medical Microbiology on June 4, 2024 by J.G.E. Laumen and colleagues ( https://www.microbiologyresearch.org/content/journal/jmm/10.1099/jmm.0.001830 ) examined the effect of daily use of Listerine Cool Mint mouthwash on the microbes of the mouth and throat.
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/b34b440824d517ec4da6ac67f3197dbc9f03d82d70fdeb7f4b77909bacfb9667/files/1721273188372-YAKIHONNES3.png)
The study found that three months of Listerine use caused significant changes in the composition of the oral and throat microbiome, most notably an increase in the bacteria Fusobacterium nucleatum and Streptococcus anginosus, species that can cause health problems when present in excessive numbers.
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/b34b440824d517ec4da6ac67f3197dbc9f03d82d70fdeb7f4b77909bacfb9667/files/1721273244426-YAKIHONNES3.png)
The two species affect health as follows:
Fusobacterium nucleatum:
- Normally found in the mouth, but problematic in excessive amounts
- Associated with periodontal (gum) disease
- Linked in some studies to colorectal cancer
- Contributes to inflammation in the body, which may be connected to other systemic diseases
Streptococcus anginosus:
- Part of the Streptococcus milleri group found in the mouth, throat, and digestive tract; normally harmless, but in some cases it can turn opportunistic
- Can cause oral infections such as abscesses in the mouth
- In severe cases, can cause bloodstream infections or abscesses in internal organs such as the brain, lungs, or liver
These findings raise important questions about how mouthwash works and its long-term effects on health. Consider the main ingredients of Listerine Cool Mint (shared by many other mouthwashes as well):
1. Water (the main component)
2. Alcohol (typically around 21.6%)
3. Essential plant oils:
   - Eucalyptol (eucalyptus oil)
   - Menthol
   - Thymol
   - Methyl salicylate (wintergreen oil)
4. Sweetener (usually sorbitol)
5. Benzoic acid (preservative)
6. Coloring agents (may use green dye)
7. Sodium saccharin (sweetener)
8. Sodium benzoate (preservative)
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/b34b440824d517ec4da6ac67f3197dbc9f03d82d70fdeb7f4b77909bacfb9667/files/1721273554283-YAKIHONNES3.jpg)
Each of these ingredients can contribute to upsetting the balance of oral microbes:
1. Alcohol and essential oils: strongly antiseptic, destroying both harmful and beneficial bacteria and opening the door for certain species to overgrow
2. Preservatives: inhibit the growth of some microorganisms more than others
3. Sweeteners: serve as a food source for certain bacteria or alter the environment in the mouth
These findings challenge long-held beliefs about the benefits of mouthwash. Mouthwash has traditionally been promoted as reducing bad breath, preventing cavities, and helping treat gum disease. The new findings, however, suggest that these short-term benefits may not be worth the potential long-term harm. Disrupting the balance of the oral microbiome can allow harmful bacteria to proliferate, raising the risk of disease both in the mouth and in other systems of the body, including periodontal disease, esophageal and colorectal cancers, and other systemic conditions.
This underscores how carefully oral-care products should be chosen. Consumers should recognize the importance of maintaining a balanced oral microbiome and consider alternatives that may disturb that ecosystem less, such as regular brushing, proper flossing, and a diet that supports oral health.
![image](https://yakihonne.s3.ap-east-1.amazonaws.com/b34b440824d517ec4da6ac67f3197dbc9f03d82d70fdeb7f4b77909bacfb9667/files/1721273600336-YAKIHONNES3.jpeg)
Eating for oral health is an important part of caring for teeth and gums, and of keeping the microbes of the mouth and body in balance. It includes:
- Natural, minimally processed foods
- Foods with probiotics and prebiotics, which help maintain a healthy microbial balance, e.g. yogurt, kefir, kimchi, cheese
- Foods that stimulate saliva production, since saliva washes away bacteria and food debris, e.g. meat, eggs. Chewing food thoroughly and eating slowly also stimulate saliva flow and promote good oral health
- Plain water: drinking enough clean water rinses the mouth and keeps it moist

Foods to avoid or limit are those high in starch or sugar, which promote the growth of the bacteria that cause tooth decay and inflammation, along with foods containing the additives mentioned earlier.
A balanced and varied diet benefits not only oral health but the overall health of the body as well.
If you still want to use mouthwash, consider using it only when necessary rather than every day, and perhaps choose one without alcohol and without the other additives discussed above.
Ultimately, this research highlights the importance of not clinging to old beliefs when new scientific evidence emerges, and of approaches to health care that pay more attention to our microbial ecosystems.
-
NBC reports that, according to a senator, a report of a suspicious person was received an hour before a gunman opened fire at former President Donald Trump.
U.S. officials learned of an alleged Iranian plot to kill Trump weeks before the attempted assassination in Butler, Pennsylvania, sources said, adding that nothing indicated the two were connected.
The Department of Homeland Security said its inspector general would investigate the Secret Service's security operation at Saturday's rally.
Secret Service Director Kimberly Cheatle said that "the buck stops with me" and that the assassination attempt Saturday "should have never happened."
That is to say, there are all kinds of threats to Trump's security. So why isn't enough being done to protect him? When Trump wins the election, will the Secret Service really take care of his security, or will it keep ignoring reports and threats against his life? Do some groups want him dead without even bothering to hide it? It seems obvious that they do, but are there really no plans to stop an assassination that is being announced without any shame?
originally posted at https://stacker.news/items/612126
-
Let's start with the Terra/Luna fiasco. TerraUSD (UST) is (was) an algorithm-based stablecoin issued by LFG (Luna Foundation Guard). This means an algorithm automatically runs a price-formation mechanism, which in turn means that UST was pegged to the US dollar but (and this is the big difference from, say, Tether) had no USD assets backing it, as is (allegedly) the case with other stablecoins. In UST's case there was a second token (LUNA) meant to absorb price swings in UST, with a special burn/mint mechanism balancing supply and demand.
There was a targeted attack on the Terra network after the announcement in March that Terra had bought 10 billion USD worth of bitcoin as a currency reserve. In April, Terra had been buying at over 40,000 USD. After the attack, Terra was forced to sell its bitcoin (at around 32,000 USD) to defend the peg to the US dollar. Unloading those 42,530 bitcoins onto the market pushed the prices of all cryptocurrencies down further (because, as we know, when the OG Bitcoin rises, all shitcoins rise with it; when Bitcoin falls, everything else falls too).
As mentioned above, this chain of effects produced a surplus of LUNA, which drove the price down further, with the end result that LUNA is (as of today) "worth" 0.000185 USD, compared to almost 100 USD in April.
![](https://image.nostr.build/ef642842847f90c2e2c55a28772536bb762e25860b057ce61a773c1d1c76072e.png)
In my opinion, the most important takeaways from this event are the following:
- There will be a need for more and stricter regulation in the future.
- Trust in automatic, algorithm-based stablecoins is likely to have suffered badly for the foreseeable future.
- The entanglement of cryptocurrencies, tokens, and Bitcoin should not be underestimated. Bitcoin's price drop was the death blow for Terra, but Terra's sell-off also contributed to a further slide in Bitcoin's price.
- As touched on above, Bitcoin has to emancipate itself from "crypto". The news around Terra and the terrible reports that came with it about suicides, ruined livelihoods, etc. reflect back on Bitcoin. Bad news is oil on the fire of all Bitcoin opponents, and when no tenable accusations against Bitcoin itself can be found to stoke FUD, news from the broad spectrum of shitcoins is dragged in.
- Terra's blockchain was halted. By developers. Halted. What did DeFi stand for again? Exactly.
And that's enough of Terra & Luna; onward to Sol(ar)... sorry :(
Last week I took a somewhat closer look at Bitcoin mining with solar power in Germany. So-called behind-the-meter mining is one of the most important building blocks in overturning the narrative that Bitcoin mining is energy-inefficient and harmful to the climate. Much is written about Bitcoin consuming as much energy per year as smaller countries. In the same breath it is claimed that Bitcoin's underlying value-creation mechanism, proof of work (PoW), is the root of all evil and should be replaced by proof-of-stake (PoS) "mining". Greenpeace is even running its own campaign, "Change the Code" / "Clean up Bitcoin", to push for changing Bitcoin's code in favor of PoS.
🌐 [CHANGE THE CODE](https://netzpolitik.org/2022/change-the-code-umweltsuende-bitcoin)
That is silly, of course. Even though large amounts of energy are needed to mine Bitcoin, that energy is converted into a product. Nobody would think of accusing BASF or Apple of consuming too many raw materials, because in that equation everyone understands that producing a product always requires an input: a precursor, a raw material, or energy. Output requires input. In the case of the Bitcoin protocol, the output is bitcoin, the coins, and the input is energy in the form of electricity.
One can of course argue about the product. Many people see no value in Bitcoin and would therefore dispute that the high energy consumption is justified by the production of these coins. But nobody would claim that every product BASF makes is relevant to their life either, and thereby question which environmentally burdensome production serves the public good and counts as justifiable and which does not.
Anyone who does not see in Bitcoin a sovereign, transparent, and fully decentralized monetary system, in which everyone has full control over their money flows, the authenticity of the money they receive, and the whereabouts of their savings, without third parties being able to exert influence, impose controls, or issue bans, should go back to bed, get up again, and think it through from the start. Energy matters to people: increased electricity production correlates with increased human prosperity.
That leads directly to the next point. Increased electricity production is not bad in itself, quite the opposite. But it matters a great deal which sources this additional energy comes from. Geopolitical and climate-related developments in recent years have shown that a change of thinking and course (not at any price, and not without guaranteeing a basic supply) is quite sensible. More and more private, but also semi-professional, groups are investing in photovoltaic equipment to cover their own consumption, or at least to cut electricity costs, and, in the case of semi-professional installations, naturally also to earn a euro or two on the side. I spoke with one semi-professional operator about the sustainability and profitability of Bitcoin mining in such a 1 MWh installation, and together we worked through a few example calculations.
To plan a mining operation in theory, you have to look at the various components and variables. The goal is to build a projection that shows whether such an undertaking can be profitable at all.
The components and variables are, first, the (1) hardware components (solar installation, battery storage, mining equipment, and peripherals); second, the (2) electricity-side variables (sun hours, feed-in tariff, and depreciation period); and third, the (3) financial or Bitcoin-side variables (hashrate and difficulty, Bitcoin price, block reward, and pool fee).
🖨️ Let's start with the hardware components:
The scenario at hand involves a solar installation capable of producing 1 MWh (megawatt-hour). The installation already exists, so I won't go into acquisition costs, maintenance, etc. here.
So assumption 1️⃣ for this projection is that electricity can be produced, apart of course from the running costs and the feed-in-tariff variable, which we'll get to later.
For the mining equipment we assumed a Bitmain Antminer S19 XP, which delivers 140 TH/s (hashrate) while drawing 3,040 W, or about 3 kW. This miner currently costs around 10,000 euros. Here too a caveat is in order: the current state of the world economy and the all-too-familiar semiconductor supply-chain bottlenecks, especially for ASIC (application-specific integrated circuit) chips, mean the equipment may become significantly more expensive, lead times may stretch toward infinity, or entirely different hardware may have to be considered.
So assumption 2️⃣ is that the desired equipment can be acquired with relatively little trouble.
On the electricity side, battery storage also has to be budgeted for, since the mining equipment should ideally be utilized 100% of the time, which means storing power during the day to bridge the nights and the sun-poor winter days when there is no direct solar supply. Since in December there are only about 8 hours between sunrise and sunset, the remaining 16 hours, roughly 70% of the day, have to be covered from storage. At 16 hours times 3 kW, you need about a 50 kWh battery, which currently runs about 25,000 euros to buy.
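To make the sizing arithmetic explicit, a quick sketch (numbers straight from the text above):
```
# Storage sizing for assumption 3: bridge the hours without direct sun.
miner_kw = 3.04        # Antminer S19 XP draw, ~3 kW
dark_hours = 16        # December: ~8 h of daylight, 16 h from storage
needed_kwh = miner_kw * dark_hours
print(f"{needed_kwh:.0f} kWh needed -> a ~50 kWh battery")   # ~49 kWh
```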
So in assumption 3️⃣ we take it that a battery with 50 kWh of capacity is needed and, as with the miner, can be bought without trouble.
Peripherals means shelter, a LAN connection, and other equipment that can be left out of this back-of-the-envelope calculation. So for assumption 4️⃣: negligible.
🌦️ So much for the hardware. On to the electricity-side variables:
Annual figures for sun hours, i.e. full hours that drive a solar panel to 100% utilization, are easy to look up online. Of the 8,760 hours in a year, sun hours average about 1,525 per year, a share of 17.4%. Put very simply: the sun shines for about a fifth of the year 😎
This average naturally swings wildly across the year, since the sunniest months (May, June, July) see roughly four times the sun hours of the sun-poorest months (November, December, January). And since sun hours are a natural energy source that, unlike other generation methods, we cannot influence at all, we hardly need to debate that these figures are only reference points and never fixed values. On top of the "full" sun hours, a solar installation is of course not all-or-nothing: it already starts producing power at dawn, just not at the full capacity of true sun hours.
That means for assumption 5️⃣ we calculate from the pessimistic lower bound. In practice the installation will probably yield more, but that hardly matters if we have to buy a 50 kWh battery anyway: whether it delivers the full 50 or only 30 at the end of the day makes no difference to the purchase price, and it creates no additional running costs.
The electricity produced this way has a priced-in cost (equipment and OPEX) of about 6 cents per kWh. In any model calculation, the break-even must therefore never fall below this cost price. But that alone isn't enough: as an electricity producer you can collect a feed-in tariff on the open market, a price per kWh paid for supplying the grid with your power. This tariff currently stands at about 10 to 12 cents, which means that this is the real break-even. In short, every kWh of electricity spent must generate at least 12 cents worth of bitcoin for the effort to be even remotely worthwhile. Assumption 6️⃣ therefore sets the break-even at a minimum of 12 cents, and even that is a VERY optimistic estimate. The hash price is derived from the hashrate and the Bitcoin price; as the chart shows, it is currently very low. More on that in the calculation.
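The break-even logic in a few lines (a sketch; the 12-cent figure is the opportunity cost from the text):
```
# Assumption 6: mining must beat the feed-in tariff, not just production cost.
own_cost_eur_kwh = 0.06   # all-in production cost
feed_in_eur_kwh = 0.12    # upper end of the current feed-in tariff
break_even = max(own_cost_eur_kwh, feed_in_eur_kwh)
# Every kWh fed to the miner forgoes the tariff, so each kWh must yield
# at least this much in bitcoin:
print(f"required yield: {break_even:.2f} EUR per kWh")
```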
![via Bitcoin Magazine](https://image.nostr.build/d5c11255e8c18747113fcc31009b90b2f13e1d61b06d8155173d18d951194604.jpg)
I'll keep the depreciation topic short. A quick bit of research showed that a new ruling allows (or prescribes) writing off computers and hardware equipment in full in the first year. In bookkeeping terms, the entire mining rig is worthless after one year and no longer appears on the mining operation's balance sheet. So that is assumption 7️⃣: full depreciation of the equipment within one year.
![](https://media.tenor.com/images/e591f38672e1471fb64039e4473abc8f/tenor.gif)
While researching the energy topic, I kept running into the rising prices of everything, but especially of energy, in Germany. Since I always like to work in a bit of opinion and thesis, here's a short overview.
Energy prices in particular moved up and down in the single-digit percentage range over the five years leading up to May 2021 (exactly one year ago). Since May 2021, the percentage increase has exceeded 10% for the first time, has stayed positive and double-digit ever since, and reached a value of over 87% this past April.
![](https://image.nostr.build/848e8a7dec043c0a5daa4fe5b0e09bdb5a2386df66ca16fe33bfd53eea0ca110.png)
![](https://image.nostr.build/193eec39f203bb20cb0777e629605441dadd788294407595c83b492112df1023.png)
![](https://image.nostr.build/5bfcf3eae67f430fcca73d151e5093e60a0075187c6690e10d1434ad0ead93ba.png)
Blaming the situation in Russia for this drastic rise in energy and producer costs would be lazy and wrong. Of course the war in Ukraine, the supply bottlenecks across whole industries that came with it, and the sanctions imposed by the West are a catalyst of the price increases.
But one must not forget that this is a homemade problem, decades in the making. Those responsible for this rapid and untenable price rise sit neither in Moscow, Kyiv, nor Beijing, but here in Germany. For over 20 years, German energy policy has consistently pursued just two goals: the exit from nuclear energy and the exit from coal power. Investments in renewable energy are being made, but not as consistently or as media-effectively as the much more easily marketed exit from the "dirty" energy sources, which advertises beautifully under climate targets, environmental policy, and the ESG banner. The fact that Germany was once the market leader in photovoltaics and lost all of its know-how to China doesn't quite fit that picture. So if we in Germany cannot sensibly cover our energy needs with nuclear, coal, or renewable power, where do we get it? Exactly: we have to buy it in, at a premium.
So we created this problem in Germany ourselves. The systematic decommissioning of reliable energy sources over the years has had a substantial impact on electricity costs for industry and households, which are among the highest in the developed world, and it makes the people and the state extremely dependent on energy sources abroad, e.g. in Russia or Qatar. German policy thus created the dependence on Russia and the higher costs that come with it, not the other way around. Complex systems like a state's energy supply must be steered, monitored, and regulated, and politicians and advisers should be equipped to do so. Central planning of these systems, however, is not possible, or to quote Ludwig von Mises:
> The fundamental objection advanced against the practicability of socialism refers to the impossibility of economic calculation. [...]. A socialist management of production would simply not know whether or not what it plans and executes is the most appropriate means to attain the ends sought. It will operate in the dark, as it were. It will squander the scarce factors of production both material and human (labor). **Chaos and poverty for all will unavoidably result**.
![](https://media.tenor.com/images/2bd863dc97d086c734b539a18b2ccbd0/tenor.gif)
💵 Now, at last, the financial or Bitcoin-side variables.
🚂 Hashrate and difficulty
The hashrate denotes the computing power made available to the Bitcoin network. Every mining node has different computing power, and the hashrate sums up the total power currently present. It is thus not a quantity fixed in the protocol but a snapshot of the status quo. Overall, more and more computing power keeps being added, as mining becomes ever more professionalized and the ASIC chips described in the last post grow ever more powerful. As mentioned in the last post, we assume our operation runs a Bitmain Antminer S19 XP. Assumption 8️⃣ is that we produce 140 TH/s.
![](https://image.nostr.build/42672d4a6bf03e102d037b7bcf5578054df6220e7da421054736d5d475a4e4bf.png)
The rising hashrate directly influences the difficulty. The difficulty is a piece of logic built into Bitcoin's consensus algorithm to maintain a steady block time. Unlike the hashrate, the difficulty is therefore a variable anchored in the protocol. New blocks are supposed to be added to the chain roughly every 10 minutes, ensuring that the rate at which new blocks are created stays constant and proportional to the hashrate (computing power) devoted to the network. Put simply, the "puzzle" should be neither too easy when there are many participants (nodes) nor too hard when there are fewer, so the protocol adjusts the difficulty at regular intervals. Every time new miners join the network and the hashrate, and with it the competition, increases, the difficulty rises as well, preventing the average block time from shrinking. Conversely, the difficulty falls when miners leave the network. The block time thus stays constant even as more or less computing power is devoted to the network. A difficulty adjustment takes place every 2,016 blocks (or every 20,160 minutes = 14 days). So far the difficulty has, with interruptions, kept climbing: since the first block mined in 2009 it has gone from 1 to about 30 trillion today. It is therefore safe to assume that it will continue to increase.
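The adjustment rule itself fits in a few lines (a sketch; the protocol additionally clamps each retarget to a factor of 4 in either direction):
```
# Retarget every 2,016 blocks: scale difficulty by how fast the blocks
# actually arrived versus the 14-day target.
TARGET_MINUTES = 2016 * 10

def retarget(old_difficulty, actual_minutes):
    ratio = TARGET_MINUTES / actual_minutes
    ratio = max(0.25, min(4.0, ratio))   # protocol clamps the adjustment
    return old_difficulty * ratio

# Blocks came in 10% too fast -> difficulty rises by about 11%.
print(retarget(30e12, TARGET_MINUTES * 0.9))
```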
![](https://image.nostr.build/80e70a884b7ee8d2ca8e245533aac104b601851de3473d666580b67a9d503d89.png)
For our calculation we assume the difficulty doubles every year, i.e. a difficulty increase of 100% per year, as assumption 9️⃣.
📈 Bitcoin Preis
Der Preis ist wohl die am einfachsten zu verstehende Variable. Den tagesaktuellen Bitcoin Kurs kennt mittlerweile sogar jede Großmutter, er wird auf allen Seiten jeglicher Finanzdienstleister angezeigt, und große Schwankungen sind meistens sogar einen Bericht in der Presse wert. Diese Schwankungen, z.B. der kürzliche Kursverlust von über 50% seit ATH (All time high) letztes Jahr, machen die Rechnung für jede Mining Operation gerade so kompliziert. Denn der Preis ist natürlich ausschlaggebend für die Profitabilität, allerdings lässt er sich genau so gut vorhersagen, wie die nächsten sechs Richtigen im Lotto. Diese Planungsunsicherheit macht das ganze Unterfangen viel risikoreicher als herkömmliche Operationen, bei denen der Preis des hergestellten Produkts einer gewissen Preisstabilität unterliegt, oder maximal im einstelligen Prozentbereich nach oben oder unten ausschlagen kann. Das Ziel der meisten Miner ist es auch so viele bitcoins wie möglich in der Bilanz zu halten, wegen der großen möglichen positiven Preisentwicklung. So wird versucht nur die Anzahl an bitcoins zu verkaufen, die nötig ist, um die Kosten zu decken, denn jeder bitcoin, der heute nicht verkauft wird, könnte morgen schon ein vielfaches wert sein. Oder aber genau andersrum, was einen weiteren Risikofaktor darstellt.
What is interesting to observe in this context is that a falling or sideways-trading (comparatively low) price does not necessarily mean that many miners switch off their equipment. As the chart below shows (BTC Price vs Hash Rate), a hashrate of 258.04 EH/s (1 exahash per second = 1,000,000,000,000,000,000 hashes per second) was reached on May 16, which is probably very close to the all-time high for hashrate.
![](https://image.nostr.build/122d17ea298acf252db180ae03800bc256dd1fa04c5518289e1485cbdc9e1b48.png)
For our analysis we wanted to be conservative, so we assumed an average price of 30,000 euros, since the operation should go live as soon as possible (if profitable). Assumption 1️⃣0️⃣: Bitcoin price = 30,000 euros.
💰 Block Reward
When mining, the various nodes (miner nodes) in the network collect transactions and bundle them into blocks. Before anything else, however, the miner node adds a transaction to the new block in which it sends itself the block reward, a payment for the work performed. This transaction, known as the coinbase transaction, is a special type of transaction in which coins are created "out of thin air." The block reward is thus one of two ways (for now!) to earn money by mining (the other source is the mining fee, a fee paid with every transaction). "For now" because the block reward halves every 210,000 blocks until it eventually ceases to be a relevant amount and is effectively zero. Currently the reward is 6.25, meaning the successful miner receives 6.25 bitcoins (plus fees) for the newly created block. The next so-called halving takes place in 2024, after which the reward drops to 3.125. The protocol does not count in time intervals but in blocks; however, since the block time is held relatively constant at 10 minutes per block, as mentioned above, it is possible to calculate when each halving will occur (and even roughly when the last, twenty-one-millionth bitcoin will be mined. Spoiler alert: never. Because of the halving, the twenty-one-millionth bitcoin will never actually be mined. There will, however, be a last bitcoin, and it is expected to see the light of day around the year 2140). Since we have set ourselves a short- to medium-term planning horizon, Assumption 1️⃣1️⃣ is a block reward of 6.25 BTC.
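The halving arithmetic is easy to verify yourself. This little sketch mirrors the protocol's integer arithmetic (the subsidy is tracked in satoshis and halved by a right shift, which rounds down), and shows why the total supply converges just below 21 million:

```python
# Sum all block subsidies over the halving epochs, in satoshis.
# Halving is an integer right shift, as in the consensus code,
# so the subsidy eventually rounds down to zero.

SATS_PER_BTC = 100_000_000
HALVING_INTERVAL = 210_000  # blocks per epoch

total_sats = 0
subsidy = 50 * SATS_PER_BTC  # initial block reward: 50 BTC
while subsidy > 0:
    total_sats += HALVING_INTERVAL * subsidy
    subsidy >>= 1

print(total_sats / SATS_PER_BTC)  # 20999999.9769... -- just shy of 21 million
```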
🏊♀️ Pool Fee
First I need to explain the concept of a mining pool. The easiest analogy is a lottery syndicate. The mining process resembles a game of chance in which every player has to guess a number that is as small as possible; the more computing power a player has, the more often, or rather the faster in succession, they can guess. To improve your odds, you join a syndicate, i.e., a mining pool: even with modest computing power, this increases your chance of a payout. The mining pool bundles its members' computing power and thereby appears to the network as a single participant with an enormous hashrate. When a block is successfully mined, the proceeds (block reward plus transaction fees) are divided by the pool among its members in proportion to the computing power each contributed. In the lottery syndicate, the jackpot would likewise be split according to the number of players and the number of rows each player paid for. For these administrative services, the pool operator usually retains a fee, typically around 2%, though this varies from provider to provider. As mentioned before, mining keeps getting more professional and the difficulty keeps climbing. For a small mining operation whose hashrate is vanishingly small compared with the total, it is therefore all but impossible to operate without a pool. The size of the pools becomes very clear in the next chart, which nicely shows that more than half of the total hashrate is split among the six largest pools.
![](https://image.nostr.build/013f5e5ae16af3d72eb0f1c494704e42df97fb46096411d6cd329db85df90364.png)
So, for our analysis we take a pool fee of 2% as Assumption 1️⃣2️⃣.
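The payout logic itself is plain proportionality. A hypothetical sketch (the pool size and transaction-fee figures are illustrative, not any specific pool's scheme):

```python
# Hypothetical proportional pool payout: each member receives a share of
# the block proceeds proportional to contributed hashrate, minus the fee.

def member_payout(block_reward_btc: float, tx_fees_btc: float,
                  pool_fee: float, member_ths: float, pool_ths: float) -> float:
    proceeds = block_reward_btc + tx_fees_btc
    return proceeds * (member_ths / pool_ths) * (1 - pool_fee)

# Our 140 TH/s in a hypothetical 20 EH/s pool with a 2% fee,
# assuming 0.1 BTC of transaction fees in the found block:
print(member_payout(6.25, 0.1, 0.02, 140, 20_000_000))  # ~0.0000436 BTC
```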
🧾 Which brings us, at last, to the calculation, i.e., pulling all the assumptions together.
Assumption 1️⃣: Electricity is available
Assumption 2️⃣: Mining equipment is in stock at retailers
Assumption 3️⃣: A battery storage system with 50 kWh capacity is required
Assumption 4️⃣: Peripherals are available or negligible
Assumption 5️⃣: We calculate with 1,525 hours of sunshine per year
Assumption 6️⃣: The break-even point is 12 cents
Assumption 7️⃣: Full depreciation of the equipment within one year
Assumption 8️⃣: Hashrate of 140 TH/s
Assumption 9️⃣: Difficulty increase of 100% per year
Assumption 1️⃣0️⃣: Bitcoin price = 30,000 euros
Assumption 1️⃣1️⃣: Block reward of 6.25 BTC
Assumption 1️⃣2️⃣: Pool fee of 2%
The calculation / projection is actually not difficult at this point. Once you have made all your assumptions and checked them against the actual circumstances / reality, no great mathematical skill is required. Instead, there are so-called profitability calculators for this purpose, which let you play through different scenarios with varying inputs fairly quickly. One such calculator is provided by Braiins.com, one of the leading providers of mining software. If you enter the variables gathered above into this calculator, it uses the current difficulty to compute, among other things, profits/losses, the break-even point, the number of bitcoins mined, and the cash flow.
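Before looking at the output, it helps to see how small the core arithmetic actually is. Below is a minimal sketch under the assumptions above; note that the network hashrate is an extra input not on our assumption list (taken from charts like the one further up), the ~3 kW draw of the S19 XP is an approximation, and the sketch deliberately ignores Assumption 9️⃣ (rising difficulty), which is why a real calculator reports a lower annual profit:

```python
# Minimal daily profitability sketch (Assumptions 6, 8, 10, 11, 12).
# network_ehs and power_kw are approximations, not part of the assumption
# list, and rising difficulty (Assumption 9) is deliberately ignored here.

BLOCKS_PER_DAY = 144  # one block per ~10 minutes

def daily_profit_eur(miner_ths=140.0,      # Assumption 8
                     network_ehs=258.0,    # e.g. the May 16 reading above
                     reward_btc=6.25,      # Assumption 11
                     pool_fee=0.02,        # Assumption 12
                     price_eur=30_000.0,   # Assumption 10
                     power_kw=3.0,         # approximate draw of an S19 XP
                     eur_per_kwh=0.12):    # Assumption 6
    share = miner_ths / (network_ehs * 1_000_000)  # 1 EH/s = 1,000,000 TH/s
    btc_per_day = share * BLOCKS_PER_DAY * reward_btc * (1 - pool_fee)
    revenue = btc_per_day * price_eur
    electricity = power_kw * 24 * eur_per_kwh
    return revenue - electricity

print(daily_profit_eur())  # ~5.7 EUR/day before rising difficulty and overheads
```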
Entering the variables into the profitability calculator and looking only at the next 12 months yields the following picture.
![](https://image.nostr.build/60af78ba9ba26b9fe0bb8623b5b06631492f8433437b56825101bbc8a3f65702.png)
What stands out here is, first of all, an end profit of 1,526 euros for the year. Hooray, you think, we made a profit. However, this profit is (1) very slim and thus hardly worth the effort, since our calculation does not yet account for any labor hours or other overheads, and (2) 12 cents per kWh is quite cheap, and adjusting it to just 18 cents already turns the profit into a loss.
Furthermore, we have to recognize that our hardware value of 35,000 euros has fallen to about 20,000 within a single year, since the mining equipment has to be fully depreciated in the first year and I allowed five years for the battery storage. Our cash flow therefore sits at -33,124 euros, since we purchased all the equipment in the first year.
Of course, you can now ponder which knobs to turn to construct a scenario that ends favorably for you without seeming absurd or completely made up. So let's assume the Bitcoin price climbs back to its previous ATH, putting it at roughly 60,000 euros, and that I can source my electricity much more cheaply, at 6 cents per kWh. In such a scenario we actually generate about 7,800 euros in profit. That means, if we assume the equipment lasts five years, we would even book a small profit and recoup our investment.
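Re-running the sketch from above with these friendlier inputs shows the same swing (again ignoring rising difficulty, so the real figure comes out somewhat lower):

```python
# Optimistic scenario: price back at the previous ATH, much cheaper power.
print(daily_profit_eur(price_eur=60_000.0, eur_per_kwh=0.06))  # ~24 EUR/day
```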
If mining has such poor margins, why would anyone enter the business at all, you may reasonably ask. First of all, there are many countries in the world where electricity is still very cheap, which naturally improves the margin. You also have to consider the economics of the mining equipment itself. The vast majority of ASIC chips (if not all) are manufactured in Asia, which means mining rigs are much cheaper to buy there, and other operating costs (staff, rent for the facility, water for cooling, etc.) are comparatively low there as well. On top of that, many of the large mining operations have been active for several years, profited from higher block rewards in the past, and have therefore already recouped much of their investment in equipment.
Overall, though, it must be said that Bitcoin mining is not an easy and above all not a particularly profitable undertaking. In Germany especially, the high hardware acquisition costs and heavily subsidized electricity mean it pays better to feed your solar power into the grid. Unfortunately, surplus electricity cannot yet be converted and stored cost-effectively via Bitcoin mining (an alternative is conversion into hydrogen, but that too is an expensive method), so a hybrid solution that switches between the feed-in tariff at peak times and Bitcoin mining at times of surplus power does not generate any profitable added value.
---
Bitcoin's utility is real and justifies its electricity consumption. What exactly that utility is depends heavily on whom you ask. In developed countries, Bitcoin offers savers, investors, and traders protection against inflation, for example, and thinking further ahead, Bitcoin could one day replace the dollar as the world's reserve currency. In poor, far less developed countries, the utility is framed much more modestly. Money is scarce there, and sending it from richer countries is extremely expensive. The market for global remittances is estimated at 450 billion dollars per year (plus an estimated 250 billion dollars in unrecorded flows), and the fees on these remittances often run into double-digit percentages. Add to this that many people simply have no direct access to banks: an estimated 30% of all adults in the world are unbanked. Banking the unbanked, i.e., giving people access to the financial system, is one of Bitcoin's greatest utilities, if not the greatest.
![](https://image.nostr.build/54aa256a8aa403824bc71bb5e936f9bf0ae00243e98ed20d8f65f61c4b16a215.png)
This means there is no need to debate whether the Bitcoin protocol should be switched from PoW to PoS, whether Bitcoin mining is inherently harmful to the environment, or whether clean electricity used for mining could be put to better use elsewhere. The question at hand is whether mining with renewable energy can be done profitably, and whether that is possible here in Germany.
As for the profitability of a solar-powered mining operation, the bottom line is that electricity at 6 cents per kWh combined with a significantly higher Bitcoin price would be needed for the effort and investment to pay off. The global mining industry keeps professionalizing, which pushes the viable kWh price further down. Add to that rising difficulty adjustments and shrinking block rewards. Today, fossil fuels in many parts of the world are still cheap enough for Bitcoin mining to be worthwhile there. However, there is a strong overall trend toward renewables, and it is estimated that even today more than 50% of mining runs on renewable or otherwise unused energy.
Because the cost range in which Bitcoin mining is worthwhile keeps shifting downward, Bitcoin creates an incentive to use the surplus electricity that the grid cannot absorb. Unlike the idea of storing energy in hydrogen, Bitcoin mining is a far more portable and direct form of energy storage, since the surplus energy is stored not in an intermediate product like hydrogen but directly in money. This incentive to make every form of electricity generation more profitable and to generate power continuously (even when the power produced would earn less than the subsidies, which today often leads to plants being switched off) is a classic free-market mechanism. An important factor here is Germany's energy subsidies: the state supports renewable generation even when, for example, the sun is not shining and the solar plant is producing no power.
Overall, a trend toward 100% behind-the-meter mining can be observed, meaning Bitcoin is mined exclusively where surplus power exists that would otherwise never be fed into the grid. That would take all the wind out of the critics' sails, since one could no longer argue that even renewables-based mining consumes electricity that could be used elsewhere, forcing other consumers back onto fossil fuels.
The takeaway, then, is that the mining industry as a whole keeps getting more efficient and more resource-friendly, but also ever more competitive and thus ever less profitable. If mining yourself no longer pays, that means not only that the feed-in tariff or subsidy is higher, but also that the money not invested could simply be put directly into Bitcoin to earn a higher return.
An ever more professionalized mining industry also leads to clustering, since in the long run only the large mining operations and mining pools, which can work with far thinner margins than, say, semi-professional outfits, prevail. Unlike in Bitcoin's early days, when mining was run from basements and kids' bedrooms on gaming graphics cards, which guaranteed genuine decentralization of the hashrate, this clustering concentrates political and technical influence in a smaller group of decision-makers. That can have non-negligible consequences, for example when protocol updates are introduced.
🫳🎤
---
In that spirit: 2... 1... Risiko!
![](https://media.tenor.com/images/4ae424f8d8ea36e86169862d84d1b31e/tenor.gif)
-
Bitcoiners have been saying this for a while. The firearms community needs to adopt bitcoin before they have no other options. Few have heeded this advice. I went to YouTube today and saw a video from a very popular firearms training channel called Hickok45. They released a video titled [Bad News - YouTube](https://www.youtube.com/watch?v=-KWxaOmVNBE) where they let their subscribers know about the uncertainty of their future due to YouTube's new policy banning firearms advertising.
YouTube is of course within their rights to do this, but that's not a solution to the problem. The problem goes deeper. Credit card processors and companies, as well as banks, have censored firearms purchases and could increase that censorship. Bitcoin fixes this.
As for the video hosting problem, there are other platforms that will likely open their doors to these trainers and educators, but that isn't good enough. Nostr is new, but I see it as the most hopeful solution, coupled with bitcoin over lightning. Over the past week I've been experimenting with the [satellite.earth](https://satellite.earth/) CDN file hosting service. I've been pretty impressed by it. It's very new, but it seems like a real solution to the file hosting centralization problem for Nostr.
I've tried in the past to make the case for bitcoin to gun owners, and few seem to see the problem. I think this is largely due to their faulty belief in the state. Some companies have adopted bitcoin; the only one I'm aware of is [Fenix Ammunition](https://fenixammo.com/). Anyone aware of others?
originally posted at https://stacker.news/items/612067
-
These are my very preliminary QB rankings, my only research being the [RotoWire depth charts](https://www.rotowire.com/football/nfl-depth-charts/) and player notes.
**Tier 1**
**Josh Allen, Jalen Hurts**
I don’t know who Allen will throw to other than Dalton Kincaid, but he needs only 25-ish TD passes to be the QB1 if he stays healthy, given all the rushing numbers. Hurts is second only because he’s not as good in real life, and there’s some small risk of implosion, skills-wise. But 10-plus TDs on the ground creates its own tier.
**Tier 2**
**Patrick Mahomes, Anthony Richardson, Lamar Jackson**
What’s odd about this tier is how different these players are. Mahomes, despite a poor 2023 regular season, gets in because he’s the GOAT, and Andy Reid is arguably the greatest offensive coach of all time. Richardson could easily be the 1.1, but injury risk and lack of track record give him a much lower floor. Jackson doesn’t get the goal line looks that often, especially with Derrick Henry in tow, but obviously his running gives him a huge lift.
**Tier 3**
**CJ Stroud, Kyler Murray, Jordan Love, Dak Prescott**
Stroud has a loaded receiving corps now with Stefon Diggs on the team, and he’d be my bet to lead the league in passing yards. But he doesn’t run much, and that caps his ceiling slightly. I almost put him in Tier 2 because you can make the case he’s basically Mahomes with 200 fewer rushing yards. Murray is another year off his ACL tear, runs a lot and gets Marvin Harrison. It’s still an open question whether he’s good, though. Love looks like a player, runs a little and has young, developing wideouts. Prescott is in a pass-heavy offense, has an elite WR1 and is the ultimate stat-padder which works in fantasy. (You could just push all four into Tier 2 and make it seven deep, as the cutoff is a bit arbitrary.)
**Tier 4**
**Joe Burrow, Justin Herbert, Brock Purdy, Kirk Cousins, Tua Tagovailoa, Jared Goff, Trevor Lawrence**
This is the tier of the passing-only QBs, with the exception of Herbert, who can run but might be hamstrung by a run-heavy offense, and Lawrence, who might not be good. I would gamble on Herbert’s athleticism and skills nonetheless if he falls into this tier. And Lawrence gets a much-needed rookie deep threat, which could open things up. Burrow is Tier 3 if he would stay healthy, but he obviously carries more risk on that front, and I don’t think he’ll run much now.
**Tier 5**
**Deshaun Watson, Daniel Jones, Matthew Stafford, Aaron Rodgers, Caleb Williams, Sam Darnold, Jayden Daniels, Baker Mayfield, Geno Smith**
Watson might be cooked, but there’s still upside if he can run like he used to. Jones has a big ceiling with his mobility and Malik Nabers on the team. Stafford has two top receivers and a great offensive coach. Rodgers is old, but he has weapons now, and Williams has a loaded receiving corps. People might laugh at Darnold in this tier, but he’s mobile and has Justin Jefferson, Jordan Addison and T.J. Hockenson. If he keeps the job (a big if), he has upside, which is what you want in this tier. Mayfield and Smith are solid and boring, which is also fine if you wait forever on QB. Daniels’ mobility puts him here instead of the scrub tier.
**Tier 6 (Scrubs)**
**Russell Wilson, Derek Carr, Will Levis, Drake Maye, Raiders QBs, Bo Nix**
These are the scrubs. One or two might not turn out to be scrubs, but it’s hard to get excited about any of them.
These rankings will change, but I like to get my preliminary ones down on paper before I get influenced by ADP and training camp hype.
-
These are my very preliminary TE rankings, my only research being the [RotoWire depth charts](https://www.rotowire.com/football/nfl-depth-charts/) and player notes.
**Tier 1**
**Sam LaPorta**
He gets his own tier because he’s the only TE for whom there are no red flags. Most TEs do very little as rookies, while LaPorta was a rare exception and should only get better. His competition for targets is perfect — a true No. 1 receiver, a play-making field-stretcher and a speedy running back, i.e., defenses have to account for multiple threats, but it’s not too crowded for LaPorta to see 140-odd targets in Year 2.
**Tier 2**
**Mark Andrews, Dalton Kincaid, Evan Engram, Trey McBride, Travis Kelce, George Kittle**
Kelce is the fantasy GOAT, but he turns 35 in October, and age catches up to everyone eventually. Andrews gets hurt a lot, but he should still be Lamar Jackson’s favorite target when healthy. Kincaid might lead the Bills in targets, and in Year 2 could take the leap to TE1. But unlike LaPorta, he hasn’t quite done it yet. Engram caught 114 passes last year, and Calvin Ridley is gone, so I don’t see his role diminishing much. McBride is a rising star in a good situation with only Marvin Harrison, Jr. to compete with for targets. And Kittle is a star when healthy and could see more looks if Brandon Aiyuk is traded.
**Tier 3**
**T.J. Hockenson, Kyle Pitts, Jake Ferguson, David Njoku, Cole Kmet, Brock Bowers**
There’s no doubt about Hockenson’s talent, but the downgrade to Sam Darnold or J.J. McCarthy could be steep. Pitts can *still* be the 1.1 with a competent quarterback, but at this point there’s *some* O.J. Howard risk (great athlete, average NFL player). Ferguson is just in a great spot with the Cowboys and their thin group of pass catchers after CeeDee Lamb. Njoku looked like a guy who belonged in the first round seven years ago, and it’s not that unusual for TEs to take some time to peak. (Njoku took longer than most, but like Engram, some of that was probably due to poor QB play early on.) Kmet’s got a lot more competition for targets, but also a likely QB upgrade, and he’s still young and on the upswing. Bowers is a home run swing — he might do nothing as a rookie, but could also be the Raiders’ 1A to Davante Adams out of the gate.
**Tier 4**
**Dalton Schultz, Dallas Goedert, Pat Freiermuth, Jonnu Smith**
If I were to get shut out of the top three tiers, I’d be okay with any of these guys. Schultz will have a role in a top passing offense, even if it’s a secondary one. Goedert has never really put it together, but is still in a good spot in the Eagles’ top-heavy offense. Freiermuth is a good pass catcher and could see more opportunities with the trade of Diontae Johnson. Smith is a wild card. His floor is basically zero, as he could be used to block, but his upside is Tier 2 if the Dolphins indeed make him the No. 3 option in their fast-paced offense.
**Tier 5 (scrubs)**
**Hunter Henry, Chigoziem Okonkwo, Tyler Conklin, Mike Gesicki**
You never know, one of them might bust out, and there are others not listed, but it would be hard to count on these guys. Gesicki intrigues me a bit since he used to be productive, and the Bengals are thin after their star receivers, but they never seem to use the TE much.
-
*Yesterday's edition https://stacker.news/items/610267/r/Undisciplined*
* * *
### July 17, 2023 📅
----
### 📝 `TOP POST`
**[Five Economic Terms You Should Know When Learning About Bitcoin](https://stacker.news/items/210219/r/Undisciplined)**
![economics](https://imgprxy.stacker.news/N5oemexK-8pmqb4-avH0_zZlynIjmjX-GEicMn54Yk0/rs:fit:600:500:0/g:no/aHR0cHM6Ly91cGxvYWQud2lraW1lZGlhLm9yZy93aWtpcGVkaWEvY29tbW9ucy8yLzIwL0Vjb25vbWljcy5qcGc)
*16.5k sats \ 19 comments \ @siggy47 \ ~bitcoin*
----
### 💬 `TOP COMMENT`
**https://stacker.news/items/210392/r/Undisciplined?commentId=210400**
#### Excerpt
> I'm not 100 percent sure this will interest you, but I stumbled upon this podcast episode yesterday. I am pretty ignorant about China, but this guest gave me an incredible lesson about Chinese culture and politics. Bitcoin is discussed, but there's s […]
*571 sats \ 3 replies \ @siggy47*
From **[Any research on "Bitcoin Culture" in China?](https://stacker.news/items/210392/r/Undisciplined)** by @nullcount in ~bitcoin
----
### 🏆 `TOP STACKER`
2nd place **@nerd2ninja** (1st hiding, almost certainly @siggy47)
*1038 stacked \ 3620 spent \ 0 posts \ 20 comments \ 0 referrals*
----
### 🗺️ `TOP TERRITORY`
**~bitcoin**
> everything bitcoin related
founded by @k00b on Tue May 02 2023
*39.4k stacked \ 0 revenue \ 49.5k spent \ 99 posts \ 308 comments*
https://imgprxy.stacker.news/fsFoWlgwKYsk5mxx2ijgqU8fg04I_2zA_D28t_grR74/rs:fit:960:540/aHR0cHM6Ly9tLnN0YWNrZXIubmV3cy8yMzc5Ng
### July 17, 2022 📅
----
### 📝 `TOP POST`
**[Ask SN: How to use Bitcoin if it gets banned](https://stacker.news/items/46266/r/Undisciplined)**
#### Excerpt
> How should somepony use Bitcoin if its gets entirely banned?
Is LN practicable anymore then? i mean Smartphones and PC's would get hacked and Wallets drained (Canada)?
*199 sats \ 31 comments \ @pony \ ~bitcoin*
----
### 💬 `TOP COMMENT`
**https://stacker.news/items/46223/r/Undisciplined?commentId=46329**
#### Excerpt
> My paper got accepted for a conference!
*362 sats \ 1 reply \ @random_*
From **[Daily discussion thread](https://stacker.news/items/46223/r/Undisciplined)** by @saloon in ~null
----
### 🏆 `TOP STACKER`
1st place **@k00b**
*15.5k stacked \ 15.1k spent \ 0 posts \ 9 comments \ 0 referrals*
----
### 🗺️ `TOP TERRITORY`
**~bitcoin**
> everything bitcoin related
founded by @k00b on Tue May 02 2023
*20.4k stacked \ 0 revenue \ 29k spent \ 66 posts \ 233 comments*
https://imgprxy.stacker.news/fsFoWlgwKYsk5mxx2ijgqU8fg04I_2zA_D28t_grR74/rs:fit:960:540/aHR0cHM6Ly9tLnN0YWNrZXIubmV3cy8yMzc5Ng
### July 17, 2021 📅
----
### 📝 `TOP POST`
**[How the U.S. became the world's new bitcoin mining hub](https://stacker.news/items/381/r/Undisciplined)**
Link to https://www.cnbc.com/2021/07/17/bitcoin-miners-moving-to-us-carbon-footprint.html
*4 sats \ 2 comments \ @shawnyeager \ ~bitcoin*
----
### 💬 `TOP COMMENT`
**https://stacker.news/items/373/r/Undisciplined?commentId=375**
#### Excerpt
> The main point: you commit outbound liquidity and you get inbound liquidity from someone in your ring. It's a nice hack on liquidity trying to solve the same problem dual-funded channels or Pool solves.
*1 sat \ 1 reply \ @sha256*
From **[lightning liquidity via a ring of fire introduction service](https://stacker.news/items/373/r/Undisciplined)** by @el_zonte in ~bitcoin
----
### 🏆 `TOP STACKER`
No top stacker
----
### 🗺️ `TOP TERRITORY`
**~bitcoin**
> everything bitcoin related
founded by @k00b on Tue May 02 2023
*12 stacked \ 0 revenue \ 31 spent \ 4 posts \ 7 comments*
originally posted at https://stacker.news/items/611566
-
**Appalachia has been a hot topic lately, so let me talk about my four years living there**
![Appalachia has been a hot topic lately, so let me talk about my four years living there](https://external-preview.redd.it/ItLi-EUtZFJpJpy5WXAaqoOU02NfQj0O9f3Zm5NrRcY.jpg?width=320&crop=smart&auto=webp&s=65f1d6a6bfd12b88732a8e3dddf8b755d28f97e4) (https://www.reddit.com/r/China_irl/comments/1e50743/阿巴拉契亚的话题最近很火_我也来谈谈我在那里的四年生活/)
As the title says. I came to the US at 15 for high school. My school was in the Appalachian Mountains, billed as the oldest prep school west of the Allegheny Mountains and the best private high school in the state; the school even ran a dedicated bus route that crossed state lines every day, forty minutes each way, to pick up day students from other cities. If you look at the reviews on Niche or Reddit, locals all say it's academically challenging with great college placement. I'll leave the specifics aside. This post isn't about the cultural problems common to American high schools; it's meant to be an outsider's observation of the mental and material state of residents of the so-called Rust Belt.
When I first arrived, the city where my school was located still ranked third in the state by population; I just checked and it's already fourth (https://preview.redd.it/qdfq6iflgxcd1.png?width=380&format=png&auto=webp&s=6e06e8545b71eed4cfe38faaef6b7145feb3d20e)
At the end of the 19th century, the city was the richest in America per capita, but downtown today all you can see are the ruins of factories of every kind left over from one or two hundred years ago. With the decline of coal and heavy industry, the city's employment has settled into a curious equilibrium across service sectors: education and health make up 19% of GDP, government 15%, trade, transportation and utilities 21%, leisure and hospitality 12%. It really is a future outlook for China's resource-depleted cities, too. Numbers like these also meant the city's unemployment rate hit nearly 16% during the pandemic, while the latest figure is only 3.7%.
The most recent downtown street view, from October 2023. Thanks to Biden's Building a Better America grant, the roads are being repaved (https://preview.redd.it/wnu9molznxcd1.png?width=1196&format=png&auto=webp&s=2e2196580945ff8549bf9e2f6bcd8d33d9be2eb9)
An abandoned factory (https://preview.redd.it/o44t4hl7nxcd1.png?width=2544&format=png&auto=webp&s=8c801c1cb7d47613300b28387f2c2c884b1bdeff)
Back on campus: because the school's share of white students was so high, as a private school it maintained for years the tradition of employing exactly one Black teacher, even though the locals genuinely didn't care. Likewise, the school paid full scholarships for international students from the Bahamas to attend; we Chinese students jokingly called it "importing Black people." Of course, all of this was just for show. Nobody really cared about diversity, and nobody cared about the world outside their own bubble. Even though the students' parents were all accountants, lawyers, and doctors, and one local classmate of mine even drove the then-newest S500 to school, my classmates' range of activity was still confined to the tri-state area: vacations meant Virginia Beach, and shopping was done locally. Even with day-student tuition at twenty thousand dollars a year, only a handful had ever been to DC or New York, let alone California, Europe, or anywhere farther.
These so-called upper-middle-class Appalachian residents cling to their American-small-town-centric way of life, to say nothing of the local whites without such privileges, the J.D. Vances of the world. Ignorance, aimlessness, and arrogance are the local specialty. Being asked at school whether China has DQ was routine; being told in the supermarket "you are in America so speak English" was even more routine. The redneck style of traditional discipline felt downright surreal in the 21st century: the boarding director would turn a blind eye to European students smoking weed, yet slam the head of a Chinese student who went out fishing without permission against the wall and tell him to go back to China; students who were late in the morning were forced to haul the school's trash cans to the river behind campus and dump them in, the point being to get yelled at by the supermarket shoppers across the river so the lesson would "stick."
As a former military academy, to keep tradition alive, every freshman class was sent for a week before Thanksgiving to an outdoor center with one bar of cell signal for team building: climbing trees, rock climbing, rucking through the mountains, sitting on piles of fallen leaves writing journal entries, and so on. Naturally, phones were confiscated every day; every two days, at a fixed time, you got fifteen minutes next to the dining hall to send your family a message. When Trump was elected in November 2016, we students huddled around the radio in the common room and, through audio quality like this (https://www.youtube.com/watch?v=kkc5ApocI7Q&ab_channel=Serj), heard the news of Trump's victory. I can't describe the scene when the local students heard the news and started cheering and dancing; by shower time that evening you could see the football team's alpha males chest-bumping in the shower room wrapped in towels. Only one of the two Black kids in our grade (who, being as unwelcome among the "mainstream students" as I was, gave me an n-word pass) grieved quietly alone.
In Human Geo class junior year, we got the chance to video-call one of our state's senators. As the only Chinese student in the class, I was drafted by the teacher to play a part and ask him his views on China, so I stood awkwardly and miserably in front of the camera getting a solid ten minutes of "China bad." Other classes, like Environmental Science or Contemporary Issues, were full of conspiracy theories about the UN forcing Americans to stop drinking milk and restricting where Americans are allowed to live. Btw, that old white male teacher graduated from UPENN Chemistry, did so poorly that he ended up doing a PhD in Iowa, and came to teach high school.
Finally, senior year: for the best high school in the state, setting the Chinese students aside, out of a class of forty or fifty only a single-digit number of local students truly stepped beyond the so-called tri-state area, and the schools they went to were the likes of USC or liberal arts colleges. Most of the rest stayed in the small city, getting a degree at one of the colleges within a few dozen miles of the school or at the state university. Like J.D. Vance, my best local white friend chose to enlist in the Marine Corps, and we lost touch after that. That friend, of course, was also one of the school's bullying targets, because rumor had it he was an abandoned Russian baby who had been adopted.
So, tl;dr: the actual living conditions in the Appalachian region resemble those of China's many resource-depleted cities, but most residents remain set in their ways, harboring resentment at life getting steadily harder while firmly holding on to their traditional ideas about education and life. What sets them apart from the more typical traditional American small town is that, with the surrounding economy in continuous decline, these Appalachian residents' way of life is effectively unsustainable, a dead end. But just as with J.D. Vance, those with both the ability and the genuine will to step out of the bubble and shatter the illusion are only a tiny few.
submitted by /u/Leading\_Software\_163 (https://www.reddit.com/user/Leading_Software_163)
\[link\] (https://www.reddit.com/r/China_irl/comments/1e50743/阿巴拉契亚的话题最近很火_我也来谈谈我在那里的四年生活/)\[comments\] (https://www.reddit.com/r/China_irl/comments/1e50743/阿巴拉契亚的话题最近很火_我也来谈谈我在那里的四年生活/)
#China_irl
https://www.reddit.com/r/China_irl/comments/1e50743/阿巴拉契亚的话题最近很火_我也来谈谈我在那里的四年生活/
-
Here are 5 of mine:
1. Choosing not to complete a four-year degree.
2. Choosing to start a serious relationship with my girlfriend.
3. Choosing to learn everything I could about Bitcoin and choosing to save in it.
4. Choosing to quit my job and go all in on getting the absolute most out of my athletic career.
5. Choosing to travel to Thailand.
What are some of the most significant choices you made during your 20s?
originally posted at https://stacker.news/items/610740
-
*Yesterday's edition https://stacker.news/items/609082/r/Undisciplined*
Don't miss @siggy47's [Golden Oldies](https://stacker.news/items/610202/r/Undisciplined), today.
Financial housekeeping: I'm currently zapping the top posts and comments 42 sats each, while forwarding 10% to the top stackers, which tallies up to about 600 sats per post including the 202 for posting in ~meta. So far, This Day In has been roughly breaking even at those levels. If zaps increase on these posts, then I'll increase the amount going out to the OPs.
* * *
### July 16, 2023 📅
----
### 📝 `TOP POST`
**[What's wrong with you? ](https://stacker.news/items/209703/r/Undisciplined)**
No additional context. Let us know in the comments.
*1542 sats \ 63 comments \ @andrews \ ~meta*
----
### 💬 `TOP COMMENT`
**https://stacker.news/items/209703/r/Undisciplined?commentId=209797**
#### Excerpt
> Buried in grief from all the recent death in my life.
*419 sats \ 7 replies \ @elvismercury*
From **[What's wrong with you? ](https://stacker.news/items/209703/r/Undisciplined)** by @andrews in ~meta
----
### 🏆 `TOP STACKER`
1st place **@k00b**
*5035 stacked \ 5150 spent \ 1 post \ 15 comments \ 0 referrals*
----
### 🗺️ `TOP TERRITORY`
**~bitcoin**
> everything bitcoin related
founded by @k00b on Tue May 02 2023
*26.3k stacked \ 0 revenue \ 29.6k spent \ 70 posts \ 151 comments*
https://imgprxy.stacker.news/fsFoWlgwKYsk5mxx2ijgqU8fg04I_2zA_D28t_grR74/rs:fit:960:540/aHR0cHM6Ly9tLnN0YWNrZXIubmV3cy8yMzc5Ng
### July 16, 2022 📅
----
### 📝 `TOP POST`
**[My life should not revolve around Bitcoin](https://stacker.news/items/45962/r/Undisciplined)**
#### Excerpt
> I can't stop thinking about Bitcoin while it is made so that I shouldn't have to think about it anymore. I waste my time listening to Bitcoin podcasts and watch Bitcoin Youtube videos and procrastinate an a Bitcoin social media. Why is that?
*1097 sats \ 51 comments \ @tomlaies \ ~bitcoin*
----
### 💬 `TOP COMMENT`
**https://stacker.news/items/45962/r/Undisciplined?commentId=46014**
#### Excerpt
> It becomes all encompassing. Listen to some music again. After about 6 months of non stop podcast when I went back to just listening to music in the car,it was glorious, like finding the music all over again.
*430 sats \ 4 replies \ @falsefaucet*
From **[My life should not revolve around Bitcoin](https://stacker.news/items/45962/r/Undisciplined)** by @tomlaies in ~bitcoin
----
### 🏆 `TOP STACKER`
1st place **@k00b**
*15.3k stacked \ 20.9k spent \ 2 posts \ 25 comments \ 0 referrals*
----
### 🗺️ `TOP TERRITORY`
**~bitcoin**
> everything bitcoin related
founded by @k00b on Tue May 02 2023
*27.3k stacked \ 0 revenue \ 29.6k spent \ 80 posts \ 203 comments*
https://imgprxy.stacker.news/fsFoWlgwKYsk5mxx2ijgqU8fg04I_2zA_D28t_grR74/rs:fit:960:540/aHR0cHM6Ly9tLnN0YWNrZXIubmV3cy8yMzc5Ng
### July 16, 2021 📅
----
### 📝 `TOP POST`
**[Jack Dorsey Announces New Square Division To Build DeFi on Bitcoin](https://stacker.news/items/357/r/Undisciplined)**
Link to https://www.thestreet.com/crypto/bitcoin/jack-dorsey-announces-new-square-division-to-build-defi-on-bitcoin
*5 sats \ 8 comments \ @shawnyeager \ ~bitcoin*
----
### 💬 `TOP COMMENT`
**https://stacker.news/items/349/r/Undisciplined?commentId=371**
#### Excerpt
> Adding a math plugin to my markdown editor is on my todo list ... so I can show rather than tell what the ranking algo is among other things. […]
*10 sats \ 0 replies \ @k00b*
From **[Frequently Asked Questions](https://stacker.news/items/349/r/Undisciplined)** by @sn in ~bitcoin
----
### 🏆 `TOP STACKER`
1st place **@k00b**
*14 stacked \ 30 spent \ 1 post \ 8 comments \ 0 referrals*
----
### 🗺️ `TOP TERRITORY`
**~bitcoin**
> everything bitcoin related
founded by @k00b on Tue May 02 2023
*39 stacked \ 0 revenue \ 84 spent \ 4 posts \ 15 comments*
originally posted at https://stacker.news/items/610267
-
This is going to be a sort of long one (or maybe it won't?), because I have a lot of thoughts as a tech and social media enthusiast, and some of those thoughts have to do with how I feel about the direction that social media is moving in. How I feel about Mastodon, and Nostr, side-by-side, and what the biggest problems with Bluesky are.
So, let's jump right the heck in ...
_**Nostr, the misconceptions, and the truth**_
I recently wrote about [Nostr](https://cmdr-nova.online/2024/07/11/nostr-the-strangest-and-clunkiest-twitter-replacement/), and its relayed protocol of user-owned identity that you can take ... wherever. I outlined a lot of thoughts and impressions I initially had, and then what I wrote went to Reddit, and then it found itself on Nostr. It got there entirely outside my own involvement. I posted to Mastodon, and almost nowhere else.
This inspired me to log back in, and set some things up (such as domain verification from one of a few domains I own), and then I explored a bit. I interacted with people, participated in some community events that came up spontaneously, and really dug into the extreme multitude of features that run across the Nostr network.
Let's just say, I was pretty floored, and _some_ of the impressions I had were wrong. Such as thinking that a place that is more centered around the idea of lacking censorship, or robust moderation, _must_ be filled with toxic, horrible trolls. In the couple of days I've been messing with the network, I think I've muted like one person who said some off-the-wall shit in my notifications.
But ... I think Nostr has nasty people just in the same way Twitter, Threads, and Mastodon do. They exist, and they always will, no matter where you go.
Suffice it to say, I learned a lot about what decentralization was, and now is. I was given [an article](https://shreyanjain.net/2024/07/05/nostr-and-atproto.html) by user nostr:npub1njst6azswskk5gp3ns8r6nr8nj0qg65acu8gaa2u9yz7yszjxs9s6k7fqx that gave some really in-depth information about the emergence of decentralization: Scuttlebutt, ActivityPub, and then ATProtocol and Nostr.
I'm not going to lie, I originally started writing this as a hit piece against Bluesky, thinking their ATProtocol was just a riff on what Nostr was doing. But, _apparently_, both ATP and Nostr were developed independently of each other, and mostly without any knowledge of one another. I think that's ... actually pretty wild, and strange.
On the topic of decentralization, which is something I feel is integral to the future of the internet, I now understand Mastodon to be a place of islands, and decentralization that occurs in a way that's more like isolated communities talking to other isolated communities. Like the latter half of The Walking Dead.
In a way, it's decentralization, the half-measure. The full-measure, that comes with some iffy trade-offs some may not like, is Nostr, and ATProtocol.
You take your identity, your thoughts, your posts, and you move freely between pieces of software, and networks, and you lose _nothing_ (this is nearly the direct opposite of Mastodon, where moving to a new server means burning everything you've ever posted to the ground). And, honestly, I'm kind of starting to feel like that's _how it should be_. The downside is that, on Nostr, you have a public key and a secret key. Your secret key is what you use to log in and sign events coming from your account, and your public key is basically your identity. That's not the iffy part, though. The iffy part is that people can use your public key to see all of your data _except_ direct messages (which are encrypted).
Not entirely _too_ scary, unless you're doing a lot of weird things on your account. But it's definitely something you should know if you decide that this is a journey you want to take, and you're not jaded from hearing about how much Jack Dorsey loved Nostr and its Bitcoin affiliation (a lot of people across different platforms hold a lot of dislike for the man).
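Since the whole identity model hangs off that keypair, it's worth seeing how little machinery is involved. Here's a minimal sketch of how a note's id is derived under NIP-01; the actual Schnorr signature over that id needs a secp256k1 library, so it's omitted, and the pubkey below is a dummy value:

```python
# Sketch of a Nostr event id per NIP-01: the sha256 of the canonical
# JSON array [0, pubkey, created_at, kind, tags, content].
# Signing the id (Schnorr over secp256k1) is omitted here.
import hashlib
import json
import time

def event_id(pubkey_hex: str, created_at: int, kind: int,
             tags: list, content: str) -> str:
    serialized = json.dumps(
        [0, pubkey_hex, created_at, kind, tags, content],
        separators=(",", ":"),  # no whitespace, per the spec
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# A hypothetical kind-1 text note:
print(event_id("ab" * 32, int(time.time()), 1, [], "hello, nostr"))
```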
_At this point, I'm less worried about the power consumption of BTC transactions, and have more shifted that focus to content farms from the likes of Microsoft and Nvidia buying up all the AI tech they can get their grubby little hands on_.
That aside, I'm not writing this to blow smoke up the Bluesky developers' butts. I am, in fact, not a fan of Bluesky, and there are some specific reasons for that. Maybe this spells their downfall, or maybe they'll be a tight-knit community that doesn't really expand all that much, _forever_.
_**Bluesky, the Apple of social media**_
Bluesky is a place that a couple million people call home (I think, last I saw, it was around 6 million or so). There's no algorithm, and much like Nostr, you own your identity. Except, for now, that's _mostly_ tied to the website's central server.
Now, obviously, there are far fewer people populating Nostr, but Nostr and its relays can be, and _are_, connected to both ActivityPub and Bluesky (just not through ATProtocol).
Most of what you'll see on bsky.app is quite a few furries, an actually impressive population of Second Life users, and _quite a lot_ of LGBTQ+ people. None of these things or communities are inherently bad. In fact, I think they're probably the _only_ reason Bluesky is really alive at all, today.
My angst and negative feelings about the direction of Bluesky have nothing to do with the LGBTQ+ community, or any other community residing on the platform.
The issues I mainly hold have to do with how far up their own asses the board and developers are, in regard to the platform, and its development over the past year or so. This is why I _kind of_ think of them as the Apple of social media. And you might think, "Hey, don't you _own_ like a billion pieces of Apple tech?"
Yeah, I do.
But this is more like if Apple skipped over having Steve Jobs and went straight to some random guy who didn't know what he was doing. You know, like putting up a wall and locking out all potential users _for a year_, keeping all new sign-ups under lock and key via exclusive codes. As you might imagine, keeping that walled garden erected through six or seven different events where people were leaving Twitter in droves _very likely_ worked _against_ the social network's best interest.
Mainly, because Threads came out of absolutely nowhere, and sucked up _most_ of those users.
That's only _half_ of the issue, though. The other part of all this is that the developers I see directly _on_ Bluesky do not recognize or acknowledge this _at all_. Paul Frazee, whose influence goes back quite a bit further than Bluesky, posts as though it's the greatest platform _ever made_ (maybe just because he's a developer for Bluesky). But the website, despite its six million some-odd users, _feels_ almost completely dead.
Which is _ridiculous_, because, as I've said, Nostr has far less users than that, and it most definitely doesn't feel dead when you post.
Not to mention, we're over a year into Bluesky, and it still, more or less, is propped up to look, feel, and act just like Twitter did in 2014. ATProtocol, in this respect, still feels mostly like an afterthought that's inaccessible to most users.
Meanwhile, if you _really_ like Nostr, you can get going with your own chunk of the network immediately, [with about a page](https://docs.soapbox.pub/ditto/install) of install commands.
Between the grandiosity that emanates from Bluesky and the blunder of keeping their doors closed through one exodus after another, I think they've shot themselves in the feet so many times that they now don't have feet. They have stumps.
But, if you've followed me all the way through this article, I _think_ there's a way they can blow the doors open. To do that, though, they'd have to sort of abandon their idea of the ATProtocol, and stop trying to be the center of social media, which they most definitely are not, and probably never will be. And, really, that's the final issue I have with Bluesky and the ATProtocol.
It _feels_ like they're trying to do Nostr, but be corporate-owned. Bluesky doesn't feel like it's owned by the people, developed by the people, and run by the people. It feels like it's run by some suits, who give the impression that the people will have their freedom, _as long as they say it's okay_.
And that, my buddy ol' pal, just ain't okay.
_**The elephant and the ostrich**_
Which brings us squarely back to Mastodon and Nostr. Both platforms have their own merits. Nostr is about controlling your own content, and what you see, and it largely eschews censorship to a high degree. But everyone can see almost everything you do and are talking about, whenever they want. Mastodon, on the other hand, puts the tools into a user's hands to _create_ a network, and build their own communities, while picking and choosing _who_ those communities interact with. A network that ... encourages users to police everyone around them ... which is how we end up with tyrannical admins acting like they work in a prison and have just been promoted to Warden.
If only there were some way to take the best parts of both of these animals, and make them _one_.
An elephrich.
For now, though, I am at least pretty content to screw around with both while I feel around and see what sticks. In the age of corporate control and censorship so heavy that people can't even say "kill" or "suicide" anymore (these are only two of the most egregious examples I can think of), I think it's healthy to explore your options, and cement your identity, and who you are online, before everything else we've come to know is lost _completely_.
If that's something you care about, at least.