Vitalik's New Article: Decentralization Accelerationism One-Year Retrospective and Outlook

By: blockbeats|2025/01/06 03:30:04
Original Title: d/acc: one year later
Original Author: Vitalik Buterin, Ethereum Founder
Original Translation: Leek, Foresight News

Abstract: The article centers on the concept of decentralized acceleration (d/acc): its application to technological development and the challenges it faces, including AI safety and regulation, its relationship with cryptocurrency, and public goods funding. It explains the essence of d/acc, analyzes strategies for addressing AI risk, discusses the value of cryptocurrency within the framework, and explores mechanisms for funding public goods. It closes with an outlook on technological development, acknowledging the challenges while emphasizing humanity's opportunity to use existing tools and ideas to build a better world.

Foreword

Special thanks to Liraz Siri, Janine Leger, and the Balvi volunteers for their feedback and review.

Approximately one year ago, I wrote an article about techno-optimism, laying out my overall enthusiasm for technology and the enormous benefits it can bring. At the same time, I expressed caution about some specific issues, mainly superintelligent AI and the risk of catastrophe, or of irreversible human disempowerment, if the technology is built the wrong way.

One core idea in that article was a philosophy: decentralized and democratic, differential defensive acceleration. Accelerate technology, but differentially focus on technologies that improve our ability to defend rather than our ability to destroy, and on technologies that distribute power rather than concentrating it in the hands of a few elites who would decide good and evil on everyone's behalf. Defense should look like democratic Switzerland and historically quasi-anarchist Zomia, not like the lords and castles of medieval feudalism.

Over the past year, these ideas have developed and matured considerably. I discussed them on 80,000 Hours, the career-choice platform, and received many responses, most of them positive, some critical.

The work itself has continued to advance with tangible results: progress on verifiable open-source vaccines; deepening public understanding of the value of healthy indoor air; "Community Notes" continuing to play a positive role; a breakthrough year for prediction markets as an information tool; ZK-SNARKs (zero-knowledge succinct non-interactive arguments of knowledge) applied to government identity and social media, and securing Ethereum wallets via account abstraction; open-source imaging tools applied in medicine and brain-computer interfaces (BCI); and more.

Last fall, we welcomed the first major d/acc event: "d/acc Discovery Day" (d/aDDy) at Devcon, a full-day gathering of speakers from all the d/acc pillars (biological, physical, cyber, and information defense, plus neurotechnology). People who had spent years building these technologies became more aware of one another's work, and outsiders became more aware of the larger vision: the same values that drive Ethereum and cryptocurrency can extend to the wider world.


The Essence and Extent of d/acc

Fast forward to the year 2042. You see a news report: a new epidemic may be brewing in your city. You have grown used to such news: people tend to overreact to every new animal disease mutation, and most of the time nothing comes of it. The two previous potential outbreaks were both detected early through wastewater monitoring and open-source analysis of social media, and were contained in their infancy. This time, however, prediction markets are showing a 60% probability of at least 10,000 cases, and that makes you uneasy.

Just yesterday, the virus's genetic sequence was identified. A software update for the air-testing device in your pocket was promptly released, letting it detect the new virus, either from a single breath or after 15 minutes of exposure to indoor air. Meanwhile, open-source instructions and code for producing a vaccine using equipment available in any modern medical facility are expected within weeks. Most people have taken no action yet, relying mainly on widely adopted air filtration and ventilation to protect them.

Because of your own immune issues, you are more cautious: your open-source, locally running personal assistant AI, alongside its usual navigation, restaurant, and activity recommendations, factors in real-time air-testing and CO2 data, and suggests only the safest places. The data is contributed by thousands of participants and devices, protected with ZK-SNARKs and differential privacy to minimize the risk of leakage or misuse for other purposes (and if you choose to contribute to these datasets, other personal assistant AIs will verify that these cryptographic safeguards actually work).

Two months later, the outbreak has vanished: it seems 60% of people followed basic protocols, wearing masks when their air tester signaled the virus's presence and isolating at home after a positive personal test. That alone was enough to push the transmission rate, already greatly reduced by passive air filtration and ventilation, below 1. Simulations suggested the disease could have been five times worse than COVID-19 twenty years earlier; today, it proved a non-event.
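Stepping outside the vignette for a moment: the privacy layer it imagines is built from techniques that already exist. Below is a minimal sketch of one of them, the standard Laplace mechanism from differential privacy, applied to a hypothetical aggregate of air-sensor readings; all data and parameters are invented for illustration, and real deployments would combine this with the ZK verification mentioned above.

```python
# Minimal sketch of the Laplace mechanism from differential privacy, the
# kind of protection the vignette imagines for contributed sensor data.
# All data and parameters here are hypothetical.
import math
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponentials is Laplace(0, scale).
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_mean(readings: list[float], epsilon: float, max_reading: float) -> float:
    """Epsilon-differentially-private mean of bounded sensor readings.

    Changing any one contributor's reading moves the true mean by at most
    max_reading / n, so Laplace noise with scale (max_reading / n) / epsilon
    masks any individual's contribution.
    """
    n = len(readings)
    sensitivity = max_reading / n
    return sum(readings) / n + laplace_noise(sensitivity / epsilon)

# Hypothetical per-device virus-particle readings, bounded to [0, 2]:
readings = [0.2, 0.9, 0.4, 1.3, 0.7]
print(private_mean(readings, epsilon=0.5, max_reading=2.0))
```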

Devcon's d/acc Day

The d/acc day at Devcon was a great success: the d/acc framing brought together people from many different fields and genuinely sparked their interest in each other's work.

Hosting an event with "diversity" is not difficult; getting people of different backgrounds and interests to genuinely connect is very hard. I still vividly remember being made to sit through long operas in middle and high school and finding them personally dull. I knew I was "supposed to" appreciate them, or be exposed as a culturally ignorant computer-science slacker, but the content never resonated with me on any deeper level. The atmosphere at d/acc Day was completely different: people seemed to genuinely enjoy learning about the wide variety of work in fields other than their own.

If we want to build a brighter future than the one offered by domination, deceleration, and destruction, we will need this kind of broad coalition-building. d/acc seems to be making real progress here, and that alone demonstrates the value of the concept.

The core idea of d/acc is simple: decentralized and democratic, differential defensive acceleration. Build technologies that tilt the offense/defense balance toward defense, and do so without relying on central authorities to implement them. The two are intrinsically linked: any decentralized, democratic, or liberal political structure tends to thrive when defense is easy and to struggle when defense is hard; in the hard case, the more likely outcome is a period of war of all against all, and eventually an equilibrium of rule by the strongest.

One way to understand the importance of attempting to achieve decentralization, defensiveness, and acceleration simultaneously is to contrast it with the idea of giving up one of these three aspects.

Chart from last year's "My Technological Optimism"

Decentralized Acceleration, but Neglecting the "Differential Defense" Part

Essentially, this is being an effective accelerationist (e/acc) who also wants decentralization. Many people take this approach; some describe themselves as d/acc but, helpfully, label their focus as "offense". Many others show a milder enthusiasm for topics like "decentralized AI", but in my view pay conspicuously little attention to the "defense" part.

In my view, this approach may avoid the risk of a specific group imposing a dictatorship on humanity as a whole, but it fails to fix the underlying structural problem: in an offense-favoring environment, there is a continuous risk of catastrophe, or someone may position himself as protector and permanently entrench his rule. In the case of AI in particular, it also does little to address the risk of humanity as a whole losing power relative to AI.

Differential Defensive Acceleration, but Ignoring "Decentralization and Democracy"

Accepting central control in order to achieve safety has enduring appeal for some people, and readers will be familiar with many such examples and their downsides. Recently, some have worried that extreme central control may be the only answer to extreme future technologies. Consider, for example, a hypothetical world where "everyone wears a 'freedom tag', a successor to today's more limited wearable surveillance devices, like the ankle tags several countries use as alternatives to prison... encrypted video and audio is continuously uploaded and machine-interpreted in real time." But central control has its own sliding scale of problems. A relatively mild, often overlooked, but still harmful form of it shows up in the biotech field (e.g., food, vaccines) resisting public scrutiny, and in the closed-source norms that let that resistance go unchallenged.

The risks of this approach are obvious: the center itself often becomes the source of the risk. We saw this during COVID-19, where gain-of-function research funded by multiple major world governments may well have been the origin of the pandemic, centralized epistemology led the World Health Organization to refuse for years to acknowledge that COVID is airborne, and mandatory social distancing and vaccine mandates triggered political backlash that may last for decades. Something similar is all too likely to happen again with any risk related to AI or other dangerous technologies. A decentralized approach would better address risks coming from the center itself.

Decentralized Defense, but Rejecting Acceleration

Essentially, this is the attempt to slow down technological progress or push economic degrowth.

This strategy faces a dual challenge. First, technological and economic growth have, on the whole, been enormously good for humanity, and any delay carries immeasurable costs. Second, in a non-authoritarian world, stagnation is unstable: whoever "cheats" the most, finding plausible ways to keep developing, gains the advantage. Decelerationist strategies can work to some extent in limited contexts: European food being healthier than American food is one example; the success so far of nuclear non-proliferation is another. But they cannot work forever.

Through d/acc, we are committed to achieving the following goals:

· In the face of today's increasingly tribal world, hold to principle rather than blindly building for building's sake: build specific things that make the world safer and better.

· Recognize that exponential technological progress means the world will get very strange, and that humanity's total "footprint" in the universe can only grow. Our ability to protect vulnerable animals, plants, and people from harm must keep improving, and the only way out is through.

· Build technologies that actually protect us, rather than resting on the assumption that "the good guys (or the benevolent AIs) are in charge". We do this by building tools that are naturally more effective for building and protecting than for destroying.

Another way to think about d/acc is to return to a framework from the late 2000s European Pirate Party movement: empowerment.

Our goal is to build a world that preserves human agency: negative liberty, by preventing others (whether ordinary citizens, governments, or superintelligent machines) from actively interfering with our ability to shape our own destiny, and positive liberty, by ensuring we have the knowledge and resources to exercise that ability. This echoes a classical liberal tradition spanning centuries, encompassing Stewart Brand's focus on "access to tools," John Stuart Mill's emphasis on education alongside liberty as key to human progress, and perhaps adding Buckminster Fuller's vision of a globally participatory, widely distributed process of solving the world's problems. Given the technological landscape of the 21st century, we can see d/acc as a way to achieve these same goals.

The Third Dimension: The Coevolution of Surviving and Thriving

In last year's article, d/acc focused specifically on defensive technologies: physical, biological, cyber, and information defense. But decentralized defense alone is not enough to build a great world: we also need a forward-looking positive vision of what humanity can pursue once it attains new levels of decentralization and safety.

Last year's article did indeed contain a positive vision in two respects:

1. In addressing the challenge of superintelligence, I proposed a path (not original to me) by which we could get superintelligence without losing our agency:

· Today: build AI as tools, not as highly autonomous agents.

· In the future: use tools such as virtual reality, electromyography, and brain-computer interfaces to create ever-tighter feedback loops between AI and humans.

· Over time: move toward an endgame in which superintelligence is a tightly coupled combination of machines and humans.

2. When discussing information defense, I also mentioned that besides defensive social technologies that help communities stay cohesive and hold high-quality discussion under attack, there are progressive social technologies that help communities make high-quality judgments more easily: Pol.is is one example, prediction markets another.

But at the time, those two points felt disconnected from d/acc's core argument: "here are some ideas about building a more democratic, defense-favoring world at the base layer; oh, and by the way, here are some unrelated ideas about how we get superintelligence."

However, I believe that in reality there are crucial connections between the so-called "defensive" and "progressive" d/acc technologies. Let's extend the d/acc chart from last year's article by adding this axis (relabeled "survive and thrive") and see what comes out:

There is a consistent pattern across various fields, where the science, ideas, and tools that help us "survive" in a field are closely related to those that empower us to "thrive." Here are some specific examples:

· Much recent COVID-19 research has focused on viral persistence in the body as a key mechanism of long COVID. There are also signs that viral persistence could be a causal factor in Alzheimer's disease; if that holds, solving viral persistence across all tissue types may turn out to be key to tackling aging as well.

· Low-cost and miniaturized imaging tools, like those Openwater is developing, hold great promise against microthrombosis, viral persistence, and cancer, and can also be applied to brain-computer interfaces.

· Building social tools for highly adversarial environments (like Community Notes) and social tools for reasonably cooperative environments (like Pol.is) draws on strikingly similar ideas.

· Prediction markets are valuable in both highly cooperative and highly adversarial environments.

· Zero-knowledge proofs and similar technologies allow computation over data while protecting privacy, increasing both the volume of data available for beneficial work like scientific research and the privacy that data enjoys.

· Solar power and batteries are of extraordinary significance for driving the next wave of clean economic growth, while also demonstrating excellent performance in decentralization and physical resilience.

In addition, there are important interdependencies between different disciplinary areas:

· Brain-computer interfaces are crucial as an information-defense and collaboration technology, because they allow far finer-grained communication of our thoughts and intentions. BCI is not just machine-to-mind: it can also be mind-to-machine-to-mind. This is part of what makes it valuable to the pluralism d/acc aims for.

· Many biotechnologies depend on information sharing, and in many cases people will share information only if they are confident it will be used solely for a specific application. That depends on privacy technologies (such as zero-knowledge proofs, fully homomorphic encryption, and obfuscation).

· Collaboration technologies can be used to coordinate funding for any other technological domain.

Challenge: AI Security, Urgent Timeline, and Regulatory Dilemma

Different people have vastly different timelines for artificial intelligence. Chart from Zuzalu, Montenegro, 2023.

Last year, the most persuasive pushback my article received came from the AI safety community. Their argument: "Sure, if we had half a century to get to strong AI, we could focus on building all these good things. But in reality, it looks like we may have three years to AGI and another three to superintelligence. So if we don't want the world to plunge into destruction or otherwise fall into an irreversible trap, we can't just accelerate the good; we also have to slow down the bad, and that means heavy-handed regulation that will anger powerful people." In last year's article, apart from vaguely calling for not building risky forms of superintelligence, I offered no concrete strategy for "slowing down the bad." So it is worth confronting the question directly: if we were in the least convenient world, with very high AI risk and a timeline of perhaps only five years, what regulation would I support?

Reasons for Cautious Approach to New Regulation

Last year, the primary proposal for AI regulation was California's SB-1047. SB-1047 required developers of the most powerful models (those costing over $100 million to train, or over $10 million to fine-tune) to take a series of safety-testing measures before release, and held AI developers liable if they were insufficiently careful. Many critics said the bill was "a threat to open source"; I disagreed, because the cost threshold meant it only affected the most powerful models: even Llama3 probably fell below it. In retrospect, though, I believe the bill had a more serious problem: like most regulation, it overfit to the present. The focus on training cost has already proven fragile in the face of new technology: the recent state-of-the-art DeepSeek v3 cost only $6 million to train, and in newer models like o1, cost is shifting from training to inference.

The Actor Most Likely to Cause a Superintelligent AI Doomsday

In fact, the actor most likely to bring about a superintelligent AI doomsday is a military. As the past half-century of biosecurity (and more) has shown, militaries are willing to do terrible things, and they make mistakes readily. Military applications of AI are advancing rapidly (see Ukraine and Gaza). And any safety regulation a government passes will, by default, exempt its own military and the companies that work closely with it.

Response Strategies

Nevertheless, these arguments are not a reason to throw up our hands. Rather, we can treat them as guidance and try to craft rules that raise these concerns as little as possible.

Strategy 1: Accountability

If someone's actions cause legally actionable harm, they can be sued. This does not solve the problem of risk from militaries and other "above the law" actors, but it is a very general approach that avoids overfitting, which is why libertarian-leaning economists tend to favor it.

The major liability targets considered so far are as follows:

· User: the person using the AI.

· Deployer: the intermediary providing the AI service to the user.

· Developer: the person building the AI.

Assigning liability to the user seems to best match the incentives. While the link between how a model was developed and how it ends up being used is often unclear, it is the user who decides exactly how the AI is used. Holding users liable creates strong pressure to use AI in what I consider the right way: building a mech suit for the human mind, not creating new self-sustaining forms of intelligent life. The former responds regularly to the user's intentions, so it does not lead to catastrophic action unless the user wants it to. The latter carries the greatest risk of escaping control and producing the classic "rogue AI" scenario. Another benefit of placing liability as close to the end user as possible is that it minimizes the risk that liability pushes people toward actions that are harmful in other ways (e.g., closed source, know-your-customer (KYC) checks and surveillance, or state/corporate collusion to covertly restrict users, as when banks refuse service to certain customers, locking out large parts of the world).

There is a classic objection to putting liability solely on the user: users may be ordinary people without much money, or even anonymous, leaving no one to actually pay for a catastrophic harm. That objection can be overstated: even if some users are too small to be worth holding liable, an AI developer's regular customers are not, so developers are still incentivized to build products that reassure users they are not running a high liability risk. Still, the point remains valid and needs addressing: someone with resources somewhere in the incentive pipeline must be motivated to take appropriate precautions, and deployers and developers are easy to find and still have great influence over how safe a model is.

Deployer liability seems reasonable. A common concern is that it does not work for open-source models, but this appears manageable, especially since the most powerful models are likely to be closed source (and if they turn out to be open source, then deployer liability, while perhaps not very useful, would also do little harm). Developer liability raises the same concern (although with open-source models there is some barrier to fine-tuning a model to do things it was not originally allowed to do), and the same rebuttal applies. As a general principle, imposing what is effectively a "tax" on control, saying, in essence, "you can build things you don't control, or build things you do control, but if you build things you control, 20% of that control must be used for our purposes", seems like a reasonable stance for legal systems to take.

One idea that seems underexplored is assigning liability to other actors in the pipeline who are more likely to have ample resources. A very d/acc-aligned version: hold liable the owner or operator of any device that an AI takes over (e.g., by hacking) in the course of carrying out some catastrophically harmful action. This would create a very broad incentive to make the world's infrastructure, especially in computing and biology, as secure as possible.

Strategy 2: Global "Soft Pause" Button on Industrial-Scale Hardware

If I were convinced we need something stronger than liability rules, this is the strategy I would choose. The goal is the capability to cut globally available compute by roughly 90-99% during a critical period, for 1-2 years, to buy humanity more time to prepare. The value of 1-2 years should not be underestimated: one year of "wartime mode" can easily be worth a hundred years of work under complacency. Ways of implementing such a "pause" are already being explored, including concrete proposals like requiring hardware registration and verifying its location.

A more advanced approach is to use clever cryptography: industrial-scale (but not consumer) AI hardware could be fitted with a trusted chip that keeps operating only if, once a week, it receives 3-of-3 signatures from major international bodies (including at least one non-military-affiliated body). The signatures would be device-independent (if desired, we could even require a zero-knowledge proof that they were published on a blockchain), making the scheme all-or-nothing: there would be no practical way to authorize one device to keep running without authorizing every other device.

This seems to "check the boxes" in maximizing benefits and minimizing risks:

· It is a useful capability to have: if we get signs that near-superintelligent AI is starting to do things that could cause catastrophic damage, we will want the option to slow down.

· Until such a pivotal moment arrives, merely having the capability for a soft pause does developers little harm. Focusing on industrial-scale hardware, and targeting only 90-99%, avoids dystopian moves like spy chips or kill switches in consumer laptops, or forcing harsh measures on small countries against their will.

· Focusing on hardware seems robust to technological change. Across multiple generations of AI, we have seen that quality is largely a function of available compute, especially in the early versions of a new paradigm. So cutting available compute by 10-100x could easily tip a fast-moving battle between a runaway superintelligent AI and the humans trying to contain it in the humans' favor.

· The sheer hassle of having to go online every week for signatures would strongly discourage extending the scheme to consumer hardware.

· Verification by random sampling, happening at the hardware level, would make it hard to exempt specific users (approaches based on legal kill switches rather than technical ones lack this all-or-nothing property, which makes them more likely to slide toward exemptions for militaries and the like).
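For concreteness, here is a minimal sketch of the weekly 3-of-3 signature check described above. The institution names are placeholders, and the key handling is collapsed for brevity (in reality only the public keys would be baked into the trusted chip, with each institution holding its own private key); this illustrates the logic, not a hardware design.

```python
# Sketch of the "soft pause" check: industrial AI hardware keeps running
# only if all three (hypothetical) international bodies signed this week.
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

WEEK_SECONDS = 7 * 24 * 3600

# Placeholder institutions; at least one is meant to be non-military-affiliated.
institutions = {
    "institution_a": ed25519.Ed25519PrivateKey.generate(),
    "institution_b": ed25519.Ed25519PrivateKey.generate(),
    "institution_c": ed25519.Ed25519PrivateKey.generate(),
}

def current_week() -> bytes:
    # The signed message is just the week number, not a device ID, so one
    # set of signatures authorizes every device at once: all or nothing.
    return int(time.time() // WEEK_SECONDS).to_bytes(8, "big")

def issue_signatures() -> dict:
    # Each institution independently signs the current week number.
    return {name: key.sign(current_week()) for name, key in institutions.items()}

def hardware_may_run(signatures: dict) -> bool:
    # 3-of-3: any missing or invalid signature halts the chip.
    msg = current_week()
    for name, key in institutions.items():
        try:
            key.public_key().verify(signatures.get(name, b""), msg)
        except InvalidSignature:
            return False
    return True

assert hardware_may_run(issue_signatures())
```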

Hardware regulation is already being seriously considered, though usually within the framework of export controls, which fundamentally embody a "we trust ourselves but not the other side" mindset. Leopold Aschenbrenner famously argued that the US should race to a decisive advantage and then force China to sign an agreement limiting how many devices it may run. To me this seems risky, combining the pitfalls of bipolar competition with those of centralization. If we must constrain people, it seems better to constrain everyone equally and do the hard work of actually cooperating to organize it, rather than one side seeking to dominate everyone else.

The Role of d/acc Technology in AI Risk

Both strategies (liability and the hardware pause button) have holes, and it is clear they are only stopgaps: if something can be done on a supercomputer at time T, it can likely be done on a laptop at time T + 5 years. So we need more robust measures to buy time, and many d/acc technologies are relevant here. One way to see their role is to ask: if AI were to take over the world, how would it do it?

· It invades our computers → Network Defense

· It creates a super plague → Biological Defense

· It persuades us (either to trust it or not to trust each other) → Information Defense

As noted above, liability rules are a naturally d/acc-friendly form of regulation, because they can very effectively motivate the whole world to adopt these defenses and take them seriously. Taiwan has recently been experimenting with liability for false advertising, which can be seen as an example of using liability to encourage information defense. We shouldn't be too eager to impose liability everywhere, and we should remember the benefits plain freedom brings in letting small players innovate without fear of lawsuits; but where we do want to push harder on safety, liability can be quite nimble and effective.

The Role of Cryptocurrency in d/acc

Many aspects of d/acc go far beyond typical blockchain topics: biosafety, brain-machine interfaces, and collaborative discourse tools seem far removed from what cryptocurrency folks usually talk about. However, I see some important connections between cryptocurrency and d/acc, specifically:

· d/acc takes the foundational values of cryptocurrency (decentralization, censorship resistance, an open global economy and society) and extends them to other areas of technology.

· Because cryptocurrency users are natural early adopters and the values align, the cryptocurrency community is a natural early user base for d/acc technologies. Its strong emphasis on community (both online and offline, e.g., events and pop-up gatherings), and the fact that these communities actually do high-stakes things rather than just talk about them, make it an especially attractive incubator and testbed for d/acc technologies that fundamentally work at the group rather than the individual level (much of infosec and biodefense tech, for example). Crypto people just do things together.

· Many cryptocurrency technologies can be applied to d/acc topics: blockchains for building more robust and decentralized financial, governance, and social-media infrastructure; zero-knowledge proofs for privacy; and so on. Today, many of the largest prediction markets run on blockchains, and they are becoming progressively more sophisticated, decentralized, and democratic.

· There are win-win opportunities to collaborate on technologies adjacent to cryptocurrency that are immensely valuable to crypto projects and also key to d/acc goals: formal verification, computer software and hardware security, and adversarially robust governance. These make the Ethereum blockchain, wallets, and decentralized autonomous organizations (DAOs) more secure and resilient, and they also serve critical civilizational-defense goals, such as reducing our vulnerability to cyberattack (including, potentially, from superintelligent AI).

Cursive is an application that uses fully homomorphic encryption (FHE) to let users identify areas of shared interest with others while preserving privacy. Edge City in Chiang Mai, one of Zuzalu's many offshoots, used the application.

d/acc and Public Goods Funding

One problem I have long been interested in is finding better mechanisms to fund public goods: projects that are valuable to very large groups of people but lack a natural business model. My past work here includes contributions to quadratic funding and its use in Gitcoin Grants, retroactive public goods funding, and, most recently, deep funding.

Many people are skeptical of the concept of public goods. This skepticism usually stems from two main aspects:

· Public goods have historically been used as a reason for government to engage in heavy-handed central planning and intervention in society and the economy.

· A prevalent view holds that public goods funding lacks rigor, operates on social desirability bias (funding what sounds good rather than what is good), and favors insiders who can play the social game.

These are important and valid criticisms. Yet I believe robust decentralized public goods funding is essential to the d/acc vision, because a key d/acc goal (minimizing central points of control) inherently frustrates many traditional business models. Building successful businesses on open source is possible, and several Balvi grantees are doing so, but in some cases it is hard enough that critical projects need additional ongoing support. So we must do the hard work of figuring out how to fund public goods in a way that answers both criticisms above.

The answer to the first problem is, essentially, credible neutrality and decentralization. Central planning is problematic because it hands control to elites who can abuse it, and because it overfits to the present, growing ever less effective over time. Quadratic funding and similar mechanisms are precisely about funding public goods in a way that is as credibly neutral and as (architecturally and politically) decentralized as possible.
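As a concrete aside, the core quadratic funding formula fits in a few lines: a project's ideal funding is the square of the sum of the square roots of its individual contributions, which structurally favors broad support over concentrated money. The figures below are purely illustrative.

```python
# The standard quadratic funding formula: ideal matched total for one project.
# Broad bases of small donors beat a single large donor of the same size.
import math

def quadratic_match(contributions: list[float]) -> float:
    """Ideal QF total, before scaling to fit a finite matching pool."""
    return sum(math.sqrt(c) for c in contributions) ** 2

print(quadratic_match([1.0] * 100))  # 100 donors x $1  -> 10000.0
print(quadratic_match([100.0]))      # 1 donor x $100   ->   100.0
```

The gap between the matched total and the raw sum of contributions is what the matching pool subsidizes, which is part of why the mechanism needs a credibly neutral operator.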

The second problem is harder. The common criticism of quadratic funding is that it quickly becomes a popularity contest, forcing projects to spend heavily on self-promotion, while projects that are less "in your face" (the proverbial dependency maintained by one person in Nebraska) get no funding at all. Optimism's retroactive funding relies on a smaller number of expert badge holders; there, the popularity-contest effect is weakened, but the social effect of having a close personal relationship with badge holders is amplified.

Deep funding is my most recent effort to address this problem. Deep funding has two main innovations:

· Dependency graphs. Rather than asking each juror a global question ("How valuable is project A to humanity?"), we ask a local one ("Which is more valuable to outcome C: project A or project B? And by how much?"). Humans are notoriously bad at global questions: in one famous study of scope insensitivity, respondents were willing to pay roughly $80 to save N birds whether N was 2,000, 20,000, or 200,000. Local questions are more tractable. We then combine local answers into a global one by maintaining a dependency graph: for each project, which other projects contributed to its success, and by how much?

· AI as distilled human judgment. Jurors are each assigned only a small random sample of all the questions. An open competition lets anyone submit AI models that try to efficiently fill in every edge of the graph; the final answer is the weighted sum of the models most compatible with the jury's answers (for code examples, see here; a simplified sketch also follows below). This lets the mechanism scale to enormous size while asking the jury for only a small number of "bits of information", and it makes each bit high quality: jurors can think hard about each question instead of rapidly clicking through hundreds. Using an open competition of AIs limits the bias of any single model's training and operation. The open market of AIs is the engine; humans hold the steering wheel.
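The following is the deliberately simplified sketch of that engine-and-steering-wheel loop promised above, not the actual deep funding implementation (the linked code is authoritative): the random stand-in data and the choice to score models by inverse squared error against the jury's spot checks are assumptions made for brevity.

```python
# Simplified sketch of deep funding's "AI as distilled human judgment" step.
# Hypothetical data throughout; the real mechanism's scoring rule may differ.
import random

# Dependency-graph edges: "X->Y" means project X contributed to project Y.
EDGES = ["A->C", "B->C", "D->A", "E->A", "F->B"]

# Open competition: each submitted model proposes a credit share for every edge.
model_answers = {
    name: {e: random.random() for e in EDGES}
    for name in ("model_1", "model_2", "model_3")
}

# Jurors answer only a small random sample of the local questions.
sampled = random.sample(EDGES, k=2)
jury_answers = {e: random.random() for e in sampled}  # stand-in for human judgment

def model_error(answers: dict) -> float:
    """Mean squared disagreement with the jury on the sampled edges."""
    return sum((answers[e] - jury_answers[e]) ** 2 for e in sampled) / len(sampled)

# Weight models by agreement with the jury, then blend their full answers,
# so the jury's few "bits of information" steer the whole graph.
weights = {m: 1.0 / (model_error(a) + 1e-9) for m, a in model_answers.items()}
total = sum(weights.values())
final_edges = {
    e: sum(weights[m] * model_answers[m][e] for m in model_answers) / total
    for e in EDGES
}
print(final_edges)
```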

But deep funding is only the latest example; there have been other ideas for public goods funding mechanisms before, and there will be more. allo.expert does a good job of cataloguing them. The underlying goal is to build a social tool that can fund public goods with accuracy, fairness, and open access approaching how markets fund private goods. It doesn't have to be perfect; after all, markets themselves are far from perfect. But it should work well enough that developers of high-quality open-source projects that benefit everyone can keep doing that work without feeling forced into unacceptable compromises.

Today, most leading projects in d/acc domains (vaccines, brain-computer interfaces, "peripheral BCI" such as wrist-based EMG and eye tracking, anti-aging drugs, hardware, and so on) are proprietary. That has significant costs for public trust, as we have seen in several of the areas above, and it shifts attention to competitive dynamics ("our team must win this critical industry!") rather than to ensuring these technologies arrive fast enough to protect us in a world of superintelligent AI. For these reasons, robust public goods funding can be a powerful force for openness and freedom. This is another way the cryptocurrency community can help d/acc: by seriously exploring these funding mechanisms and making them work well in its own context, it readies them for broader use in open-source science and technology.

Future

The coming decades bring crucial challenges. Two have been on my mind recently:

· A powerful wave of new technology, above all strong AI, is arriving fast, and with it significant pitfalls to avoid. "Artificial superintelligence" may be five years away or fifty. Either way, it is not clear that the default outcome is automatically good, and as described in this article and last year's, there are several traps to avoid.

· The world is growing less cooperative. Many powerful actors who previously seemed to act, at least sometimes, on noble principles (cosmopolitanism, freedom, common humanity...) are now more openly and aggressively pursuing the self-interest of themselves or their tribe.

However, each of these challenges has a silver lining. First, we now have very powerful tools to expedite our remaining work:

· Current and near-future AI can be used to build other technologies and can serve as an ingredient of governance (as in deep funding or info finance). It is also highly relevant to brain-computer interfaces, which can themselves deliver further productivity gains.

· Large-scale coordination is more possible than ever before. The internet and social media expanded its reach; global finance (including cryptocurrency) amplified its power; information defense and collaboration tools can now raise its quality; and soon, perhaps, human-computer-human BCI can deepen it.

· Formal verification, sandbox technologies (web browsers, Docker, Qubes, GrapheneOS, etc.), secure hardware modules, and other technologies are improving to make better network security possible.

· Writing any kind of software is much easier than it was two years ago.

· Recent basic research into how viruses work, especially the simple recognition that the most important form of transmission is airborne, has shown a much clearer path for improving biodefense.

· Recent advances in biotechnology (e.g., CRISPR, progress in biological imaging) are making biotech of all kinds more accessible, whether for defense, longevity, super-happiness, exploring novel biological hypotheses, or just doing very cool things.

· Advances in computing and biotech together are enabling synthetic biology tools that you can use to adjust, monitor, and improve your own health. Cyberdefense technologies such as cryptography make this personalized dimension more viable.

Second, now that many of the principles we cherish are no longer monopolized by a narrow subset of the old establishment, they can be reclaimed by a broad coalition open to anyone in the world. That may be the biggest upside of the recent political "realignments" around the world, and it is an opportunity worth seizing. Cryptocurrency has brilliantly exploited this and found global appeal; d/acc can do the same.

Gaining access to tools means we can adapt and improve our biology and our environment, and the "defense" part of d/acc means we can do so without infringing on others' freedom to do the same. Liberal pluralist principles mean we can differ widely in how we do it, and our commitment to common human goals means it should get done.

We humans remain the brightest star. The task before us, building a brighter 21st century that protects human survival, freedom, and agency as we reach toward the stars, is a challenging one. But I am confident we are up to it.

Original Article Link
