    Crypto-Gram
    July 15, 2024

    by Bruce Schneier
    Fellow and Lecturer, Harvard Kennedy School
    schneier@schneier.com
    https://www.schneier.com

    A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

    For back issues, or to subscribe, visit Crypto-Gram's web page.

    Read this issue on the web

    These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.

    ** *** ***** ******* *********** *************
    In this issue:

    If these links don't work in your email client, try reading this issue of Crypto-Gram on the web.

    Using LLMs to Exploit Vulnerabilities
    Rethinking Democracy for the Age of AI
    The Hacking of Culture and the Creation of Socio-Technical Debt
    New Blog Moderation Policy
    Recovering Public Keys from Signatures
    Ross Anderson’s Memorial Service
    Paul Nakasone Joins OpenAI’s Board of Directors
    Breaking the M-209
    The US Is Banning Kaspersky
    Security Analysis of the EU’s Digital Wallet
    James Bamford on Section 702 Extension
    Model Extraction from Neural Networks
    Public Surveillance of Bars
    Upcoming Book on AI and Democracy
    New OpenSSH Vulnerability
    On the CSRB’s Non-Investigation of the SolarWinds Attack
    Reverse-Engineering Ticketmaster’s Barcode System
    RADIUS Vulnerability
    Apple Is Alerting iPhone Users of Spyware Attacks
    The NSA Has a Long-Lost Lecture by Adm. Grace Hopper
    Upcoming Speaking Engagements

    ** *** ***** ******* *********** *************
    Using LLMs to Exploit Vulnerabilities

    [2024.06.17] Interesting research: “Teams of LLM Agents can Exploit Zero-Day Vulnerabilities.”

    Abstract: LLM agents have become increasingly sophisticated, especially in the realm of cybersecurity. Researchers have shown that LLM agents can exploit real-world vulnerabilities when given a description of the vulnerability and toy capture-the-flag problems. However, these agents still perform poorly on real-world vulnerabilities that are unknown to the agent ahead of time (zero-day vulnerabilities).

    In this work, we show that teams of LLM agents can exploit real-world, zero-day vulnerabilities. Prior agents struggle with exploring many different vulnerabilities and long-range planning when used alone. To resolve this, we introduce HPTSA, a system of agents with a planning agent that can launch subagents. The planning agent explores the system and determines which subagents to call, resolving long-term planning issues when trying different vulnerabilities. We construct a benchmark of 15 real-world vulnerabilities and show that our team of agents improve over prior work by up to 4.5×.

    The LLMs aren’t finding new vulnerabilities. They’re exploiting zero-days -- which means they are not trained on them -- in new ways. So think about this sort of thing combined with another AI that finds new vulnerabilities in code.
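    To make the architecture concrete, here is a minimal sketch of the planner-plus-subagents pattern the paper describes. It is illustrative only: the call_llm() helper, the prompts, and the subagent names are placeholders I made up, not the authors’ actual code or prompt set.

        # Illustrative sketch of a hierarchical planning agent dispatching
        # task-specific subagents (in the spirit of HPTSA). call_llm() is a
        # hypothetical stand-in for whatever LLM API the system uses.

        SUBAGENTS = {
            "sqli": "You are an expert in SQL injection. Try to exploit the target.",
            "xss": "You are an expert in cross-site scripting. Try to exploit the target.",
            "csrf": "You are an expert in CSRF. Try to exploit the target.",
        }

        def call_llm(system_prompt, user_prompt):
            raise NotImplementedError("stand-in for a real LLM API call")

        def planning_agent(target_description, max_attempts=10):
            history = []
            for _ in range(max_attempts):
                # The planner reviews the target and past failures, then picks
                # which expert subagent to dispatch next.
                choice = call_llm(
                    "You are a planner that chooses which expert subagent to run next. "
                    f"Available subagents: {sorted(SUBAGENTS)}.",
                    f"Target: {target_description}\nPrevious attempts: {history}",
                ).strip()
                if choice not in SUBAGENTS:
                    continue
                # The subagent handles the long-horizon work for one vulnerability class.
                result = call_llm(SUBAGENTS[choice], f"Target: {target_description}")
                history.append((choice, result))
                if "EXPLOIT SUCCEEDED" in result:  # illustrative success signal
                    return result
            return None

    The point of the structure is the division of labor: the planner handles exploration and long-range planning, while each subagent only has to be good at one class of vulnerability.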

    These kinds of developments are important to follow, as they are part of the puzzle of a fully autonomous AI cyberattack agent. I talk about this sort of thing more here.

    ** *** ***** ******* *********** *************
    Rethinking Democracy for the Age of AI

    [2024.06.18] There is a lot written about technology’s threats to democracy. Polarization. Artificial intelligence. The concentration of wealth and power. I have a more general story: The political and economic systems of governance that were created in the mid-18th century are poorly suited for the 21st century. They don’t align incentives well. And they are being hacked too effectively.

    At the same time, the cost of these hacked systems has never been greater, across all human history. We have become too powerful as a species. And our systems cannot keep up with fast-changing disruptive technologies.

    We need to create new systems of governance that align incentives and are resilient against hacking -- at every scale. From the individual all the way up to the whole of society.

    For this, I need you to drop your 20th century either/or thinking. This is not about capitalism versus communism. It’s not about democracy versus autocracy. It’s not even about humans versus AI. It’s something new, something we don’t have a name for yet. And it’s “blue sky” thinking, not even remotely considering what’s feasible today.

    Throughout this talk, I want you to think of both democracy and capitalism as information systems. Socio-technical information systems. Protocols for making group decisions. Ones where different players have different incentives. These systems are vulnerable to hacking and need to be secured against those hacks.

    We security technologists have a lot of expertise in both secure system design and hacking. That’s why we have something to add to this discussion.

    And finally, this is a work in progress. I’m trying to create a framework for viewing governance. So think of this more as a foundation for discussion, rather than a road map to a solution. I think by writing, and what you’re going to hear is the current draft of my writing -- and my thinking. So everything is subject to change without notice.

    OK, so let’s go.

    We all know about misinformation and how it affects democracy. And how propagandists have used it to advance their agendas. This is an ancient problem, amplified by information technologies. Social media platforms that prioritize engagement. “Filter bubble” segmentation. And technologies for honing persuasive messages.

    The problem ultimately stems from the way democracies use information to make policy decisions. Democracy is an information system that leverages collective intelligence to solve political problems. And then to collect feedback as to how well those solutions are working. This is different from autocracies that don’t leverage collective intelligence for political decision making. Or have reliable mechanisms for collecting feedback from their populations.

    Those systems of democracy work well, but have no guardrails when fringe ideas become weaponized. That’s what misinformation targets. The historical solution for this was supposed to be representation. This is currently failing in the US, partly because of gerrymandering, safe seats, only two parties, money in politics and our primary system. But the problem is more general.

    James Madison wrote about this in 1787, where he made two points. One, that representatives serve to filter popular opinions, limiting extremism. And two, that geographical dispersal makes it hard for those with extreme views to participate. It’s hard to organize. To be fair, these limitations are both good and bad. In any case, current technology -- social media -- breaks them both.

    So this is a question: What does representation look like in a world without either filtering or geographical dispersal? Or, how do we avoid polluting 21st century democracy with prejudice, misinformation and bias -- things that impair both the problem-solving and feedback mechanisms?

    That’s the real issue. It’s not about misinformation, it’s about the incentive structure that makes misinformation a viable strategy.

    This is problem No. 1: Our systems have misaligned incentives. What’s best for the small group often doesn’t match what’s best for the whole. And this is true across all sorts of individuals and group sizes.

    Now, historically, we have used misalignment to our advantage. Our current systems of governance leverage conflict to make decisions. The basic idea is that coordination is inefficient and expensive. Individual self-interest leads to local optimizations, which results in optimal group decisions.

    But this is also inefficient and expensive. The U.S. spent $14.5 billion on the 2020 presidential, Senate and congressional elections. I don’t even know how to calculate the cost in attention. That sounds like a lot of money, but step back and think about how the system works. The economic value of winning those elections is so great because that’s how you impose your own incentive structure on the whole.

    More generally, the cost of our market economy is enormous. For example, $780 billion is spent world-wide annually on advertising. Many more billions are wasted on ventures that fail. And that’s just a fraction of the total resources lost in a competitive market environment. And there are other collateral damages, which are spread non-uniformly across people.

    We have accepted these costs of capitalism -- and democracy -- because the inefficiency of central planning was considered to be worse. That might not be true anymore. The costs of conflict have increased. And the costs of coordination have decreased. Corporations demonstrate that large centrally planned economic units can compete in today’s society. Think of Walmart or Amazon. If you compare GDP to market cap, Apple would be the eighth largest country on the planet. Microsoft would be the tenth.

    Another effect of these conflict-based systems is that they foster a scarcity mindset. And we have taken this to an extreme. We now think in terms of zero-sum politics. My party wins, your party loses. And winning next time can be more important than governing this time. We think in terms of zero-sum economics. My product’s success depends on my competitors’ failures. We think zero-sum internationally. Arms races and trade wars.

    Finally, conflict as a problem-solving tool might not give us good enough answers anymore. The underlying assumption is that if everyone pursues their own self interest, the result will approach everyone’s best interest. That only works for simple problems and requires systemic oppression. We have lots of problems -- complex, wicked, global problems -- that don’t work that way. We have interacting groups of problems that don’t work that way. We have problems that require more efficient ways of finding optimal solutions.

    Note that there are multiple effects of these conflict-based systems. We have bad actors deliberately breaking the rules. And we have selfish actors taking advantage of insufficient rules.

    The latter is problem No. 2: What I refer to as “hacking” in my latest book: “A Hacker’s Mind.” Democracy is a socio-technical system. And all socio-technical systems can be hacked. By this I mean that the rules are either incomplete or inconsistent or outdated -- they have loopholes. And these can be used to subvert the rules. This is Peter Thiel subverting the Roth IRA to avoid paying taxes on $5 billion in income. This is gerrymandering, the filibuster, and must-pass legislation. Or tax loopholes, financial loopholes, regulatory loopholes.

    In today’s society, the rich and powerful are just too good at hacking. And it is becoming increasingly impossible to patch our hacked systems. Because the rich use their power to ensure that the vulnerabilities don’t get patched.

    This is bad for society, but it’s basically the optimal strategy in our competitive governance systems. Their zero-sum nature makes hacking an effective, if parasitic, strategy. Hacking isn’t a new problem, but today hacking scales better -- and is overwhelming the security systems in place to keep hacking in check. Think about gun regulations, climate change, opioids. And complex systems make this worse. These are all non-linear, tightly coupled, unrepeatable, path-dependent, adaptive, co-evolving systems.

    Now, add into this mix the risks that arise from new and dangerous technologies such as the internet or AI or synthetic biology. Or molecular nanotechnology, or nuclear weapons. Here, misaligned incentives and hacking can have catastrophic consequences for society.

    This is problem No. 3: Our systems of governance are not suited to our power level. They tend to be rights based, not permissions based. They’re designed to be reactive, because traditionally there was only so much damage a single person could do.

    We do have systems for regulating dangerous technologies. Consider automobiles. They are regulated in many ways: driver’s licenses + traffic laws + automobile regulations + road design. Compare this to aircraft. Much more onerous licensing requirements, rules about flights, regulations on aircraft design and testing, and a government agency overseeing it all day-to-day. Or pharmaceuticals, which have very complex rules surrounding researching, developing, producing and dispensing them. We have all these regulations because this stuff can kill you.

    The general term for this kind of thing is the “precautionary principle.” When random new things can be deadly, we prohibit them unless they are specifically allowed.

    So what happens when a significant percentage of our jobs are as potentially damaging as a pilot’s? Or even more damaging? When one person can affect everyone through synthetic biology. Or where a corporate decision can directly affect climate. Or something in AI or robotics. Things like the precautionary principle are no longer sufficient. Because breaking the rules can have global effects.

    And AI will supercharge hacking. We have created a series of non-interoperable systems that actually interact and AI will be able to figure out how to take advantage of more of those interactions: finding new tax loopholes or finding new ways to evade financial regulations. Creating “micro-legislation” that surreptitiously benefits a particular person or group. And catastrophic risk means this is no longer tenable.

    So these are our core problems: misaligned incentives leading to too effective hacking of systems where the costs of getting it wrong can be catastrophic.

    Or, to put more words on it: Misaligned incentives encourage local optimization, and that’s not a good proxy for societal optimization. This encourages hacking, which now generates greater harm than at any point in the past because the amount of damage that can result from local optimization is greater than at any point in the past.

    OK, let’s get back to the notion of democracy as an information system. It’s not just democracy: Any form of governance is an information system. It’s a process that turns individual beliefs and preferences into group policy decisions. And, it uses feedback mechanisms to determine how well those decisions are working and then makes corrections accordingly.

    Historically, there are many ways to do this. We can have a system where no one’s preference matters except the monarch’s or the nobles’ or the landowners’. Sometimes the stronger army gets to decide -- or the people with the money.

    Or we could tally up everyone’s preferences and do the thing that at least half of the people want. That’s basically the promise of democracy today, at its ideal. Parliamentary systems are better, but only in the margins -- and it all feels kind of primitive. Lots of people write about how informationally poor elections are at aggregating individual preferences. It also results in all these misaligned incentives.

    I realize that democracy serves different functions. Peaceful transition of power, minimizing harm, equality, fair decision making, better outcomes. I am taking for granted that democracy is good for all those things. I’m focusing on how we implement it.

    Modern democracy uses elections to determine who represents citizens in the decision-making process. And all sorts of other ways to collect information about what people think and want, and how well policies are working. These are opinion polls, public comments to rule-making, advocating, lobbying, protesting and so on. And, in reality, it’s been hacked so badly that it does a terrible job of executing on the will of the people, creating further incentives to hack these systems.

    To be fair, the democratic republic was the best form of government that mid 18th century technology could invent. Because communications and travel were hard, we needed to choose one of us to go all the way over there and pass laws in our name. It was always a coarse approximation of what we wanted. And our principles, values, conceptions of fairness; our ideas about legitimacy and authority have evolved a lot since the mid 18th century. Even the notion of optimal group outcomes depended on who was considered in the group and who was out.

    But democracy is not a static system, it’s an aspirational direction. One that really requires constant improvement. And our democratic systems have not evolved at the same pace that our technologies have. Blocking progress in democracy is itself a hack of democracy.

    Today we have much better technology that we can use in the service of democracy. Surely there are better ways to turn individual preferences into group policies. Now that communications and travel are easy. Maybe we should assign representation by age, or profession or randomly by birthday. Maybe we can invent an AI that calculates optimal policy outcomes based on everyone’s preferences.

    Whatever we do, we need systems that better align individual and group incentives, at all scales. Systems designed to be resistant to hacking. And resilient to catastrophic risks. Systems that leverage cooperation more and conflict less. And are not zero-sum.

    Why can’t we have a game where everybody wins?

    This has never been done before. It’s not capitalism, it’s not communism, it’s not socialism. It’s not current democracies or autocracies. It would be unlike anything we’ve ever seen.

    Some of this comes down to how trust and cooperation work. When I wrote “Liars and Outliers” in 2012, I wrote about four systems for enabling trust: our innate morals, concern about our reputations, the laws we live under and security technologies that constrain our behavior. I wrote about how the first two are more informal than the last two. And how the last two scale better, and allow for larger and more complex societies. They enable cooperation amongst strangers.

    What I didn’t appreciate is how different the first and last two are. Morals and reputation are both old biological systems of trust. They’re person to person, based on human connection and cooperation. Laws -- and especially security technologies -- are newer systems of trust that force us to cooperate. They’re socio-technical systems. They’re more about confidence and control than they are about trust. And that allows them to scale better. Taxi driver used to be one of the country’s most dangerous professions. Uber changed that through pervasive surveillance. My Uber driver and I don’t know or trust each other, but the technology lets us both be confident that neither of us will cheat or attack each other. Both drivers and passengers compete for star rankings, which align local and global incentives.

    In today’s tech-mediated world, we are replacing the rituals and behaviors of cooperation with security mechanisms that enforce compliance. And innate trust in people with compelled trust in processes and institutions. That scales better, but we lose the human connection. It’s also expensive, and becoming even more so as our power grows. We need more security for these systems. And the results are much easier to hack.

    But here’s the thing: Our informal human systems of trust are inherently unscalable. So maybe we have to rethink scale.

    Our 18th century systems of democracy were the only things that scaled with the technology of the time. Imagine a group of friends deciding where to have dinner. One is kosher, one is a vegetarian. They would never use a winner-take-all ballot to decide where to eat. But that’s a system that scales to large groups of strangers.

    Scale matters more broadly in governance as well. We have global systems of political and economic competition. On the other end of the scale, the most common form of governance on the planet is socialism. It’s how families function: people work according to their abilities, and resources are distributed according to their needs.

    I think we need governance that is both very large and very small. Our catastrophic technological risks are planetary-scale: climate change, AI, internet, bio-tech. And we have all the local problems inherent in human societies. We have very few problems anymore that are the size of France or Virginia. Some systems of governance work well on a local level but don’t scale to larger groups. But now that we have more technology, we can make other systems of democracy scale.

    This runs headlong into historical norms about sovereignty. But that’s already becoming increasingly irrelevant. The modern concept of a nation arose around the same time as the modern concept of democracy. But constituent boundaries are now larger and more fluid, and depend a lot on context. It makes no sense that the decisions about the “drug war” -- or climate migration -- are delineated by nation. The issues are much larger than that. Right now there is no governance body with the right footprint to regulate Internet platforms like Facebook. Which has more users world-wide than Christianity.

    We also need to rethink growth. Growth only equates to progress when the resources necessary to grow are cheap and abundant. Growth is often extractive. And at the expense of something else. Growth is how we fuel our zero-sum systems. If the pie gets bigger, it’s OK that we waste some of the pie in order for it to grow. That doesn’t make sense when resources are scarce and expensive. Growing the pie can end up costing more than the increase in pie size. Sustainability makes more sense, and it’s a metric better suited to the environment we’re in right now.

    Finally, agility is also important. Back to systems theory, governance is an attempt to control complex systems with complicated systems. This gets harder as the systems get larger and more complex. And as catastrophic risk raises the costs of getting it wrong.

    In recent decades, we have replaced the richness of human interaction with economic models. Models that turn everything into markets. Market fundamentalism scaled better, but the social cost was enormous. A lot of how we think and act isn’t captured by those models. And those complex models turn out to be very hackable. Increasingly so at larger scales.

    Lots of people have written about the speed of technology versus the speed of policy. To relate it to this talk: Our human systems of governance need to be compatible with the technologies they’re supposed to govern. If they’re not, eventually the technological systems will replace the governance systems. Think of Twitter as the de facto arbiter of free speech.

    This means that governance needs to be agile. And able to quickly react to changing circumstances. Imagine a court saying to Peter Thiel: “Sorry. That’s not how Roth IRAs are supposed to work. Now give us our tax on that $5B.” This is also essential in a technological world: one that is moving at unprecedented speeds, where getting it wrong can be catastrophic and one that is resource constrained. Agile patching is how we maintain security in the face of constant hacking -- and also red teaming. In this context, both journalism and civil society are important checks on government.

    I want to quickly mention two ideas for democracy, one old and one new. I’m not advocating for either. I’m just trying to open you up to new possibilities. The first is sortition. These are citizen assemblies brought together to study an issue and reach a policy decision. They were popular in ancient Greece and Renaissance Italy, and are increasingly being used today in Europe. The only vestige of this in the U.S. is the jury. But you can also think of trustees of an organization. The second idea is liquid democracy. This is a system where everybody has a proxy that they can transfer to someone else to vote on their behalf. Representatives hold those proxies, and their vote strength is proportional to the number of proxies they have. We have something like this in corporate proxy governance.
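    As a toy illustration of the second idea, here is a minimal sketch of liquid-democracy tallying: every voter either votes directly or delegates to someone else, delegation chains are followed to their end, and whoever ends up holding the proxies casts a vote weighted by how many they hold. The data structures and names are invented for illustration; a real system would also need revocation, auditing, and more careful handling of delegation cycles.

        # Minimal, illustrative liquid-democracy tally. Not a real voting system:
        # it ignores ballot secrecy, revocation, and per-issue delegation.

        def resolve_delegate(voter, delegations):
            """Follow the delegation chain from voter until it ends (or loops)."""
            seen = set()
            while voter in delegations and voter not in seen:
                seen.add(voter)
                voter = delegations[voter]
            return voter

        def tally(direct_votes, delegations):
            """direct_votes: {voter: choice}; delegations: {voter: chosen proxy}."""
            weights = {}
            for voter in set(direct_votes) | set(delegations):
                holder = resolve_delegate(voter, delegations)
                weights[holder] = weights.get(holder, 0) + 1
            totals = {}
            for holder, weight in weights.items():
                choice = direct_votes.get(holder)  # a holder who didn't vote abstains
                if choice is not None:
                    totals[choice] = totals.get(choice, 0) + weight
            return totals

        # Alice and Bob delegate to Carol, so Carol votes "yes" with weight 3.
        print(tally({"carol": "yes", "dave": "no"},
                    {"alice": "carol", "bob": "carol"}))
        # -> {'yes': 3, 'no': 1}

    Note that a voter who delegates still counts exactly once; the proxy holder simply carries that weight.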

    Both of these are algorithms for converting individual beliefs and preferences into policy decisions. Both of these are made easier through 21st century technologies. They are both democracies, but in new and different ways. And while they’re not immune to hacking, we can design them from the beginning with security in mind.

    This points to technology as a key component of any solution. We know how to use technology to build systems of trust. Both the informal biological kind and the formal compliance kind. We know how to use technology to help align incentives, and to defend against hacking.

    We talked about AI hacking; AI can also be used to defend against hacking, finding vulnerabilities in computer code, finding tax loopholes before they become law and uncovering attempts at surreptitious micro-legislation.

    Think back to democracy as an information system. Can AI techniques be used to uncover our political preferences and turn them into policy outcomes, get feedback and then iterate? This would be more accurate than polling. And maybe even elections. Can an AI act as our representative? Could it do a better job than a human at voting the preferences of its constituents?

    Can we have an AI in our pocket that votes on our behalf, thousands of times a day, based on the preferences it infers we have? Or maybe based on the preferences it infers we would have if we read up on the issues and weren’t swayed by misinformation? It’s just another algorithm for converting individual preferences into policy decisions. And it certainly solves the problem of people not paying attention to politics.

    But slow down: This is rapidly devolving into technological solutionism. And we know that doesn’t work.

    A general question to ask here is when do we allow algorithms to make decisions for us? Sometimes it’s easy. I’m happy to let my thermostat automatically turn my heat on and off or to let an AI drive a car or optimize the traffic lights in a city. I’m less sure about an AI that sets tax rates, or corporate regulations or foreign policy. Or an AI that tells us that it can’t explain why, but strongly urges us to declare war -- right now. Each of these is harder because they are more complex systems: non-local, multi-agent, long-duration and so on. I also want any AI that works on my behalf to be under my control. And not controlled by a large corporate monopoly that allows me to use it.

    And learned helplessness is an important consideration. We’re probably OK with no longer needing to know how to drive a car. But we don’t want a system that results in us forgetting how to run a democracy. Outcomes matter here, but so do mechanisms. Any AI system should engage individuals in the process of democracy, not replace them.

    So while an AI that does all the hard work of governance might generate better policy outcomes, there is social value in a human-centric political system, even if it is less efficient. And more technologically efficient preference collection might not be better, even if it is more accurate.

    Procedure and substance need to work together. There is a role for AI in decision making: moderating discussions, highlighting agreements and disagreements, helping people reach consensus. But it is an independent good that we humans remain engaged in -- and in charge of -- the process of governance.

    And that value is critical to making democracy function. Democratic knowledge isn’t something that’s out there to be gathered: It’s dynamic; it gets produced through the social processes of democracy. The term of art is “preference formation.” We’re not just passively aggregating preferences, we create them through learning, deliberation, negotiation and adaptation. Some of these processes are cooperative and some of these are competitive. Both are important. And both are needed to fuel the information system that is democracy.

    We’re never going to remove conflict and competition from our political and economic systems. Human disagreement isn’t just a surface feature; it goes all the way down. We have fundamentally different aspirations. We want different ways of life. I talked about optimal policies. Even that notion is contested: optimal for whom, with respect to what, over what time frame? Disagreement is fundamental to democracy. We reach different policy conclusions based on the same information. And it’s the process of making all of this work that makes democracy possible.

    So we actually can’t have a game where everybody wins. Our goal has to be to accommodate plurality, to harness conflict and disagreement, and not to eliminate it. While, at the same time, moving from a player-versus-player game to a player-versus-environment game.

    There’s a lot missing from this talk. Like what these new political and economic governance systems should look like. Democracy and capitalism are intertwined in complex ways, and I don’t think we can recreate one without also recreating the other. My comments about agility lead to questions about authority and how that interplays with everything else. And how agility can be hacked as well. We haven’t even talked about tribalism in its many forms. In order for democracy to function, people need to care about the welfare of strangers who are not like them. We haven’t talked about rights or responsibilities. What is off limits to democracy is a huge discussion. And Buterin’s trilemma also matters here: that you can’t simultaneously build systems that are secure, distributed, and scalable.

    I also haven’t given a moment’s thought to how to get from here to there. Everything I’ve talked about -- incentives, hacking, power, complexity -- also applies to any transition systems. But I think we need to have unconstrained discussions about what we’re aiming for. If for no other reason than to question our assumptions. And to imagine the possibilities. And while a lot of the AI parts are still science fiction, they’re not far-off science fiction.

    I know we can’t clear the board and build a new governance structure from scratch. But maybe we can come up with ideas that we can bring back to reality.

    To summarize, the systems of governance we designed at the start of the Industrial Age are ill-suited to the Information Age. Their incentive structures are all wrong. They’re insecure and they’re wasteful. They don’t generate optimal outcomes. At the same time we’re facing catastrophic risks to society due to powerful technologies. And a vastly constrained resource environment. We need to rethink our systems of governance; more cooperation and less competition and at scales that are suited to today’s problems and today’s technologies. With security and precautions built in. What comes after democracy might very well be more democracy, but it will look very different.

    This feels like a challenge worthy of our security expertise.

    This text is the transcript from a keynote speech delivered during the RSA Conference in San Francisco on April 25, 2023. It was previously published in Cyberscoop. I thought I posted it to my blog and Crypto-Gram last year, but it seems that I didn’t.

    ** *** ***** ******* *********** *************
    The Hacking of Culture and the Creation of Socio-Technical Debt

    [2024.06.19] Culture is increasingly mediated through algorithms. These algorithms have splintered the organization of culture, a result of states and tech companies vying for influence over mass audiences. One byproduct of this splintering is a shift from imperfect but broad cultural narratives to a proliferation of niche groups, who are defined by ideology or aesthetics instead of nationality or geography. This change reflects a material shift in the relationship between collective identity and power, and illustrates how states no longer have exclusive domain over either. Today, both power and culture are increasingly corporate.

    Blending Stewart Brand and Jean-Jacques Rousseau, McKenzie Wark writes in A Hacker Manifesto that “information wants to be free but is everywhere in chains.”1 Sounding simultaneously harmless and revolutionary, Wark’s assertion, part of her analysis of the role of what she terms “the hacker class” in creating new world orders, points to one of the main ideas that became foundational to the reorganization of power in the era of the internet: that “information wants to be free.” This credo, itself a co-option of Brand’s influential original assertion in a conversation with Apple cofounder Steve Wozniak at the 1984 Hackers Conference and later in his 1987 book The Media Lab: Inventing the Future at MIT, became a central ethos for early internet inventors, activists,2 and entrepreneurs. Ultimately, this notion was foundational in the construction of the era we find ourselves in today: an era in which internet companies dominate public and private life. These companies used the supposed desire of information to be free as a pretext for building platforms that allowed people to connect and share content. Over time, this development helped facilitate the definitive power transfer of our time, from states to corporations.

    This power transfer was enabled in part by personal data and its potential power to influence people’s behavior -- a critical goal in both politics and business. The pioneers of the digital advertising industry claimed that the more data they had about people, the more they could influence their behavior. In this way, they used data as a proxy for influence, and built the business case for mass digital surveillance. The big idea was that data can accurately model, predict, and influence the behavior of everyone -- from consumers to voters to criminals. In reality, the relationship between data and influence is fuzzier, since influence is hard to measure or quantify. But the idea of data as a proxy for influence is appealing precisely because data is quantifiable, whereas influence is vague. The business model of Google Ads, Facebook, Experian, and similar companies works because data is cheap to gather, and the effectiveness of the resulting influence is difficult to measure. The credo was “Build the platform, harvest the data...then profit.” By 2006, a major policy paper could ask, “Is Data the New Oil?”3

    The digital platforms that have succeeded most in attracting and sustaining mass attention -- Facebook, TikTok, Instagram -- have become cultural. The design of these platforms dictates the circulation of customs, symbols, stories, values, and norms that bind people together in protocols of shared identity. Culture, as articulated through human systems such as art and media, is a kind of social infrastructure. Put differently, culture is the operating system of society.

    Like any well-designed operating system, culture is invisible to most people most of the time. Hidden in plain sight, we make use of it constantly without realizing it. As an operating system, culture forms the base infrastructure layer of societal interaction, facilitating communication, cooperation, and interrelations. Always evolving, culture is elastic: we build on it, remix it, and even break it.

    Culture can also be hacked -- subverted for specific advantage.4 If culture is like an operating system, then to hack it is to exploit the design of that system to gain unauthorized control and manipulate it towards a specific end. This can be for good or for bad. The morality of the hack depends on the intent and actions of the hacker.

    When businesses hack culture to gather data, they are not necessarily destroying or burning down social fabrics and cultural infrastructure. Rather, they reroute the way information and value circulate, for the benefit of their shareholders. This isn’t new. There have been culture hacks before. For example, by lending it covert support, the CIA hacked the abstract expressionism movement to promote the idea that capitalism was friendly to high culture.5 Advertising appropriated the folk-cultural images of Santa Claus and the American cowboy to sell Coca-Cola and Marlboro cigarettes, respectively. In Mexico, after the revolution of 1910, the ruling party hacked muralist works, aiming to construct a unifying national narrative.

    Culture hacks under digital capitalism are different. Whereas traditional propaganda goes in one direction -- from government to population, or from corporation to customers -- the internet-surveillance business works in two directions: extracting data while pushing engaging content. The extracted data is used to determine what content a user would find most engaging, and that engagement is used to extract more data, and so on. The goal is to keep as many users as possible on platforms for as long as possible, in order to sell access to those users to advertisers. Another difference between traditional propaganda and digital platforms is that the former aims to craft messages with broad appeal, while the latter hyper-personalizes content for individual users.

    The rise of Chinese-owned TikTok has triggered heated debate in the US about the potential for a foreign-owned platform to influence users by manipulating what they see. Never mind that US corporations have used similar tactics for years. While the political commitments of platform owners are indeed consequential -- Chinese-owned companies are in service to the Chinese Communist Party, while US-owned companies are in service to business goals -- the far more pressing issue is that both have virtually unchecked surveillance power. They are both reshaping societies by hacking culture to extract data and serve content. Funny memes, shocking news, and aspirational images all function similarly: they provide companies with unprecedented access to societies’ collective dreams and fears.6 By determining who sees what when and where, platform owners influence how societies articulate their understanding of themselves.

    Tech companies want us to believe that algorithmically determined content is effectively neutral: that it merely reflects the user’s behavior and tastes back at them. In 2021, Instagram head Adam Mosseri wrote a post on the company’s blog entitled “Shedding More Light on How Instagram Works.” A similar window into TikTok’s functioning was provided by journalist Ben Smith in his article “How TikTok Reads Your Mind.”7 Both pieces boil down to roughly the same idea: “We use complicated math to give you more of what your behavior shows us you really like.”

    This has two consequences. First, companies that control what users see in a nontransparent way influence how we perceive the world. They can even shape our personal relationships. Second, by optimizing algorithms for individual attention, a sense of culture as common ground is lost. Rather than binding people through shared narratives, digital platforms fracture common cultural norms into self-reinforcing filter bubbles.8

    This fragmentation of shared cultural identity reflects how the data surveillance business is rewriting both the established order of global power, and social contracts between national governments and their citizens. Before the internet, in the era of the modern state, imperfect but broad narratives shaped distinct cultural identities; “Mexican culture” was different from “French culture,” and so on. These narratives were designed to carve away an “us” from “them,” in a way that served government aims. Culture has long been understood to operate within the envelope of nationality, as exemplified by the organization of museum collections according to the nationality of artists, or by the Venice Biennale -- the Olympics of the art world, with its national pavilions format.

    National culture, however, is about more than museum collections or promoting tourism. It broadly legitimizes state power by emotionally binding citizens to a self-understood identity. This identity helps ensure a continuing supply of military recruits to fight for the preservation of the state. Sociologist James Davison Hunter, who popularized the phrase “culture war,” stresses that culture is used to justify violence to defend these identities.9 We saw an example of this on January 6, 2021, with the storming of the US Capitol. Many of those involved were motivated by a desire to defend a certain idea of cultural identity they believed was under threat.

    Military priorities were also entangled with the origins of the tech industry. The US Department of Defense funded ARPANET, the first version of the internet. But the internet wouldn’t have become what it is today without the influence of both West Coast counterculture and small-l libertarianism, which saw the early internet as primarily a space to connect and play. One of the first digital game designers was Bernie De Koven, founder of the Games Preserve Foundation. A noted game theorist, he was inspired by Stewart Brand’s interest in “play-ins” to start a center dedicated to play. Brand had envisioned play-ins as an alternative form of protest against the Vietnam War; they would be their own “soft war” of subversion against the military.10 But the rise of digital surveillance as the business model of nascent tech corporations would hack this anti-establishment spirit, turning instruments of social cohesion and connection into instruments of control.

    It’s this counterculture side of tech’s lineage, which advocated for the social value of play, that attuned the tech industry to the utility of culture. We see the commingling of play and military control in Brand’s Whole Earth Catalog, which was a huge influence on early tech culture. Described as “a kind of Bible for counterculture technology,” the Whole Earth Catalog was popular with the first generation of internet engineers, and established crucial “assumptions about the ideal relationships between information, technology, and community.”11 Brand’s 1972 Rolling Stone article “Spacewar: Fanatic Life and Symbolic Death Among the Computer Bums” further emphasized how rudimentary video games were central to the engineering community. These games were wildly popular at leading engineering research centers: Stanford, MIT, ARPA, Xerox, and others. This passion for gaming as an expression of technical skills and a way for hacker communities to bond led to the development of MUD (Multi-User Dungeon) programs, which enabled multiple people to communicate and collaborate online simultaneously.

    The first MUD was developed in 1978 by engineers who wanted to play fantasy games online. It applied the early-internet ethos of decentralism and personalization to video games, making it a precursor to massive multiplayer online role-playing games and modern chat rooms and Facebook groups. Today, these video games and game-like simulations -- now a commercial industry worth around $200 billion12 -- serve as important recruitment and training tools for the military.13 The history of the tech industry and culture is full of this tension between the internet as an engineering plaything and as a surveillance commodity.

    Historically, infrastructure businesses -- like railroad companies in the nineteenth-century US -- have always wielded considerable power. Internet companies that are also infrastructure businesses combine commercial interests with influence over national and individual security. As we transitioned from railroad tycoons connecting physical space to cloud computing companies connecting digital space, the pace of technological development put governments at a disadvantage. The result is that corporations now lead the development of new tech (a reversal from the ARPANET days), and governments follow, struggling to modernize public services in line with the new tech. Companies like Microsoft are functionally providing national cybersecurity. Starlink, Elon Musk’s satellite internet service, is a consumer product that facilitates military communications for the war in Ukraine. Traditionally, this kind of service had been restricted to selected users and was the purview of states.14 Increasingly, it is clear that a handful of transnational companies are using their technological advantages to consolidate economic and political power to a degree previously afforded to only great-power nations.

    Worse, since these companies operate across multiple countries and regions, there is no regulatory body with the jurisdiction to effectively constrain them. This transition of authority from states to corporations and the nature of surveillance as the business model of the internet rewrites social contracts between national governments and their citizens. But it also blurs the lines among citizen, consumer, and worker. An example of this is Google’s Recaptchas, visual image puzzles used in cybersecurity to “prove” that the user is a human and not a bot. While these puzzles are used by companies and governments to add a layer of security to their sites, their value is in how they record a user’s input in solving the puzzles to train Google’s computer vision AI systems. Similarly, Microsoft provides significant cybersecurity services to governments while it also trains its AI models on citizens’ conversations with Bing.15 Under this dynamic, when citizens use digital tools and services provided by tech companies, often to access government webpages and resources, they become de facto free labor for the tech companies providing them. The value generated by this citizen-user-laborer stays with the company, as it is used to develop and refine their products. In this new blurred reality, the relationships among corporations, governments, power, and identity are shifting. Our social and cultural infrastructure suffers as a result, and we accrue a new kind of socio-technical debt.

    In the field of software development, technical debt refers to the future cost of ignoring a near-term engineering problem.16 Technical debt grows as engineers implement short-term patches or workarounds, choosing to push the more expensive and involved re-engineering fixes for later. This debt accrues over time, to be paid back in the long term. The result of a decision to solve an immediate problem at the expense of the long-term one effectively mortgages the future in favor of an easier present. In terms of cultural and social infrastructure, we use the same phrase to refer to the long-term costs that result from avoiding or not fully addressing social needs in the present. More than a mere mistake, socio-technical debt stems from willfully not addressing a social problem today and leaving a much larger problem to be addressed in the future.
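    In software terms, the pattern looks something like the following toy example (the function and its comments are invented for illustration): the quick patch ships today, and the cost of the deferred fix compounds as more code comes to depend on it.

        # Toy illustration of technical debt: a quick workaround shipped now,
        # with the real fix deferred. Every caller written against this behavior
        # raises the cost of eventually doing it right.

        def parse_price(raw):
            # Quick patch: strip the one currency symbol we have seen so far.
            # TODO: replace with a locale-aware money parser; until then, every
            # new currency, separator, or negative amount adds to the debt.
            return float(raw.replace("$", "").replace(",", ""))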

    For example, this kind of technical debt was created by the cratering of the news industry, which relied on social media to drive traffic -- and revenue -- to news websites. When social media companies adjusted their algorithms to deprioritize news, traffic to news sites plummeted, causing an existential crisis for many publications.17 Now, traditional news stories make up only 3 percent of social media content. At the same time, 66 percent of people ages eighteen to twenty-four say they get their “news” from TikTok, Facebook, and Twitter.18 To be clear, Facebook did not accrue technical debt when it swallowed the news industry. We as a society are dealing with technical debt in the sense that we are being forced to pay the social cost of allowing them to do that.

    One result of this shift in information consumption, driven by changes to the cultural infrastructure of social media, is a rise in polarization and radicalism. By neglecting to adequately regulate tech companies and support news outlets in the near term, our governments have paved the way for social instability in the long term. We as a society also have to find and fund new systems to act as a watchdog over both corporate and governmental power.

    Another example of socio-technical debt is the slow erosion of main streets and malls by e-commerce.19 These places used to be important sites for physical gathering, which helped the shops and restaurants concentrated there stay in business. But e-commerce and direct-to-consumer trends have undermined the economic viability of main streets and malls, and have made it much harder for small businesses to survive. The long-term consequence of this to society is the hollowing out of town centers and the loss of spaces for physical gathering -- which we will all have to pay for eventually.

    The faltering finances of museums will also create long-term consequences for society as a whole, especially in the US, where museums mostly depend on private donors to cover operational costs. But a younger generation of philanthropists is shifting its giving priorities away from the arts, leading to a funding crisis at some institutions.20

    One final example: libraries. NYU sociologist Eric Klinenberg called libraries “the textbook example of social infrastructure in action.”21 But today they are stretched to the breaking point, like museums, main streets, and news media. In New York City, Mayor Eric Adams has proposed a series of severe budget cuts to the city’s library system over the past year, despite a recent spike in usage. The steepest cuts were eventually retracted, but most libraries in the city have still had to cancel social programs and cut the number of days they’re open.22 As more and more spaces for meeting in real life close, we increasingly turn to digital platforms for connection to replace them. But these virtual spaces are optimized for shareholder returns, not public good.

    Just seven companies -- Alphabet (the parent company of Google), Amazon, Apple, Meta, Microsoft, Nvidia and Tesla -- drove 60 percent of the gains of the S&P stock market index in 2023.23 Four -- Alibaba, Amazon, Google, and Microsoft -- deliver the majority of cloud services.24 These companies have captured the delivery of digital and physical goods and services. Everything involved with social media, cloud computing, groceries, and medicine is trapped in their flywheels, because the constellation of systems that previously put the brakes on corporate power, such as monopoly laws, labor unions, and news media, has been eroded. Product dependence and regulatory capture have further undermined the capacity of states to respond to the rise in corporate hard and soft power. Lock-in and other anticompetitive corporate behavior have prevented market mechanisms from working properly. As democracy falls into deeper crisis with each passing year, policy and culture are increasingly bent towards serving corporate interest. The illusion that business, government, and culture are siloed sustains this status quo.

    Our digitized global economy has made us all participants in the international data trade, however reluctantly. Though we are aware of the privacy invasions and social costs of digital platforms, we nevertheless participate in these systems because we feel as though we have no alternative -- which itself is partly the result of tech monopolies and the lack of competition.

    Now, the ascendance of AI is thrusting big data into a new phase and new conflicts with social contracts. The development of bigger, more powerful AI models means more demand for data. Again, massive wholesale extractions of culture are at the heart of these efforts.25 As AI researchers and artists Kate Crawford and Vladan Joler explain in the catalog to their exhibition Calculating Empires, AI developers require “the entire history of human knowledge and culture ... The current lawsuits over generative systems like GPT and Stable Diffusion highlight how completely dependent AI systems are on extracting, enclosing, and commodifying the entire history of cognitive and creative labor.”26

    Permitting internet companies to hack the systems in which culture is produced and circulates is a short-term trade-off that has proven to have devastating long-term consequences. When governments give tech companies unregulated access to our social and cultural infrastructure, the social contract becomes biased towards their profit. When we get immediate catharsis through sharing memes or engaging in internet flamewars, real protest is muzzled. We are increasing our collective socio-technical debt by ceding our social and cultural infrastructure to tech monopolies.

    Cultural expression is fundamental to what makes us human. It’s an impulse, innate to us as a species, and this impulse will continue to be a gold mine to tech companies. There is evidence that AI models trained on synthetic data -- data produced by other AI models rather than humans -- can corrupt these models, causing them to return false or nonsensical answers to queries.27 So as AI-produced data floods the internet, data that is guaranteed to have been derived from humans becomes more valuable. In this context, our human nature, compelling us to make and express culture, is the dream of digital capitalism. We become a perpetual motion machine churning out free data. Beholden to shareholders, these corporations see it as their fiduciary duty -- a moral imperative even -- to extract value from this cultural life.

    We are in a strange transition. The previous global order, in which states wielded ultimate authority, hasn’t quite died. At the same time, large corporations have stepped in to deliver some of the services abandoned by states, but at the price of privacy and civic well-being. Increasingly, corporations provide consistent, if not pleasant, economic and social organization. Something similar occurred during the Gilded Age in the US (1870s -- 1890s). But back then, the influence of robber barons was largely constrained to the geographies in which they operated, and their services (like the railroad) were not previously provided by states. In our current transitionary period, public life worldwide is being reimagined in accordance with corporate values. Amidst a tug-of-war between the old state-centric world and the emerging capital-centric world, there is a growing radicalism fueled partly by frustration over social and personal needs going unmet under a transnational order that is maximized for profit rather than public good.

    Culture is increasingly divorced from national identity in our globalized, fragmented world. On the positive side, this decoupling can make culture more inclusive of marginalized people. Other groups, however, may perceive this new status quo as a threat, especially those facing a loss of privilege. The rise of white Christian nationalism shows that the right still regards national identity and culture as crucial -- as potent tools in the struggle to build political power, often through anti-democratic means. This phenomenon shows that the separation of cultural identity from national identity doesn’t negate the latter. Instead, it creates new political realities and new orders of power.

    Nations issuing passports still behave as though they are the definitive arbiters of identity. But culture today -- particularly the multiverse of internet cultures -- exposes how this is increasingly untrue. With government discredited as an ultimate authority, and identity less and less connected to nationality, we can find a measure of hope for navigating the current transition in the fact that culture is never static. New forms of resistance are always emerging. But we must ask ourselves: Have the tech industry’s overwhelming surveillance powers rendered subversion impossible? Or does its scramble to gather all the world’s data offer new possibilities to hack the system?



    1. McKenzie Wark, A Hacker Manifesto (Harvard University Press, 2004), thesis 126. ↑

    2. Jon Katz, “Birth of a Digital Nation,” Wired, April 1, 1997. ↑

    3. Marcin Szczepanski, “Is Data the New Oil? Competition Issues in the Digital Economy,” European Parliamentary Research Service, January 2020. ↑

    4. Bruce Schneier, A Hacker’s Mind: How the Powerful Bend Society’s Rules, and How to Bend Them Back (W. W. Norton & Company, 2023). ↑

    5. Lucie Levine, “Was Modern Art Really a CIA Psy-Op?” JStor Daily, April 1, 2020. ↑

    6. Bruce Schneier, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World (W. W. Norton & Company, 2015). ↑

    7. Adam Mosseri, “Shedding More Light on How Instagram Works,” Instagram Blog, June 8, 2021; Ben Smith, “How TikTok Reads Your Mind,” New York Times, December 5, 2021. ↑

    8. Giacomo Figà Talamanca and Selene Arfini, “Through the Newsfeed Glass: Rethinking Filter Bubbles and Echo Chambers,” Philosophy & Technology 35, no. 1 (2022). ↑

    9. Zack Stanton, “How the ‘Culture War’ Could Break Democracy,” Politico, May 5, 2021. ↑

    10. Jason Johnson, “Inside the Failed, Utopian New Games Movement,” Kill Screen, October 25, 2013. ↑

    11. Fred Turner, “Taking the Whole Earth Digital,” chap. 4 in From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism (University of Chicago Press, 2006). ↑

    12. Kaare Ericksen, “The State of the Video Games Industry: A Special Report,” Variety, February 1, 2024. ↑

    13. Rosa Schwartzburg, “The US Military Is Embedded in the Gaming World. Its Target: Teen Recruits,” The Guardian, February 14, 2024; Scott Kuhn, “Soldiers Maintain Readiness Playing Video Games,” US Army, April 29, 2020; Katie Lange, “Military Esports: How Gaming Is Changing Recruitment & Morale,” US Department of Defense, December 13, 2022. ↑

    14. Shaun Waterman, “Growing Commercial SATCOM Raises Trust Issues for Pentagon,” Air & Space Forces Magazine, April 3, 2024. ↑

    15. Geoffrey A. Fowler, “Your Instagrams Are Training AI. There’s Little You Can Do About It,” Washington Post, September 27, 2023. ↑

    16. Zengyang Li, Paris Avgeriou, and Peng Liang, “A Systematic Mapping Study on Technical Debt and Its Management,” Journal of Systems and Software, December 2014. ↑

    17. David Streitfeld, “How the Media Industry Keeps Losing the Future,” New York Times, February 28, 2024. ↑

    18. “The End of the Social Network,” The Economist, February 1, 2024; Ollie Davies, “What Happens If Teens Get Their News From TikTok?” The Guardian, February 22, 2023. ↑

    19. Eric Jaffe, “Quantifying the Death of the Classic American Main Street,” Medium, March 16, 2018. ↑

    20. Julia Halprin, “The Hangover from the Museum Party: Institutions in the US Are Facing a Funding Crisis,” Art Newspaper, January 19, 2024. ↑

    21. Quoted in Pete Buttigieg, “The Key to Happiness Might Be as Simple as a Library or Park,” New York Times, September 14, 2018. ↑

    22. Jeffery C. Mays and Dana Rubinstein, “Mayor Adams Walks Back Budget Cuts Many Saw as Unnecessary,” New York Times, April 24, 2024. ↑

    23. Karl Russell and Joe Rennison, “These Seven Tech Stocks Are Driving the Market,” New York Times, January 22, 2024. ↑

    24. Ian Bremmer, “How Big Tech Will Reshape the Global Order,” Foreign Affairs, October 19, 2021. ↑

    25. Nathan Sanders and Bruce Schneier, “How the ‘Frontier’ Became the Slogan for Uncontrolled AI,” Jacobin, February 27, 2024. ↑

    26. Kate Crawford and Vladan Joler, Calculating Empires: A Genealogy of Technology and Power, 1500 -- 2025 (Fondazione Prada, 2023), 9. Exhibition catalog. ↑

    27. Rahul Rao, “AI Generated Data Can Poison Future AI Models,” Scientific American, July 28, 2023. ↑

    This essay was written with Kim Córdova, and was originally published in e-flux.

    ** *** ***** ******* *********** *************
    New Blog Moderation Policy

    [2024.06.19] There has been a lot of toxicity in the comments section of this blog. Recently, we’re having to delete more and more comments. Not just spam and off-topic comments, but also sniping and personal attacks. It’s gotten so bad that I need to do something.

    My options are limited because I’m just one person, and this website is free, ad-free, and anonymous. I pay for a part-time moderator out of pocket; he isn’t able to constantly monitor comments. And I’m unwilling to require verified accounts.

    So starting now, we will be pre-screening comments and letting through only those that 1) are on topic, 2) contribute to the discussion, and 3) don’t attack or insult anyone. The standard is not going to be “well, I guess this doesn’t technically quite break a rule,” but “is this actually contributing.”

    Obviously, this is a subjective standard; sometimes good comments will accidentally get thrown out. And the delayed nature of the screening will result in less conversation and more disjointed comments. Those are costs, and they’re significant ones. But something has to be done, and I would like to try this before turning off all comments.

    I am going to disable comments on the weekly squid posts. Topicality is too murky on an open thread, and these posts are especially hard to keep on top of.

    Comments will be reviewed and published when possible, usually in the morning and evening. Sometimes it will take longer. Again, the moderator is part time, so please be patient.

    I apologize to all those who have just kept commenting reasonably all along. But I’ve received three e-mails in the past couple of months about people who have given up on comments because of the toxicity.

    So let’s see if this works. I’ve been able to maintain an anonymous comment section on this blog for almost twenty years. It’s kind of astounding that it’s worked as long as it has. Maybe its time is up.

    ** *** ***** ******* *********** *************
    Recovering Public Keys from Signatures

    [2024.06.20] Interesting summary of various ways to derive the public key from digitally signed files.

    Normally, with a signature scheme, you have the public key and want to know whether a given signature is valid. But what if we instead have a message and a signature, assume the signature is valid, and want to know which public key signed it? A rather delightful property if you want to attack anonymity in some proposed “everybody just uses cryptographic signatures for everything” scheme.
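
    For ECDSA in particular, the recovery is only a few lines of code. A minimal sketch, assuming the third-party python-ecdsa package (the curve, message, and matching step below are illustrative choices; other signature schemes covered in the summary behave differently):

        # Sketch: given only a message and an ECDSA signature, enumerate the
        # candidate public keys that could have produced it, then check a guess.
        from hashlib import sha256
        from ecdsa import SigningKey, VerifyingKey, NIST256p

        # A signer whose key we pretend not to know.
        sk = SigningKey.generate(curve=NIST256p)
        real_vk = sk.get_verifying_key()
        msg = b"everybody just signs everything"
        sig = sk.sign(msg, hashfunc=sha256)

        # Each ECDSA signature yields a small set (usually two) of
        # mathematically valid public keys.
        candidates = VerifyingKey.from_public_key_recovery(
            sig, msg, curve=NIST256p, hashfunc=sha256)

        # Linking a signature to an identity is then just matching a candidate
        # against a known or suspected key.
        print(any(c.to_string() == real_vk.to_string() for c in candidates))  # True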

    ** *** ***** ******* *********** *************
    Ross Anderson’s Memorial Service

    [2024.06.21] The memorial service for Ross Anderson will be held on Saturday, at 2:00 PM BST. People can attend remotely on Zoom. (The passcode is “L3954FrrEF”.)

    ** *** ***** ******* *********** *************
    Paul Nakasone Joins OpenAI’s Board of Directors

    [2024.06.24] Former NSA Director Paul Nakasone has joined the board of OpenAI.

    ** *** ***** ******* *********** *************
    Breaking the M-209

    [2024.06.25] Interesting paper about a German cryptanalysis machine that helped break the US M-209 mechanical ciphering machine.

    The paper contains a good description of how the M-209 works.

    EDITED TO ADD (7/14): M-209 simulation.

    ** *** ***** ******* *********** *************
    The US Is Banning Kaspersky

    [2024.06.26] This move has been coming for a long time.

    The Biden administration on Thursday said it’s banning the company from selling its products to new US-based customers starting on July 20, with the company only allowed to provide software updates to existing customers through September 29. The ban -- the first such action under authorities given to the Commerce Department in 2019 -- follows years of warnings from the US intelligence community about Kaspersky being a national security threat because Moscow could allegedly commandeer its all-seeing antivirus software to spy on its customers.

    ** *** ***** ******* *********** *************
    Security Analysis of the EU’s Digital Wallet

    [2024.06.27] A group of cryptographers have analyzed the eIDAS 2.0 regulation (electronic identification and trust services) that defines the new EU Digital Identity Wallet.

    ** *** ***** ******* *********** *************
    James Bamford on Section 702 Extension

    [2024.06.28] Longtime NSA-watcher James Bamford has a long article on the reauthorization of Section 702 of the Foreign Intelligence Surveillance Act (FISA).

    ** *** ***** ******* *********** *************
    Model Extraction from Neural Networks

    [2024.07.01] A new paper, “Polynomial Time Cryptanalytic Extraction of Neural Network Models,” by Adi Shamir and others, uses ideas from differential cryptanalysis to extract the weights inside a neural network using specific queries and their results. This is much more theoretical than practical, but it’s a really interesting result.

    Abstract:

    Billions of dollars and countless GPU hours are currently spent on training Deep Neural Networks (DNNs) for a variety of tasks. Thus, it is essential to determine the difficulty of extracting all the parameters of such neural networks when given access to their black-box implementations. Many versions of this problem have been studied over the last 30 years, and the best current attack on ReLU-based deep neural networks was presented at Crypto’20 by Carlini, Jagielski, and Mironov. It resembles a differential chosen plaintext attack on a cryptosystem, which has a secret key embedded in its black-box implementation and requires a polynomial number of queries but an exponential amount of time (as a function of the number of neurons). In this paper, we improve this attack by developing several new techniques that enable us to extract with arbitrarily high precision all the real-valued parameters of a ReLU-based DNN using a polynomial number of queries and a polynomial amount of time. We demonstrate its practical efficiency by applying it to a full-sized neural network for classifying the CIFAR10 dataset, which has 3072 inputs, 8 hidden layers with 256 neurons each, and about 1.2 million neuronal parameters. An attack following the approach by Carlini et al. requires an exhaustive search over 2^256 possibilities. Our attack replaces this with our new techniques, which require only 30 minutes on a 256-core computer.
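
    The paper’s techniques are far more involved, but the underlying observation -- a ReLU network is piecewise linear, so well-chosen black-box queries leak its parameters -- can be shown in a toy sketch. Everything below (the tiny network, the finite-difference probing) is a simplification for illustration, not the authors’ algorithm:

        # Toy sketch: within one activation region, a ReLU network is exactly
        # linear, so finite-difference queries recover its local linear map --
        # one building block of query-based parameter extraction.
        import numpy as np

        rng = np.random.default_rng(0)
        W = rng.normal(size=(4, 3))   # hidden weights (secret)
        b = rng.normal(size=4)        # hidden biases (secret)
        a = rng.normal(size=4)        # output weights (secret)

        def blackbox(x):
            """The only access the attacker has: query x, observe f(x)."""
            return a @ np.maximum(W @ x + b, 0.0)

        x0 = rng.normal(size=3)       # probe point
        eps = 1e-6
        grad_est = np.array([         # gradient of f at x0, from queries alone
            (blackbox(x0 + eps * e) - blackbox(x0 - eps * e)) / (2 * eps)
            for e in np.eye(3)
        ])

        # Ground truth in this region: W^T diag(s) a, where s marks which
        # ReLUs are active at x0.  The attacker's estimate matches it.
        s = (W @ x0 + b > 0).astype(float)
        print(np.allclose(grad_est, W.T @ (s * a), atol=1e-4))  # True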

    ** *** ***** ******* *********** *************
    Public Surveillance of Bars

    [2024.07.02] This article about an app that lets people remotely view bars to see if they’re crowded or not is filled with commentary -- on both sides -- about privacy and openness.

    ** *** ***** ******* *********** *************
    Upcoming Book on AI and Democracy

    [2024.07.02] If you’ve been reading my blog, you’ve noticed that I have written a lot about AI and democracy, mostly with my co-author Nathan Sanders. I am pleased to announce that we’re writing a book on the topic.

    This isn’t a book about deep fakes, or misinformation. This is a book about what happens when AI writes laws, adjudicates disputes, audits bureaucratic actions, assists in political strategy, and advises citizens on what candidates and issues to support. It’s a book that tries to look into what an AI-assisted democratic system might look like, and then at how to best ensure that we make use of the good parts while avoiding the bad parts.

    This is what I talked about in my RSA Conference speech last month, which you can both watch and read. (You can also read earlier attempts at this idea.)

    The book will be published by MIT Press sometime in fall 2025, with an open-access digital version available a year after that. (It really can’t be published earlier. Nothing published this year will rise above the noise of the US presidential election, and anything published next spring will have to go to press without knowing the results of that election.)

    Right now, the book is organized into six parts:

    AI-Assisted Politicians

    AI-Assisted Legislators

    The AI-Assisted Administration

    The AI-Assisted Legal System

    AI-Assisted Citizens

    Getting the Future We Want

    It’s too early to share a more detailed table of contents, but I would like help thinking about titles. Below is my current list of brainstorming ideas: both titles and subtitles. Please mix and match, or suggest your own in the comments. No idea is too far afield, because anything can spark more ideas.

    Titles:

    AI and Democracy

    Democracy with AI

    Democracy after AI

    Democratia ex Machina

    Democracy ex Machina

    E Pluribus, Machina

    Democracy and the Machines

    Democracy with Machines

    Building Democracy with Machines

    Democracy in the Loop

    We the People + AI

    Artificial Democracy

    AI Enhanced Democracy

    The State of AI

    Citizen AI

    Trusting the Bots

    Trusting the Computer

    Trusting the Machine

    The End of the Beginning

    Sharing Power

    Better Run

    Speed, Scale, Scope, and Sophistication

    The New Model of Governance

    Model Citizen

    Artificial Individualism

    Subtitles:

    How AI Upsets the Power Balances of Democracy

    Twenty (or So) Ways AI will Change Democracy

    Reimagining Democracy for the Age of AI

    Who Wins and Loses

    How Democracy Thrives in an AI-Enhanced World

    Ensuring that AI Enhances Democracy and Doesn’t Destroy It

    How AI Will Change Politics, Legislating, Bureaucracy, Courtrooms, and Citizens

    AI’s Transformation of Government, Citizenship, and Everything In-Between

    Remaking Democracy, from Voting to Legislating to Waiting in Line

    How to Make Democracy Work for People in an AI Future

    How AI Will Totally Reshape Democracies and Democratic Institutions

    Who Wins and Loses when AI Governs

    How to Win and Not Lose With AI as a Partner

    AI’s Transformation of Democracy, for Better and for Worse

    How AI Can Improve Society and Not Destroy It

    How AI Can Improve Society and Not Subvert It

    Of the People, for the People, with a Whole lot of AI

    How AI Will Reshape Democracy

    How the AI Revolution Will Reshape Democracy

    Combinations:

    Imagining a Thriving Democracy in the Age of AI: How Technology Enhances Democratic Ideals and Nurtures a Society that Serves its People

    Making Model Citizens: How to Put AI to Use to Help Democracy

    Modeling Citizenship: Who Wins and Who Loses when AI Transforms Democracy

    A Model for Government: Democracy with AI, and How to Make it Work for Us

    AI of, By, and for the People: How Artificial Intelligence will reshape Democracy

    The (AI) Political Revolution: Speed, Scale, Scope, Sophistication, and our Democracy

    Speed, Scale, Scope, Sophistication: The AI Democratic Revolution

    The Artificial Political Revolution: X Ways AI will Change Democracy...Forever

    EDITED TO ADD (7/10): More options:

    The Silicon Realignment: The Future of Political Power in a Digital World

    Political Machines

    EveryTHING is political

    ** *** ***** ******* *********** *************
    New Open SSH Vulnerability

    [2024.07.03] It’s a serious one:

    The vulnerability, which is a signal handler race condition in OpenSSH’s server (sshd), allows unauthenticated remote code execution (RCE) as root on glibc-based Linux systems; that presents a significant security risk. This race condition affects sshd in its default configuration.

    [...]

    This vulnerability, if exploited, could lead to full system compromise where an attacker can execute arbitrary code with the highest privileges, resulting in a complete system takeover, installation of malware, data manipulation, and the creation of backdoors for persistent access. It could facilitate network propagation, allowing attackers to use a compromised system as a foothold to traverse and exploit other vulnerable systems within the organization.

    Moreover, gaining root access would enable attackers to bypass critical security mechanisms such as firewalls, intrusion detection systems, and logging mechanisms, further obscuring their activities. This could also result in significant data breaches and leakage, giving attackers access to all data stored on the system, including sensitive or proprietary information that could be stolen or publicly disclosed.

    This vulnerability is challenging to exploit due to its remote race condition nature, requiring multiple attempts for a successful attack. This can cause memory corruption and necessitate overcoming Address Space Layout Randomization (ASLR). Advancements in deep learning may significantly increase the exploitation rate, potentially providing attackers with a substantial advantage in leveraging such security flaws.

    The details. News articles. CVE data. Slashdot thread.

    ** *** ***** ******* *********** *************
    On the CSRB’s Non-Investigation of the SolarWinds Attack

    [2024.07.08] ProPublica has a long investigative article on how the Cyber Safety Review Board failed to investigate the SolarWinds attack, and specifically Microsoft’s culpability, even though the board was directed by President Biden to do so.

    ** *** ***** ******* *********** *************
    Reverse-Engineering Ticketmaster’s Barcode System

    [2024.07.09] Interesting:

    By reverse-engineering how Ticketmaster and AXS actually make their electronic tickets, scalpers have essentially figured out how to regenerate specific, genuine tickets that they have legally purchased from scratch onto infrastructure that they control. In doing so, they are removing the anti-scalping restrictions put on the tickets by Ticketmaster and AXS.

    EDITED TO ADD (7/14): More information.
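
    Reverse-engineering write-ups of these rotating barcodes describe them as time-based one-time passwords (TOTPs) generated from secrets provisioned to the official app, plus a static bearer token. If that description is accurate, then once the secrets are extracted, regenerating a “genuine” barcode on infrastructure the scalpers control is conceptually this simple (a sketch assuming the pyotp package; every name and value below is an invented placeholder, not anything taken from a real ticket):

        # Sketch: rebuild a rotating ticket barcode off-platform, assuming the
        # scheme is "static bearer token + TOTPs over extracted secrets."
        import time
        import pyotp

        bearer_token = "TICKET-BEARER-TOKEN"      # static; identifies the ticket
        event_secret = pyotp.random_base32()      # stand-in for an extracted secret
        customer_secret = pyotp.random_base32()   # stand-in for an extracted secret

        def barcode_payload():
            """Recreate the time-varying payload a venue scanner would accept."""
            ek = pyotp.TOTP(event_secret, interval=15).now()
            ck = pyotp.TOTP(customer_secret, interval=15).now()
            return f"{bearer_token}::{ek}::{ck}::{int(time.time())}"

        print(barcode_payload())   # changes every 15 seconds, on any device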

    ** *** ***** ******* *********** *************
    RADIUS Vulnerability

    [2024.07.10] New attack against the RADIUS authentication protocol:

    The Blast-RADIUS attack allows a man-in-the-middle attacker between the RADIUS client and server to forge a valid protocol accept message in response to a failed authentication request. This forgery could give the attacker access to network devices and services without the attacker guessing or brute forcing passwords or shared secrets. The attacker does not learn user credentials.

    This is one of those vulnerabilities that comes with a cool name, its own website, and a logo.

    News article. Research paper.
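
    The protocol weakness behind the attack is old and simple to state: per RFC 2865, the only thing authenticating a RADIUS server’s reply is an MD5 hash over the packet and the shared secret, so an attacker who can engineer MD5 collisions can get a forged Access-Accept past the client without ever learning that secret. A sketch of the computation (packet field values are invented for illustration):

        # Sketch: how a RADIUS server authenticates its reply (RFC 2865).  This
        # MD5 hash is the packet's only integrity protection, which is what
        # Blast-RADIUS exploits.  Field values are invented for illustration.
        import hashlib
        import struct

        shared_secret = b"testing123"     # known to RADIUS client and server
        request_authenticator = bytes(16) # the 16 random bytes from the
                                          # Access-Request (zeros for the sketch)

        def response_authenticator(code, identifier, attributes):
            """MD5(Code + ID + Length + RequestAuth + Attributes + Secret)."""
            length = 20 + len(attributes)  # 20-byte header plus attributes
            header = struct.pack("!BBH", code, identifier, length)
            msg = header + request_authenticator + attributes + shared_secret
            return hashlib.md5(msg).digest()

        ACCESS_ACCEPT, ACCESS_REJECT = 2, 3
        print(response_authenticator(ACCESS_ACCEPT, 1, b"").hex())
        print(response_authenticator(ACCESS_REJECT, 1, b"").hex())
        # A man-in-the-middle who can find MD5 collisions can make a forged
        # Accept hash to a value the client will compute as valid.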

    ** *** ***** ******* *********** *************
    Apple Is Alerting iPhone Users of Spyware Attacks

    [2024.07.11] Not a lot of details:

    Apple has issued a new round of threat notifications to iPhone users across 98 countries, warning them of potential mercenary spyware attacks. It’s the second such alert campaign from the company this year, following a similar notification sent to users in 92 nations in April.

    ** *** ***** ******* *********** *************
    The NSA Has a Long-Lost Lecture by Adm. Grace Hopper

    [2024.07.12] The NSA has a video recording of a 1982 lecture by Adm. Grace Hopper titled “Future Possibilities: Data, Hardware, Software, and People.” The agency is (so far) refusing to release it.

    Basically, the recording is in an obscure video format. People at the NSA can’t easily watch it, so they can’t redact it. So they won’t do anything.

    With digital obsolescence threatening many early technological formats, the dilemma surrounding Admiral Hopper’s lecture underscores the critical need for and challenge of digital preservation. This challenge transcends the confines of NSA’s operational scope. It is our shared obligation to safeguard such pivotal elements of our nation’s history, ensuring they remain within reach of future generations. While the stewardship of these recordings may extend beyond the NSA’s typical purview, they are undeniably a part of America’s national heritage.

    Surely we can put pressure on them somehow.

    ** *** ***** ******* *********** *************
    Upcoming Speaking Engagements

    [2024.07.14] This is a current list of where and when I am scheduled to speak:

    I’m speaking -- along with John Bruce, the CEO and Co-founder of Inrupt -- at the 18th Annual CDOIQ Symposium in Cambridge, Massachusetts, USA. The symposium runs from July 16 through 18, 2024, and my session is on Tuesday, July 16 at 3:15 PM. The symposium will also be livestreamed through the Whova platform.
    I’m speaking on “Reimagining Democracy in the Age of AI” at the Bozeman Library in Bozeman, Montana, USA, July 18, 2024. The event will also be available via Zoom.
    I’m speaking at the TEDxBillings Democracy Event in Billings, Montana, USA, on July 19, 2024.

    The list is maintained on this page.

    ** *** ***** ******* *********** *************

    Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram's web page.

    You can also read these articles on my blog, Schneier on Security.

    Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

    Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books -- including his latest, A Hacker’s Mind -- as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.

    Copyright © 2024 by Bruce Schneier.

    ** *** ***** ******* *********** *************