Hey, good morning. Thanks for coming out. Let's talk about the tax system as an IT security problem. It has code. It's just a series of algorithms that take as inputs tax information for the year and produce some outputs, the amount of tax owed. It's incredibly complex code. It consists of laws, government laws, tax authority rulings, judicial decisions, lawyer opinions. There are bugs in the code. There are mistakes in how the law is written, how it's interpreted. Some of those bugs are vulnerabilities, and attackers look for exploitable vulnerabilities. We call them tax loopholes. Right? Attackers exploit these vulnerabilities. We call it tax avoidance. And vulnerabilities, loopholes, are everywhere in the tax code. And, actually, there are thousands of black hat security researchers that examine every line of the tax code looking for vulnerabilities. We call them tax attorneys.

Some of these bugs are mistakes. In the 2017 tax law, there was an actual mistake, a typo, that categorized military death benefits as earned income, and as a result, surviving family members got unexpected tax bills of $10,000 or more. Some of these are emergent properties. There is the, I'm going to read it, the double Irish with a Dutch sandwich. This is the trick that lets U.S. companies like Google and Apple avoid paying U.S. tax, and, actually, Google is possibly being prosecuted for that right now. Some of these vulnerabilities are deliberately created in the tax code by lobbyists trying to gain some advantage for their industry. Sometimes a legislator knows about it; sometimes they don't. I guess this is analogous to a government sneaking a programmer into Microsoft to drop a vulnerability in Windows.

All right, so this is my big idea. We here in our community have developed some very effective techniques to deal with code, to deal with tech. We started by examining purely technical systems. Increasingly, we study sociotechnical systems. Can our expertise in IT security transfer to broader social systems like the tax code, like the system we use to choose our elected officials, like the market economy? Is our way of thinking, our analytical framework, our procedural mindset valuable in this broader context? Can we hack society? And, actually, more importantly, can we help secure the systems that make up society?

So back to the tax code. We know how to fix this problem before the code is deployed: secure development processes, source code audits. How do we do that for the tax code? Like, who does it? Who pays for it? And what about those deliberate vulnerabilities? We know how to fix the problem with running code: vulnerability finding by white hat researchers, bug bounties, patching. How do you patch the tax code? How do you create laws and policies to implement the notion of patching? I mean, right now passing tax legislation is a big deal politically. And here's the big question: Can we design a security system to deal with bugs and vulnerabilities in the tax code and then build procedures to implement it?

So security technologists have a certain way of looking at the world. It's systems thinking with an adversarial mindset. I call it a hacker mindset. We think about how systems fail, how they can be made to fail, and we think about everything in this way.
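And the tax code really is code in this sense. Here's a toy sketch -- invented rates and category names, nothing resembling the actual statute -- of how a single mis-mapped income category, like that 2017 typo, becomes a surprise tax bill for the people the code runs on:

```python
# A toy "tax code as code" calculator. Rates and categories are
# invented for illustration; the real statute is vastly more complex.

EARNED_RATE = 0.37       # hypothetical rate applied to earned income
BENEFIT_RATE = 0.0       # what the drafters intended for survivor benefits

RATES = {
    "wages": EARNED_RATE,
    # BUG: the drafting "typo" -- death benefits mapped to the
    # earned-income rate instead of BENEFIT_RATE, and no review caught it.
    "military_death_benefit": EARNED_RATE,
}

def tax_owed(income_items):
    """Inputs: (category, amount) pairs for the year. Output: tax owed."""
    return sum(amount * RATES.get(category, BENEFIT_RATE)
               for category, amount in income_items)

# A surviving family member's return: a $30,000 benefit that was
# supposed to be untaxed instead produces a five-figure bill.
print(tax_owed([("military_death_benefit", 30_000)]))  # 11100.0
```

One wrong mapping, and the system produces an output nobody intended. That's a bug in deployed code.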
And we've developed what I think is a unique skill set: understanding technical systems with human dimensions, understanding sociotechnical systems, thinking about these systems with an adversarial mindset, against adaptive malicious adversaries, understanding the security of complex adaptive systems, and understanding iterative security solutions. This way of thinking generalizes, and it's my contention that the worlds of tech and policy are converging, that the tax code is now becoming actual code, and that where once there were purely technical systems, there are increasingly sociotechnical systems. And as society's systems become more complex, as the world looks more like a computer, our security skills become more broadly applicable.

So that's basically my talk. It's preliminary work. I have a lot of examples and a lot of detail. I'm going to throw a bunch of stuff at you. And I want to convince you that we have this unique framework for solving security problems, and there are new domains we can apply it to. I guess I want to put a caveat here in the beginning. I don't want to say that tech can fix everything. This isn't technological solutionism. This isn't Silicon Valley saving the world. This is a way that I think we can blend tech and policy in a new way.

All right. So to do this, we need to broaden some definitions. Let's talk about a hack. A hack is something a system allows but is unwanted and unanticipated by the system designers. More than that, it is an exploitation of the system, something desired by the attacker at the expense of some other part of the system. So in his memoirs, Edward Snowden writes that the U.S. intelligence community hacked the Constitution in order to justify mass surveillance. We can argue whether that's true or not, but everyone here intuitively knows what he means by that.

Other examples of hacks: So lack of standing is a hack the NSA used to avoid litigating the constitutionality of their actions. The vulnerability, of course, is that there's a body of law out of reach of conventional judicial review. Using the All Writs Act against Apple as the FBI did in 2016 is a hack. Maybe you think it's a good hack. All hacks aren't bad. But it is definitely an unintended and unanticipated use of a 1789 law.

So this all makes sense to me in my head. And my guess is it makes some sense to you, but is it useful? I think it is. I think this way of looking at the world can usefully inform policy decisions.

Let's talk about hacking the legislative process. Bills now are so complicated that no one who votes on them truly understands them. You just add one sentence to a bill, it makes references to other laws, and the combination results in some specific outcome unknown to most everyone. And there's a whole industry dedicated to engineering these unanticipated consequences. It sounds like spaghetti code.

We can think of VC funding as a hack of market economics. So markets are based on knowledgeable buyers making decisions amongst competing products. The pressure to sell to those buyers depresses prices and incents innovation. That's basically the mechanic of the markets. VC funding hacks that process. The external injection of money means that companies don't have to compete in the traditional manner. The best strategy for a start-up is to take enormous risk to be successful, because otherwise they're dead, and they can destroy without providing viable alternatives as long as they have that external funding source to do it. And this is a vulnerability in the market system, which makes Uber a hack. Right?
VC funding means they can lose $0.41 on every dollar until they destroy the taxi industry. WeWork is a hack. I guess was a hack. Are they still around? Their business model loses $2.6 billion a year. We could look at money in politics as a similar example. The injection of private cash hacks the democratic process.

So think about markets more generally. They're really based on three things: information, choice, and agency. And they are all under attack. Complex product offerings obscure information. Just try to compare prices of cell phone plans or credit cards. Monopolies remove our ability to choose. Products and services we can't reasonably live without deprive us of agency. There's probably an entire talk on this.

So metaphors matter here. Most people don't consider our democratic process or the market as sociotechnical systems. And I think this is similar to us only thinking in terms of tech systems. Remember 15 years ago when we thought our security domain ended at the keyboard and chair? Today we know that all computer systems are actually complex sociotechnical systems, that they are embedded in -- systems people say nested in -- broader social systems. And it turns out all modern systems are like that, too; it's just that the balance between socio and technical is different.

There's a difference between determinism and non-determinism that I think matters here. A bug in software is deterministic. Who gets elected, world events, social trends, those are non-deterministic. Users are non-deterministic. Hackers are non-deterministic. Determinism is a majority condition of computer systems. We in security deal with non-determinism all the time, and it's a majority condition in social systems. I think we need to generalize non-determinism better, both in our systems and in social systems.

Also, what do we actually mean by a hack? In our world in computer security, we tend to work with conventional systems created for some purpose by someone. Social systems aren't really like that. They evolve. New purposes emerge. A hack can be an emergent property; it's not clear whether they're good or bad. There's a lot of perspective that matters here. If VC funding is simply a way for the wealthy to invest their money, then it's the market working as intended. And it's not obvious to me how to handle this generalization.

Another concept that generalizes: changes in the threat model. So we know how this works. A system is created for some particular threat model and then things change. Maybe its uses change, technology changes, circumstance changes, or there's just a change in scale that causes a change in kind. So the old security assumptions are no longer true. The threat model has changed, but no one notices it, so the system kind of slides into insecurity. I've heard political scientists call this concept drift.

So let's talk about a change in the threat model. Too big to fail. So this is a concept that some corporations are so big and so important to the functioning of our society that they can't be allowed to fail. In 2008, the U.S. government bailed out several major banks to the tune of $700 billion because of their very bad business decisions, because they were too big to fail. The fear was if the government didn't do that, the banks would collapse and take the economy with them. The banks are literally too big to be allowed to fail. Not the first time. In 1979, the U.S. government bailed out Chrysler. Back then, it was national security. They were building the M1 Abrams tank.
It was jobs, saving 700,000 jobs, saving suppliers and the whole ecosystem, and there was an auto trade war going on with Japan at the time.

So this is an emergent vulnerability. When the mechanisms of the market economy were invented, nothing could ever be that big. No one could conceive of anything being that big. Our economic system is based on an open market and relies on the fact that the cost of failing is paid by the entity failing, and that guides behavior. That doesn't work if you're too big to fail. A company that's trading off private gains and public losses is not going to make the same decisions, and this perturbs market economics.

We can look at threat model changes in our political system. Election security. The U.S. system of securing elections is basically based on representatives of the two opposing parties sitting together and making sure neither of them does anything bad. That made perfect sense against the threats in the mid-1800s. It is useless against modern threats against elections. The apportioning of representatives: gerrymandering is much more effective with modern surveillance systems. Like markets, democracy is based on information, choice, and agency, and all three are under attack.

So another thing we need to generalize is who the attackers and defenders are. So we know that the terms attacker and defender don't carry moral weight. All security systems are embedded in some broader social context. We could have the police attacking and criminals defending. We could have criminals attacking and the police defending. To us, it's basically all the same tech. But normally our attackers and defenders are in different groups. This isn't true with the tax code or political gerrymandering. The attackers are members of the same society that's defending. The defenders are society as a whole, and the attackers are some subset of them. Or worse, it's two groups trying to game the same system, each trying to immunize the system against attacks by the other group while leaving it vulnerable to their own attacks. And you can see this in voting rights, where the different groups try to attack and defend at the same time. It's more about abstract principles, notions of equality, justice, and fairness. And this gets back to our definition of the word hack. When a lobbyist gets a law passed, have they hacked the system, or are they just using it as intended?

All right. Some more examples. Let's talk about hacks of cognitive systems. Remember the security adage that script kiddies hack computers while smart attackers hack people? Lots of attackers hack people. Advertising is a hack of our cognitive system of choice. It's always been psychological; now it's scientific. Now it's targeted. Lots of people have written about modern behavioral advertising and how it affects our ability to rationally choose. It feels like a hack to me. And kind of all of my market and democracy examples really bubble up to persuasion as a hack. Social media hacks our attention by manufacturing outrage, by being addictive. And AI and robotics are going to hack our cognitive systems, because we all have a lot of cognitive shortcuts. Two dots over a line is a face; a face is a creature; language indicates intelligence, emotion, intention, and so on. These are all really reasonable cognitive shortcuts for the environment we evolved in, and they will all fail with artificial people-like systems.

All right. So this is a lot of examples, but I really want to give you a feel for sort of how I'm thinking about this. Let me talk about one thing in a little more detail.
So, last fall, I started using computer security techniques to study propaganda and misinformation. I did this work with political scientist Henry Farrell at George Washington University. Here's our thinking: Democracy can be thought of as an information system, and we're using that to understand the current waves of information attacks, specifically this question: How is it that the same disinformation campaigns that act as a stabilizing influence in a country like Russia can be destabilizing in the United States? And our answer is that autocracies and democracies work differently as information systems.

So let me explain. There are two types of knowledge that society uses to solve political problems. The first is what I call common political knowledge. That's information that society broadly agrees on. It's things like who the rulers are, how they're chosen, how government functions. That's common political knowledge. Then there is contested political knowledge, and that's the stuff we disagree about. So it's things like how much of a role should our government play in our economy? What sorts of regulations are beneficial and what are harmful? What should the tax rates be? That's the stuff we disagree about. That's contested political knowledge.

So democracies and autocracies have different needs for common and contested political knowledge. Democracies draw on disagreements within their populations to solve problems. That's how we work. But in order for that to work, there needs to be common political knowledge on how government functions and how political leaders are chosen. All right? We have to know how elections work so we can campaign for our side. And through that process, we solve political problems. In an autocracy, you need common political knowledge over who is in charge, but they tend to suppress other common political knowledge about how the government is actually working, about other political movements and their support. They benefit from those things being contested.

So that difference in information usage leads to a difference in threat models, which leads to a difference in vulnerabilities. Authoritarian regimes are vulnerable to information attacks that challenge their monopoly on common political knowledge. That is why an open internet is so dangerous to an autocracy. Democracies are vulnerable to information attacks that turn common political knowledge into contested political knowledge, which is why you're seeing information attacks in the United States and Europe that try to cast doubt on the fairness of elections, the fairness of the police and courts, the fairness of the Census. The same information attack increases the stability of one regime and decreases the stability of another.

Here's another way of saying this: There is something in political science called the dictator's dilemma, and it kind of goes like this. As a dictator, you need accurate information about how your country is running, but that accurate information is also dangerous because it tells everybody how not well your country is running. So you're always trying to balance this need for information with this need to suppress the information. There is a corresponding democracy's dilemma, and it's this: The same open flows of information that are necessary for democracy to function are also potential attack vectors. This feels like a useful way of thinking about propaganda, and it's something we are continuing to develop.
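You can sketch this asymmetry in a few lines of code. This is a toy model -- the facts, the labels, and the "stability" score are all invented for illustration, not a real political-science metric -- but it shows how the same attack, contesting a piece of common knowledge, cuts in opposite directions:

```python
# A toy model of regimes as information systems. The facts, labels,
# and "stability" score are schematic -- illustration, not measurement.

DEMOCRACY_NEEDS_COMMON = {"how elections work", "courts are fair"}
AUTOCRACY_NEEDS_COMMON = {"who is in charge"}
AUTOCRACY_NEEDS_CONTESTED = {"how well the country is run"}

def stability(regime, common_knowledge):
    """Count how many of a regime's informational needs are met."""
    if regime == "democracy":
        return sum(f in common_knowledge for f in DEMOCRACY_NEEDS_COMMON)
    met = sum(f in common_knowledge for f in AUTOCRACY_NEEDS_COMMON)
    met += sum(f not in common_knowledge for f in AUTOCRACY_NEEDS_CONTESTED)
    return met

def information_attack(common_knowledge, fact):
    """Turn a piece of common political knowledge into contested knowledge."""
    return common_knowledge - {fact}

common = {"how elections work", "courts are fair",
          "who is in charge", "how well the country is run"}

for fact in ("courts are fair", "how well the country is run"):
    after = information_attack(common, fact)
    print(f"contest {fact!r}: "
          f"democracy {stability('democracy', common)}->{stability('democracy', after)}, "
          f"autocracy {stability('autocracy', common)}->{stability('autocracy', after)}")
```

Contesting the fairness of courts hurts the democracy and leaves the autocracy alone; contesting how well the country is run actually helps the autocrat. Same attack, opposite effects.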
So let's hack some other cognitive systems. Fear. I've written, years ago, that our sense of fear is optimized for living in small family groups in the East African highlands in 100,000 BC and not well designed for 2020 San Francisco. Terrorism directly targets our cognitive shortcuts about fear. It's terrifying, vivid, spectacular, random. It's basically tailor-made for us to exaggerate the risk and overreact. Right?

Trust. Our intuitions are based on trusting individuals, peer to peer. Trusting organizations, brands -- that's not what we're used to. And this can be misused by others to manipulate us. We naturally trust authority. Something in print is an authority. "The computer said so" is an authority. Lots of examples of those trust heuristics being attacked. You can even think of junk food as hacking our biological systems of food desirability, because those systems are tuned to our 100,000-year-old diet, not to modern processed food production. The change in the threat model has led to a vulnerability.

I think any industry that has been upended by technology is worth examining from this perspective. Our system for choosing elected officials -- not voting specifically, but the election process in general -- the news industry, distance learning and higher education. Any social system that has slipped into complexity is worthy of examination. The tech industry, of course, the media industry, financial markets. In all of these cases, differences in degree lead to differences in kind, and they have security ramifications. We know this is true for mass surveillance. I think it's true for a lot of other things as well.

The ability of people to coordinate on the internet has changed the nature of attack. Remember the great -- I don't know if it's great -- story of Microsoft's chatbot Tay? Turned into a racist, misogynistic Nazi in less than 24 hours by a coordinated attack by 4chan. More recently, the people running the Democratic caucuses in Iowa didn't realize that publicizing their help number would leave them vulnerable to a denial-of-service attack. We have moved in a lot of places from good-faith systems to ones where people and institutions behave strategically. And security against that stuff is what we're good at.

I think power matters here. All of these hacks are about rearranging power, just as cryptography is about rearranging power. In her great book Between Truth and Power, law professor Julie E. Cohen wrote that in the realm of government, power interprets regulation as damage and routes around it. Once the powerful understood that they had to hack the regulatory process, they developed competence to do just that, and that impedes solutions.

So elections are a good example. I have already mentioned money and politics are changing the threat model. So most U.S. election spending takes place on television, secondarily on the internet. Now, there are ways to regulate this. Other countries restrict advertising to some small time window, and there are other things they do. But the platforms on which this debate would occur are the very ones that profit most from political advertising. And power will fight security if it's against their interests. Think about the FBI versus strong encryption. Those in power will fight to retain their power.

So one last concept I want to look at: the notion of a class break. So, in general, and we know the story, computers replace expertise and skill with an ability. You used to have to train to be a calligrapher. Now you can use any font you want. Driving is currently a skill. How long will that last? This is also true for security. One expert finds a zero-day, publishes it, and now anyone can use it, especially if it's embedded in a software program. So this generalizes when you deal with complex sociotechnical systems. Someone invented the double Irish with a Dutch sandwich, but now it's a class break. Once the loophole was found, any company can take advantage of it. Misinformation on social networks is a class break. Russia might have invented the techniques; now everyone can do it. Different techniques of psychological manipulation are class breaks. The notion of a class break drastically changes how we need to think about risk. And I don't think that's something well understood outside of our world.
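Here's a back-of-the-envelope way to see why. All the numbers are invented for illustration; the structure is the point. A class break correlates every instance's failure, so the same per-instance exposure produces a very different risk picture:

```python
# Back-of-the-envelope class-break risk. All numbers are invented;
# what matters is the structure, not the figures.

n_instances = 1_000_000     # deployed copies of one system
loss_each = 10_000          # dollars lost per compromised instance

# Old threat model: independent attacks against individual instances.
p_attacked = 1e-4           # chance any one instance is attacked this year
expected_independent = n_instances * p_attacked * loss_each

# Class break: one published flaw compromises every instance at once.
p_class_break = 0.01        # chance the flaw is found and published
expected_class = p_class_break * n_instances * loss_each

print(f"independent attacks, expected loss: ${expected_independent:,.0f}")
print(f"class break, expected loss:         ${expected_class:,.0f}")
print(f"class break, worst case:            ${n_instances * loss_each:,.0f}")
```

The expected loss jumps two orders of magnitude, and the worst case goes from a handful of incidents to everything failing on the same day. That's the risk shift a class break buys you.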
So we also need to generalize the solutions we routinely use. I'll hit on a few of them. Transparency is a big one. And we see that in the greater world: open government laws, mandatory public tax and informational filings, ingredient labels on products. Truth-in-lending statements on financial products reduce corporate excesses, even if no one reads them. I think we can achieve a lot through transparency. We have other solutions in our tech toolkit: defense in depth, compartmentalization, isolation, segmenting, sandboxing, audit, incident response, patching.

Iteration matters here. We know we never actually solve a security problem; we iterate. Is there some way to iterate law, to have extensible law? Can we implement some rapid feedback in our laws and regulations? Resilience is an important concept. It's how we deal with systems under continuous attack, which is the normal situation in social systems.

So when I wrote Beyond Fear back in 2003, I gave five steps to evaluate a security system. What are you trying to protect? What are the risks? How well does your solution mitigate the risks? What other risks does your solution cause? And what are the non-security trade-offs? I think we can generalize that framework. Systems that are decentralized and multiply controlled, they're a lot harder to fix. But we have experience with that. We have a lot of experience with that.

So all of this leads to some big questions. What should policy for the information economy look like? What components will rule of law 2.0 have? What should economic institutions for the information economy look like? Industrial-era capitalism is looking increasingly unlikely. How do we address the problems that are baked into our technological infrastructure without destroying what it provides? And one problem I see immediately is we don't have policy institutions with footprints to match the technologies. Facebook is global, yet it's only regulated nationally.

Those that have been around for a while remember when tech used to be the solution; now it's the problem. In reality, it's both. And our problems tend to be social problems masquerading as tech problems and tech solutions masquerading as social solutions. And we need to better integrate tech and policy. Computer security has long integrated tech and people. I think we can do this for a much broader set of systems.

I think we need to upend the idea that society is somehow solid, stable, and naturally just there. We build society. Increasingly, we build it with technology. And technology is not on some inevitable trajectory. It interacts with the country's political and social institutions. So there isn't just one effect of a technology. It depends on the details of the society. Computer security has already had an impact on technology. And now we need to have an impact on the broader public interest.

So this is what I'm working on right now. Currently, it is this talk.
It will probably become some articles and essays. Maybe it'll be a book. I think this framework has some value. It gives structure to thinking about adversaries inside a social system, how we delineate the rules of the game, how people hack the metagame, and how we can secure all of that.

I think it's easy to get carried away with this kind of thinking. "All models are wrong, but some are useful" is the great quote. Which systems are analogous to networked computers and which are not? When are innovations analogous to a hack with security implications, and when are they just novel uses or innovations or social progress? There are bugs in everything. When is a bug a vulnerability? When is a vulnerability deserving of attention? When is it catastrophic? There's probably a good analogy to cancer here. Everybody has cancerous cells in their body all the time, but most cancers don't grow. It depends on the environment and other external factors. I think it's the same in our field. The difference, of course, is that cancer cells are not intelligent, malicious, adaptive adversaries, and that's who we're dealing with.

I also think it's important to have humility in this endeavor. All the examples I used are large policy issues with history and expertise and a huge body of existing knowledge. And we just can't think that we can barge in and solve the world's problems just because we're good at the problems in our own world. The literature is filled with intellectuals who were experts in their field, overgeneralized, and fell flat on their faces. Kind of want to avoid that. And the last thing we want is another "tech can fix everything" solution, especially coming from the monoculture of Silicon Valley, at the expense and lives of, like, everybody else. I think we need a lot of people from a lot of disciplines working together to solve any of this, but I'd like tech to be involved in these broader conversations.

So I once heard this quote about mathematical literacy: It's not that math can solve the world's problems; it's just that the world's problems would be easier to solve if everyone just knew a little more math. I think the same thing holds true for security. It's not that the security mindset or security thinking will solve the world's problems; it's just that the world's problems would be easier to solve if everyone just understood a little more security. And this is important.

So I have one final example about a hack against the tax code. In January, the New York Times reported about this new kind of tax fraud. It's called cum-ex trading, which is Latin for with/without. I'm going to read a sentence from the article: "Through careful timing and the coordination of a dozen different transactions, cum-ex trades produce two refunds for dividend tax paid on one basket of stocks." That's one refund obtained legally and a second received illegally. It was a hack. This was something the system permitted, unanticipated and unintended by the system's creators. From 2006 to 2011, the bankers, lawyers, and investors who used this hack made off with $60 billion from EU countries. Right now, there are prosecutions, primarily in Germany, and it is unclear whether the law was broken. The hack is permitted by the system. They're debating whether there is some metasystem of "don't do anything this blatantly horrible" that they can convict the people under, or whether we have a vulnerability in our laws that we need to patch.
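If you want the core bug in miniature, here is a radically simplified toy ledger -- made-up amounts, and nothing like the dozen coordinated transactions of the real trades -- showing what it means for two refund certificates to exist for one tax payment:

```python
# A radically simplified cum-ex sketch. Invented amounts; the real
# trades involved a dozen coordinated transactions. This shows only
# the core bug: two refund certificates against one withheld payment.

WITHHOLDING = 0.25

def pay_dividend(shares, per_share):
    """The company pays once; dividend tax is withheld once."""
    gross = shares * per_share
    tax_withheld = gross * WITHHOLDING
    return gross - tax_withheld, tax_withheld

net, tax_paid = pay_dividend(1_000_000, 1.00)

certificates = [("long holder", tax_paid)]  # the legitimate certificate

# Shares sold short "cum" dividend are delivered "ex" dividend; the
# buyer's custodian issues a certificate too, because nothing in the
# system ties certificates back to the single tax payment. That's the bug.
certificates.append(("cum-ex buyer", tax_paid))

refunds_claimed = sum(amount for _, amount in certificates)
print(f"tax withheld once: {tax_paid:,.0f}")
print(f"refunds claimed:   {refunds_claimed:,.0f}")  # twice what was paid
```

The system never checks that the set of certificates sums to the tax actually withheld. Exploit the missing invariant at scale and you get $60 billion.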
So a year ago, I stood on this same stage and talked about the need for public interest technologists, for technologists to understand the social ramifications of their work, for technologists to get involved in public policy, to bridge the gap between tech and policy. So this is a piece of it. Hacking society, and securing society against those hacks, is how we in the computer security field can use our expertise for broader social progress. And I think we have to do that. So thank you.

>> BRUCE SCHNEIER: So I left a bunch of time for questions and comments because I really want questions and comments. This is a work in progress and something I'm thinking about, so I'm curious what you all think. There are two microphones that everyone is scared to get in front of. Here comes one person. And if you don't want to get in front of a microphone, email me. If you have an idea, a rebuttal, another example, send it to me. I'm really curious. I'll chase down the details. But anything that is sparked by this talk, please tell me. Yes?

>> AUDIENCE: So I have an idea of comparing social systems to IT systems when it comes to securing both of them. I would like to get your opinion on what I call the permit hack, where the society, through fear and all of these kinds of things, moves into a situation where the actual objectives of the system completely change, right? In IT, we only have one variable, where the system itself that we need to defend is very clear. In society, the objective of what we are trying to do keeps changing, right?

>> BRUCE SCHNEIER: Yeah. I think it's less different than you want. We like to think that our systems end at the keyboard and screen, but, in fact, they don't. We are used to systems -- the internet was invented with one particular threat model, and that completely changed, and we have an internet designed for a very benign threat environment being used for critical infrastructure. We're used to system drift. I think we're used to systems that expand. We often don't think in that way, but those of us, I think, who do security well are constantly thinking about that. But, yes, I think there is a difference in that social systems tend to evolve more. They are not deliberately created. Who created the market economy? Well, it kind of showed up when it was the right time. We know who created our Constitution, and we can look at their debates and really learn about the threat model they were thinking about, the vulnerabilities they were looking at, what they missed. But I think you're right. This is going to be a tough expansion because of those fuzzy borders. I'm not sure of the answer, but I think it's still worth poking at.

>> DAN WOODS: Dan Woods from Early Adopter Research. Do you know of any book that explains the kind of basics of political theory -- utilitarianism, Locke, all of these things -- for technologists, so that they could then do a better job of mapping what we know to what the political and social frames know?

>> BRUCE SCHNEIER: It's funny. I teach at the Harvard Kennedy School and teach tech to policy kids. We're constantly writing these papers, like machine learning for policymakers. You want the other one, like social systems for techies. I don't know. That's a great idea. I have been reading political theory books. The way you do this is you go online, look for political theory classes at universities, and buy their textbooks. So I have been reading a bunch. I don't remember names. But there are political theory books that are used in these undergraduate classes that go into all of that.
They are not written for techies. They're written for humanities majors. But one for a techie? That's a great idea. If you need a project, I would love to read it.

>> DAN WOODS: Okay.

>> BRUCE SCHNEIER: It's done.

>> DAN WOODS: I'll get started.

>> BRUCE SCHNEIER: Okay. Let me know next year.

>> ALEX ZERBRINA: Hi, Bruce. Thank you for speaking today. My name is Alex Zerbrina. I am currently a student at San Francisco State University, and I'm a political science major who specializes in terrorism.

>> BRUCE SCHNEIER: All right. Tell me why I'm all wrong.

>> ALEX ZERBRINA: Oh, no, no, no. I'm not saying that.

>> BRUCE SCHNEIER: This is my nightmare scenario, someone who knows something.

>> ALEX ZERBRINA: No, no, I'm not going to tell you that. No, I'm not going to tell you that. What I wanted to know is that you spoke about terrorism and attacks that seem to be random, but they are really not. How do you think we should prevent those attacks, especially if they use technology, such as terrorists recruiting on, say, Twitter?

>> BRUCE SCHNEIER: This is not really the topic of the talk, but the answer is you can't. I mean, you know, random acts of violence cannot be prevented. And that's, in a sense, why it's so frightening. And, unfortunately, I think the best we can do -- well, a lot of what we do is we move it around. We block off certain techniques and targets and force the terrorists to choose other techniques and targets. That largely doesn't work very well. We do a lot of stuff against airplanes because airplanes are particularly disastrous targets. Right? A bomb goes off in this room, and some people die, some will get injured, and everyone else is okay. A bomb goes off on an airplane and the airplane crashes and we all die. That has a particular failure mode, which is why that is protected more than other things. Once you get out of airplanes, you're just moving around what the terrorists are doing, and that sends you upstream to geopolitical solutions very quickly. The rest of the money is just expended in forcing the bad guys to change their tactics and targets. I see you there.

>> AUDIENCE: Hi. When we are talking about security and securing organizations and systems, we often bring up security awareness. What about here, if you try to secure the society? What about the awareness of the people, and how do you raise their level of education? Because with a low level of education, they are certainly a target for different kinds of hacks, like disinformation and stuff like that.

>> BRUCE SCHNEIER: I think that's interesting. I haven't done a lot of thinking about awareness as a security measure. I should. Off the top of my head, a lot of these attacks aren't attacks against the user. They are more attacks against the code. You need to think about what are attacks against the users in these social systems. If we have those, how much is awareness going to be a defense? So maybe an example might be nutritional labels. If high-fructose corn syrup is a hack against our biological need for quick energy, then our nutritional labels are some kind of literacy or education solution. That's where I'd look. My guess is it's part of it. I tend not to be a big fan in our field of education as a solution. I mean, I want our systems to work even with an uneducated user. And I think this is just the sophistication of our field. The early automobiles were sold with a toolkit and a repair manual. Now they're not. Now everybody can drive. You don't need to be an expert in internal combustion to drive a car. And you shouldn't need to be an expert in anything to use a computer.
I want the fixes more embedded in the system than to rely on the user. But that's worth thinking about in these broader systems, because they are so much more user-focused than a tech system. Thank you for that. Something to think about. Yes.

>> AUDIENCE: Some very interesting ideas. Security -- tech security folks -- are very good at spotting problems, very good at coming up with solutions. But what you haven't talked about is how we are going to get that bell, that terrific bell, on that cat, because we're not the people, usually, who have the power or the influence to implement.

>> BRUCE SCHNEIER: What I want is more techies in the room. I mean, this is really what I push in public interest tech. Right now, we don't. We are not involved in the conversations, and I think we can contribute to these conversations. Last year, we had a sitting U.S. Senator in a public hearing asking Mark Zuckerberg this question: How does Facebook make money? Right? On the one hand, my god, you don't know? And two, no one on your staff told you that was a stupid question? The bar is really low here. And we need to do better. How? I don't have a good answer. I think we're trying a lot of things. But, yes, I think that is a big part of the solution, getting technologists involved in public policies, because all of these problems have some tech component. That's not a great answer. It's what I got for you. Let me go to the next person.

>> LOGAN: Hi. My name is Logan. I am a researcher in both government and computer science at Georgetown University. I really like this paradigm. I think it's very fascinating; however, just off your talk, it seems like it's focused on ironing out the kinks and the bugs in systems. But when you look at how entrenched some of these broader sociopolitical systems are, some of them may be flawed to the core. Are you worried that this paradigm may focus more on just ironing out the bumps when some of the systems may need to be replaced entirely?

>> BRUCE SCHNEIER: Yeah. That's a good comment. And you're right. In computer security, we tend to iron out the bumps. That's what we do. Rarely do we say that the internet is fundamentally broken; make a new one. If we say that, people look at you and say, are you an idiot? That's never going to happen. So I think this kind of thinking is about the bumps. You're right. We're not going to fix the broad structural issues with this kind of thinking. That takes sort of another level of abstraction. Am I worried that this will obscure that? I am not. I think both are a thing, and we have to deal with them. Society is terrible at making broad structural changes. I don't think I can fix that. But in the absence of that, I think starting to think about what power is doing as hacking, and what they're exploiting as vulnerabilities, would go some way to changing the way we think of the dynamic. Hopefully that will help. But you make a very good point.

>> LOGAN: Thank you.

>> BRUCE SCHNEIER: All right. You're my last question.

>> TOM SEGO: I'm Tom Sego, CEO of BlastWave. My question is really around incentives and purpose. I kind of see two broad groups. One group is trying to gain and leverage power and maximize what they can do with that power, and then the other group is trying to immunize the system from these hacks. They're trying to make it invulnerable. And I'm curious, like, how do you deal with those different types of opposing purposes? There's no single requirements document that we've all agreed upon.

>> BRUCE SCHNEIER: Right. And I think that's what I talk about in that the systems evolve.
It's not like we have a spec we can look at. Although there are vulnerabilities in specs, too. I don't know if I have a good answer. That's a good question. And I think this might speak to the edges of where my generalization starts failing. What do we do when there isn't a consensus on what the system is supposed to do? I think I got at that when I talked about VC funding. Is that a hack or not? From this perspective, it is. From that perspective, it's just the way the system works. Are lobbyists a hack? Yeah, kind of. But, no, that's how we get laws passed. I don't know. As I flesh this out, I'm going to have to be a little more rigid, but I think there's value in having a squishy definition. You can claim legitimately that gay marriage is a hack. It's taking this particular system and using it in this new way. A lot of us think that's a great idea. But, you know, it was a hack. So is that good or bad? Well, you know, there are good hacks. The question is, what is the system supposed to do? What are its goals? Whose society? That's where that gets embedded. So I don't have a good answer. But that's a good question.

>> TOM SEGO: Okay, thank you.

>> BRUCE SCHNEIER: So I don't -- I wasn't taking notes. Can you email me that question again? Just send me an email. Thank you. All right. I have to leave the stage. Thank you all. Any other questions, comments, suggestions for examples, things to look at, places where I'm completely wrong, please email me, because I want to keep thinking about this. Thanks, all. Have a great conference. I'll see you next year.