USENIX Multimedia
Audio and video of USENIX conference presentations are freely available to everyone, in MP3 format for audio and MP4 format for video.
Browse media by year:
Or listen to select content on the USENIX podcast!
2008
USENIX Security '08
Keynote Address:
Dr. Strangevote or: How I Learned to Stop Worrying and Love the Paper Ballot
Political DDoS: Estonia and Beyond
In the spring of 2007, the country of Estonia suffered a deluge of distributed denial of service (DDoS) attacks, coordinated to coincide with street-level protests. These attacks caused nationwide problems for the heavily wired country of Estonia and did so again when they recurred in early 2008. These attacks were not the first such politically motivated attacks and they will certainly not be the last. This talk explores the world of DDoS attacks and their growing role as an online political weapon. It also covers how Arbor Networks measured the Estonia attacks, how other attacks are measured, and what these attacks mean for the Internet at large.
Building the Successful Security Software Company
Ted will discuss the security market, past and present. He will review what it takes to succeed in building a company and will look at current opportunities. Ted will also share with the audience a few of his successes.
In a field with few design principles ("defense in depth"? separate duties?), few rules of thumb, no laws named after people more influential than Murphy, no Plancks or Avogadros to hold Constant, and little quantification of any sort (we count only bad things), it appears the best we can do right now is to tell stories.
Over (enough) beer we conjure up lightly anonymized war stories about late-night phone calls, scary devices, hard-to-find bugs that exploiters somehow found, the backups that didn't, stupid criminals, craven prosecutors, cute hacks ("but don't try this at home"), and pointy-haired bosses. . . . There will be a few of these in this talk, but also some cautionary tales and parables—isomorphs of the Old Stories demonstrating human frailty and that the Law of Unexpected Consequences operates most strongly near the intersection of Bleeding Edge and Slippery Slope. Also, just a bit about the future.
Security Analysis of Network Protocols
Network security protocols, such as key-exchange and key-management protocols, are notoriously difficult to design and debug. Anomalies and shortcomings have been discovered in standards and proposed standards for a wide range of protocols, including public-key and Diffie-Hellman–based variants of Kerberos, SSL/TLS, and the 802.11i (Wi-Fi2) wireless authentication protocols. Although many of these protocols may seem relatively simple, security protocols must achieve their goals when an arbitrary number of sessions are executed concurrently, and an attacker may use information provided by one session to compromise the security of another.
Since security protocols form the cornerstone of modern secure networked systems, it is important to develop informative, accurate, and deployable methods for finding errors and proving that protocols meet their security requirements. This talk will summarize two methods and discuss some of the case studies carried out over the past several years. One method is a relatively simple automated finite-state approach that has been used by our research group, others, and several years of students in a project course at Stanford to find flaws and develop improvements in a wide range of protocols and security mechanisms. The second method, Protocol Composition Logic (PCL), is a way of thinking about protocols that is designed to make it possible to prove security properties of large practical protocols. The two methods are complementary, since the first method can find errors, but only the second is suitable for proving their absence. The talk will focus on basic principles and examples from the IEEE and IETF standardization process.
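To give a feel for the finite-state approach, the sketch below is a minimal, hypothetical Python illustration rather than the actual Stanford tool: it exhaustively explores the reachable states of a toy single-use-token protocol, including attacker replay moves, and returns a trace that violates the security property. The protocol, state encoding, and names are all invented for illustration.

```python
from collections import deque

def explore(init, transitions, invariant):
    """Exhaustive breadth-first search over a finite protocol state space.
    Returns a trace ending in a state that violates the invariant, or None."""
    seen, frontier = {init}, deque([(init, [init])])
    while frontier:
        state, trace = frontier.popleft()
        if not invariant(state):
            return trace                      # counterexample found
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, trace + [nxt]))
    return None                               # property holds in every reachable state

# Toy protocol: a server issues a single-use token over a public network.
# State = (issued, redeemed_by_client, seen_by_attacker, accept_count)
def transitions(s):
    issued, redeemed, observed, accepts = s
    succ = []
    if not issued:                            # server issues the token; the network is
        succ.append((True, redeemed, True, accepts))        # public, so the attacker sees it
    if issued and not redeemed:               # honest client redeems the token once
        succ.append((issued, True, observed, accepts + 1))
    if observed and accepts < 2:              # attacker replays the token (bounded so the
        succ.append((issued, redeemed, observed, accepts + 1))  # state space stays finite)
    return succ

invariant = lambda s: s[3] <= 1               # property: the token is accepted at most once

print(explore((False, False, False, 0), transitions, invariant))
# A non-None result is a replay-attack trace uncovered by the exhaustive search.
```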
Enterprise Security in the Brave New (Virtual) World
The move to virtual machine–based computing platforms is perhaps the most significant change in how enterprise computing systems have been built in the past decade. The benefits of moving to virtual infrastructure are substantial, from ease of management and better server utilization to transparently providing a wide range of services from high availability to backup. Despite this sweeping change, the way that we secure these systems is still largely unchanged from how we secure today's physical systems. We must rethink the way we design security in virtual infrastructure, both to cope with the new challenges it introduces and to take advantage of the opportunities it offers.
I will discuss the growing pains of moving from physical to virtual infrastructure in the network and the dissonance this can cause in operational settings: why simply dropping existing firewalls and NIDS into virtual infrastructure can limit flexibility, how new mechanisms can help overcome these limitations, and why these elements are better off being virtual instead of physical. Next, I will look at how virtual machines can affect host security as techniques such as virtual machine introspection become mainstream and the line between host and network security gets increasingly blurred. Finally, I will look at some of the odder and more interesting capabilities virtual platforms will be offering in the next few years which will offer fertile ground for new research.
Security processes inside most commercial development teams haven't caught up with the growing threat from organized crime groups that are becoming better financed, are relying more on automation to find vulnerabilities, and have figured out how to drive down the cost of launching a significant attack. This talk looks at why the incentive to attack and the ability to find flaws are outpacing practiced application security techniques. It examines how the economics of software attack and defense ("hackernomics") is changing and looks at some interesting outcomes, such as making vulnerability discovery a viable business. The talk will include several live vulnerability demonstrations to illustrate the exploitation vs. prevention dynamics.
A Couple Billion Lines of Code Later: Static Checking in the Real World
This talk describes lessons learned taking an academic tool that "worked fine" in the lab and using it to check billions of lines of code across several hundred companies. Some ubiquitous themes: reality is weird; what one thinks will matter often doesn't; what one doesn't even think to reject as a possibility is often a first-order effect.
Panel:
Setting DNS's Hair on Fire
The Ghost in the Browser and Other Frightening Stories About Web Malware
While the Web provides information and services that enrich our lives in many ways, it has also become the primary vehicle for delivering malware. Once infected with Web-based malware, an unsuspecting user's machine is converted into a productive member of the Internet underground. This talk explores Web-based malware and the infrastructure supporting it, covering an analysis period of almost two years. It describes trends observed in Web server compromises, as well as giving an overview of the life cycle of Web-based malware. The talk shows that Web malware enables a large number of questionable activities, ranging from the exfiltration of sensitive information such as email addresses and credit card information to forming spamming botnets, which are responsible for a significant fraction of the spam currently seen on the Internet.
Managing Insecurity: Practitioner Reflections on Social Costs of Security
Nonprofits and local government have experienced more than their share of breaches and notifications over the past several years. The reasons for this are evident: lots of sensitive information, insufficient IT resources, lack of institutional discipline, etc. Clearly more time and resources at these organizations should be dedicated to security.
I will argue that even identifying the proper balance is a good deal more difficult for public service organizations than is widely recognized. Will security concerns affect the adoption of electronic medical records, regional health organizations, and nonprofit work? At what point do needed changes in organizational cultures undermine the public mission? What types of security controls and practices are best suited for service agencies? What kinds of research would most help public services?
Work-in-Progress Reports (WiPs)
The Work-in-Progress reports (WiPs) session offers short presentations about research in progress, new results, or timely topics.
NSDI '08
Xen and the Art of Virtualization Revisited
This is a talk in three parts. I'll give a summary of the Xen story so far, looking at how Xen made the transition from research project to enterprise software and the many challenges along the way. Next, I'll look at why virtualization is such a hot topic in IT and the failings of common operating systems that have led to this. I'll then look at how Xen has evolved since the 2004 SOSP paper, seeing how paravirtualization and software/hardware co-design have helped reduce the overhead of virtualization. In particular, I will look at network interfaces to see how what was once a high-overhead device to virtualize has been tamed.
FAST '08
"It's like a fire. You just have to move on": Rethinking Personal Digital Archiving
Many consumers engage in magical thinking when it comes to the long-term fate of their digital stuff. A strategy that hinges on benign neglect coupled with lots of copies seems to be the best we can hope for. Yet if we take a fresh look at what real people do, it becomes possible to reframe personal digital archiving as more than a battle with burgeoning file formats and media obsolescence, and a push toward trusted repositories—"storage in the cloud." I will discuss four pervasive themes of personal digital archiving that have emerged from recent studies and try my best to convince you that this is a problem whose time has come.
Sustainable Information Technology Ecosystem
The next generation of information technology services will be driven by an ecosystem made up of billions of service-oriented handheld devices and thousands of data centers. The IT ecosystem must address the fundamental needs of society while reducing the destruction of available energy when compared to conventional ways of conducting business. This applies in particular to IT services in growth economies where users are eager to use IT to improve the quality of life. To enable "IT as a weapon" for the masses while producing a net-positive impact on the environment, we need to devise a least-material and least-energy approach to IT solutions.
We propose an approach that traces the lifecycle of IT solutions based on the second law of thermodynamics. This "cradle-to-cradle" method calculates the cost in Joules of available energy destroyed to provide a uniform framework to compare the sustainability of IT solutions with respect to conventional approaches. We will probe the design of computer and storage hardware and services in view of inflections in the technologies and their impact from a thermo-mechanical point of view. We will call for a multidisciplinary community to develop a sustainable global IT ecosystem by fusing the least-materials and least-energy approaches.
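As a rough illustration of the kind of uniform lifecycle comparison the talk proposes (and not the speakers' actual model), the sketch below totals embedded plus operational energy for two hypothetical client devices; every figure is a placeholder assumption.

```python
# Toy "cradle-to-cradle" comparison in Joules. All numbers below are invented
# placeholders, not data from the talk; a real exergy analysis would also account
# for materials, cooling, and end-of-life recovery.
def lifetime_energy_joules(embedded_MJ, power_watts, hours_per_day, years):
    """Embedded (manufacturing and materials) plus operational energy over the device's life."""
    operational_J = power_watts * hours_per_day * 3600 * 365 * years
    return embedded_MJ * 1e6 + operational_J

thin_client = lifetime_energy_joules(embedded_MJ=1_000, power_watts=15, hours_per_day=8, years=5)
desktop     = lifetime_energy_joules(embedded_MJ=3_000, power_watts=120, hours_per_day=8, years=5)
print(f"thin client: {thin_client:.2e} J   desktop: {desktop:.2e} J")
# Expressing both solutions in the same unit (Joules over the whole lifecycle)
# is what makes comparisons across very different IT solutions possible.
```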
2007
LISA '07
CERN's Large Hadron Collider turns on next year, providing high-energy particle collisions for four experiments that, between them, are expected to generate up to 15PB of data per year. After giving a brief introduction to the accelerator and experiments, this talk will outline the associated computing challenges (in particular, cluster management, data storage and distribution, and grid computing) and describe how CERN and the worldwide particle physics community have been preparing to meet them.
The Biggest Game of Clue® You Have Ever Played
It's 3:30 in the morning and your pager is going off. There's a new mystery to be solved and you're the one who's been picked to solve it. That mystery may be a server down or a lost hiker. While the problem spaces are different, the problem-solving techniques are similar. This talk will look at the methodology used in lost person search management: preplanning, event notification and mobilization, team dynamics, objectives, strategy, tactics, investigation, statistical analysis, paperwork, and demobilization. These are all puzzle pieces regardless of the problem space. Can you figure it out?
Prince Caspian on Location: Commodity Hardware and Gaffer Tape
The as yet unreleased Walden/Disney production Prince Caspian was shot on location throughout Europe and New Zealand. While you might expect that a big-budget Hollywood production replete with thousands of SFX shots would have a tightly organized, well-financed, top-notch IT department, the truth might surprise you. Trey Darley saw it all first-hand and will talk about how the Prince Caspian IT department pulled it off using mainly commodity hardware, their wits, and lots of gaffer tape.
Deploying Nagios in a Large Enterprise Environment
This talk will cover scalability issues, security issues, our design and how it has evolved, user acceptance issues, integrating monitoring of proprietary applications, monitoring "closed" devices, high availability/disaster recovery, and lessons learned.
Who's the Boss? Autonomics and New-Fangled Security Gizmos with Minds of Their Own
How do humans stay in the loop when autonomics seems to be pushing them out? What do you do with a system designed to have a mind of its own? Who's responsible when it makes agreements with other systems that may cost your company money? This talk will incorporate the results of interviews with sysadmins working with autonomic systems. I'll share their reflections and my own on the potential impacts of future autonomic systems.
Yes, disk is marvelous, getting inexorably cheaper and bigger. But here's the dark side: How do you attach, configure, and mount tens of TB on a PC? How do you manage the files and back up that data? Worst of all, vast amounts of cheap disk allow users to dream of projects requiring petabytes of disk and ask you to make it happen. This talk will identify most of the serious issues and will describe solutions.
Experiences with Scalable Network Operations at Akamai
Akamai's platform for content delivery and application acceleration consists of over 20,000 servers in over 2,800 locations in 72 countries and over 1,000 networks. Providing high levels of performance and reliability without requiring a large network operations team necessitates significant automation. Further challenges are introduced by the highly distributed nature of the Akamai system. We'll discuss some methodologies and systems we have developed for operating the Akamai network.
Using Throttling and Traffic Shaping to Combat Botnet Spam
In this talk, Ken Simpson describes his research into spammer behavior and explains how spammers' impatience can be used for spam suppression. During this talk, you will learn about spammer economics and spammer behavior, get an introduction to connection management, and hear how we have used connection management in some real-world scenarios to reduce spam traffic.
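The abstract doesn't describe the speaker's system in detail, so the following is only a minimal asyncio sketch of the general idea (a greet-delay plus an early-talker check): suspicious senders get a deliberately slow SMTP banner, and clients that talk before the banner, as many bots do, are rejected. The hostnames, port, classification logic, and delay values are all made up for illustration.

```python
import asyncio

# Hypothetical policy: how long to delay the SMTP banner per reputation class.
GREET_DELAY = {"suspicious": 20.0, "unknown": 5.0, "trusted": 0.0}

def classify(peer_ip: str) -> str:
    """Placeholder reputation lookup; a real system would consult DNSBLs,
    sending history, and per-network connection-rate statistics."""
    return "unknown"

async def handle_smtp(reader, writer):
    peer_ip = writer.get_extra_info("peername")[0]
    delay = GREET_DELAY[classify(peer_ip)]
    if delay:
        try:
            # Early-talker check: a legitimate MTA waits for our 220 banner,
            # while many spam bots start sending commands immediately.
            early = await asyncio.wait_for(reader.read(1), timeout=delay)
        except asyncio.TimeoutError:
            early = b""
        if early:
            writer.write(b"554 commands sent before banner\r\n")
            await writer.drain()
            writer.close()
            return
    writer.write(b"220 mx.example.com ESMTP\r\n")   # the slow banner itself is the tarpit
    await writer.drain()
    # ... continue with a normal (possibly rate-limited) SMTP dialogue here ...
    writer.close()

async def main():
    server = await asyncio.start_server(handle_smtp, "0.0.0.0", 2525)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```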
Ganeti: An Open Source Multi-Node HA Cluster Based on Xen
Ganeti is a cluster management tool we built at Google that leverages the power of Xen and other open source software in order to provide a seamless environment in which to manage highly available virtual instances. The talk will focus on what Ganeti provides, what audience it is targeted to, and what the plans for its future are.
Homeless Vikings: BGP Prefix Hijacking and the Spam Wars
BGP prefix hijacks take the IP addresses of others and make them your own. This talk provides a chilling account of the current use of prefix hijacks by spammers in a successful effort to defeat RBLs. Placed within the context of the history of the spam wars, this talk makes clear the grim future we face if we continue to escalate the spam wars into the network layer; namely, a future where every spammer on earth can arbitrarily choose and make routable an unallocated IPv4 address (one that the RBLs have never seen) once per day for the next few hundred years or so without ever using the same address twice or ever colliding with any other spammer.
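A rough back-of-envelope, using invented round numbers rather than anything from the talk, suggests why the "fresh address every day for centuries" scenario is at least numerically plausible:

```python
# Plausibility check only; every figure is an assumption, not a measurement.
unannounced_ipv4 = 2.0e9        # rough order of unannounced IPv4 space circa 2007 (assumed)
active_spammers  = 10_000       # assumed number of distinct spam operations
years            = 300

addresses_needed = active_spammers * 365 * years
print(f"{addresses_needed:.2e} addresses needed vs {unannounced_ipv4:.2e} unannounced")
# ~1.1e9 needed vs ~2e9 available: under these assumptions, every spammer gets a
# never-before-seen address per day for roughly three centuries without collisions.
# In practice a hijack announces a whole prefix, so one announcement yields
# hundreds of fresh source addresses at once.
```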
Beyond NAC: What's Your Next Step?
Now that you have adopted Network Access Control, what is your next step? With the NAC market maturing at a rapid rate, most companies have either already implemented NAC or are evaluating it for future deployment. However, there is still much confusion about what is and what isn't NAC. This presentation will clearly outline how NAC is an important security enhancement, and why it is not an end-all security solution. Attendees of this presentation will learn the technology that is required in today's world to ensure network security and effectively mitigate threats.
The Economic Meltdown of Moore's Law and the Green Data Center
The net economic productivity of IT is threatened because server power consumption improvement is occurring at a slower rate than the increase in computer performance. As a result, the enterprise TCO per unit of computing has not been falling nearly as rapidly as senior executives might think. The one-time benefit of killing dead servers and virtualization will defer this new economic reality, but CFOs, CTOs, and CIOs need to change their economic decision models now or risk investing in new applications that can't pay back their real costs.
Hardening Your Systems Against Litigation
Recent amendments to the Federal Rules of Civil Procedure require parties in litigation to make electronically stored information available to the opposing side. Unfortunately, legal and IT departments still don't communicate well with one another. The presentation will include an overview of the parts of the Federal Rules that are relevant to IT professionals and how IT staff should approach their legal department. Some examples of how not to handle a litigation hold will be given, as well as how to prepare one's systems for potential or pending litigation.
For many companies, data centers can't be built fast enough to keep up with increases in power consumption and the lack of available floor space. Companies are concerned about new environmental legislation being considered and how it will impact their business. If you aren't seeing these issues in your data centers now, you could in the next five years. Hear about what we at Sun have done in our own data centers and how we are trying to approach the problems with a fresh new perspective.
The butterfly effect is traditionally described as the almost imperceptible flap of a butterfly's wings causing changes that result in a tornado being formed (or not!). In information security, a change that seems simple may result in serious vulnerabilities, and as the complexity and interdependence of the environments we manage increase, predicting the effects of apparently innocent actions will become infinitely more challenging. This talk will discuss some notable examples of the butterfly effect in security, as well as giving a brief overview of future hot points to look toward.
There's a field in which people routinely:
- Work well under pressure, improvising and showing great creativity even in the worst of situations
- Create (repeatable!) multi-step procedures that integrate different components into cohesive systems
- Document these procedures so that even total neophytes can understand them
- Train other people to do the same
Nope, not system administration. Sysadmins only wish we could consistently do these things.
All of this stuff is taken for granted in the world of cooking. How the heck do they do it?
David and his lovely assistant Lee Damon will not only tell you how they do it, but will also show you some of how it is done. We'll take a highly entertaining romp through the cooking world to find the tools, techniques, and processes that can be applied to system administration. You'll never look at your food or your field in the same way again.
Keynote Address:
Autonomic Administration: HAL 9000 Meets Gene Roddenberry
How will we enhance network management so that the promise of future technologies and services can be realized? This talk will first provide an introduction to the problems that make network management difficult from the point of view of the practitioner. Then it will examine some exciting new technologies that, when combined, offer a holistic solution that could be used for system administration as well. The talk will conclude with examples from autonomic networking research being done in Motorola Labs that can be used in network and system administration.
Scaling Production Repairs and QA Operations in a Live Environment
Google has seen explosive growth over the years, and this is evident in the increasing size of the production fleet. As the fleet grows, so does the number of machines being both released and repaired. This talk will cover how the methods for handling data center work, and the systems that support it, were developed and put into operation across many different locations.
A Service-Oriented Data Grid: Beyond Storage Virtualization
The storage industry talks about "virtualization" in static and device-specific contexts, while enterprise IT organizations are under pressure to deliver a range of data "services" to their customers, with a tiered pricing model and verifiable service levels. These disparate producer- and consumer-oriented views of storage leave an implementation gap that must be filled in order to realize the "virtual everything" vision of enterprise grid computing. We will identify key storage and data management trends that are evolving to deliver this service-oriented view of data.
USENIX Security '07
The Human Factor in Online Fraud
In this talk, we discuss what impact deceit and misuse have on online security, drawing on examples from phishing, click-fraud, and general privacy intrusions. We believe that a methodology founded on an improved understanding of human behavior—in particular, in the context of deceit—may help anticipate trends and steer the development of structures and heuristics to curb online fraud. Guided by behavioral aspects of security, we consider technical measures to preemptively counter some of the threats we describe. An extended abstract is available at www.human-factor.org.
How to Obtain and Assert Composable Security
This talk motivates and presents the paradigm of Universally Composable security. It then briefly reviews some of the recent research done within this paradigm and on it. Part of this research touches foundational aspects in security and cryptography. Other parts have immediate practical implications.
This talk (based on a book of the same title co-authored by Greg Hoglund) frankly describes controversial security issues surrounding MMORPGs such as World of Warcraft. This no-holds-barred approach is fully loaded with code examples, debuggers, bots, and hacks, of interest whether you are a gamer, a game developer, a software security person, or an interested bystander.
Computer Security in a Large Enterprise
Computer security is one of the most complex challenges facing large enterprises today. Securing a multinational enterprise is a balancing act based on solid risk management and technical solutions in a multifaceted, changing environment. Managing risks without securing the enterprise is meaningless, but is there a one-size-fits-all solution or special technology to secure the organization? Will this solution or technology be cost-effective? What about the intersection between IT security, physical security, and information security? Ultimately, tackling computer security within a large enterprise is more than a technical problem; it must be based on people, process, and technology in order to properly manage risks associated with threats.
The first real viruses for mobile phones were found in June 2004. Since then, scores of different viruses have been found, most of them targeting smartphones running different versions of the Symbian operating system. Many of them are spreading in the wild and have been reported from all continents. These mobile viruses use new spreading vectors such as multimedia messages and Bluetooth and pose special problems for researchers. For example, they can easily escape during analysis as they use radio connections to spread. As the total count of known mobile malware is now around 350, we know much more about what types of viruses to expect in the future and about who writes them. We also know what we should do to prevent this niche area from becoming a bigger problem.
It is now quite clear that most electronic voting systems were designed with only minor concern and rudimentary knowledge of computer security. Over the past five years, people with more in-depth knowledge of computer security have helped tremendously in appraising the security of current systems and, to a lesser extent, in improving the security of voting systems. This talk will highlight the ways a computer security perspective might be able to contribute to more trustworthy voting systems, as well as some of the ways that voting is different from other computer security problems.
Report of the California Voting Systems Review
Earlier this year, California Secretary of State Debra Bowen commissioned the University of California to examine three voting systems. The reviewers found significant security problems in all three systems.
Rootkits are backdoor programs that can be placed in a computer without detection. Virus scanners and desktop firewalls are woefully inadequate to stop a rootkit attack, which can go undetected for years. This talk will explain how rootkits are built for Microsoft Windows XP. It will cover detailed technical aspects of rootkit development, such as compilation, loading and unloading, function hooking, paged and nonpaged memory, interrupts, and inline code injections. You'll also learn the technical aspects of the hardware environment, such as interrupt handling, memory paging, and virtual memory address translation. The talk will also cover how to detect rootkits, including runtime integrity checks and detecting hooks of all kinds, such as IRP hooks, SSDT hooks, and IDT hooks.
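Kernel-level examples don't fit here, so as a loose user-space analogy only (Python monkey-patching standing in for function hooking, and hashing standing in for a runtime integrity check), the sketch below records a baseline fingerprint of a function and later detects that it has been replaced; all names are invented.

```python
import hashlib
import types

def fingerprint(func: types.FunctionType) -> str:
    """Hash a function's bytecode and constants -- a stand-in for hashing a
    kernel routine's machine code or a dispatch-table entry in a real checker."""
    code = func.__code__
    return hashlib.sha256(code.co_code + repr(code.co_consts).encode()).hexdigest()

def open_log(path):
    return open(path, "rb")

baseline = {"open_log": fingerprint(open_log)}        # trusted state recorded at startup

# ... later, something "hooks" the function, analogous to an SSDT or IRP hook ...
_original_open_log = open_log
def open_log(path):                                    # malicious wrapper hides certain files
    if "secret" in path:
        raise FileNotFoundError(path)
    return _original_open_log(path)

# Periodic integrity sweep: compare the current fingerprint against the baseline.
if fingerprint(open_log) != baseline["open_log"]:
    print("integrity check failed: open_log has been hooked or patched")
```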
Covering Computer Security in The New York Times
The MSM gets it wrong, the conventional wisdom goes, because the reporters aren't technically adept but are looking for scare stories to sell newspapers or get ratings. John Schwartz debunks a few myths about the mainstream media and explains that it is possible to write about security and other topics without hype and still keep your job.
USENIX Annual Tech '07
The Impact of Virtualization on Computing Systems
This talk describes how virtualization is changing the way computing is done in the industry today and how it is causing users to rethink how they view hardware, operating systems, and application programs. The talk will describe this new view of computing and the benefits driving users to adopt it. The roles of hardware and operating systems will be discussed, along with what changes will be needed to support this new computing model efficiently and simply.
Life Is Not a State-Machine: The Long Road from Research to Production
Traditionally, a technology adoption cycle takes at least 10–15 years before a technology is mature enough for widespread adoption. That period has been dramatically shortened by the top Internet companies' unlimited appetite for ultra-scalable, highly reliable, high-performance, and cost-efficient software architectures. In reality, however, it turns out to be very difficult to speed up the adoption process. In this presentation I will review some of the obstacles that stand in the way of adoption of research results into production environments and will revisit the principles of "worse is better" and Occam's razor in the context of technology transition.
This talk (based on a book of the same title co-authored by Greg Hoglund) frankly describes controversial security issues surrounding MMORPGs such as World of Warcraft. This no-holds-barred approach is fully loaded with code examples, debuggers, bots, and hacks, of interest whether you are a gamer, a game developer, a software security person, or an interested bystander.
Rob Lanphier, Linden Lab's Open Source Busybody, and Mark Lentczner, who directs a software engineering studio at Linden Lab, will talk about the release of the Second Life viewer source code: what that means, what it might mean, and what it doesn't mean. They'll provide a brief overview of the technology and history of Second Life, discuss the astronomical growth in use of Second Life, and explain what Linden Lab is doing to cope with the ever-increasing stress on the system. They'll discuss some key improvements Linden Lab is making in the protocols used by the product—utilizing a Web services model to increase scalability and to decouple versioning between clients and servers, as well as server-to-server communication.
High-performance technical computing stresses computer systems in many ways, from CPU performance to memory systems to inter-system communication. Over the past twelve years, clusters of commodity hardware running Linux have become the most common tool for high-performance computing. However, the dynamics of such applications are often very different from those of applications that drive the design of commodity computer systems, which means that commodity systems may be cheap for computing, but they are not efficient for many technical applications.
This talk will feature a live—but entirely self-contained, and therefore safe!—demonstration of a modern malware attack in action. Gain insight into how the bad guys think and operate, and you learn how better to defend yourself against them. The talk will also examine some of the tricks and techniques that can be used in a malware research lab to get even an apparently complex and heavily obfuscated piece of malware to reveal its secrets in safety.
LiveJournal's Backend Technologies
Hear the history and lessons learned while scaling a community site (LiveJournal.com) from a single server with a dozen friends to hundreds of machines and 10M+ users: what's worked, what hasn't, and all the things we've had to build ourselves that are now in common use throughout the Web 2.0 world, including memcached, MogileFS, Perlbal, and our job dispatch systems.
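As background for how memcached is typically used (a generic look-aside pattern, not LiveJournal's actual code; the pymemcache client, keys, and stand-in database function are assumptions for illustration):

```python
from pymemcache.client.base import Client

cache = Client(("127.0.0.1", 11211))        # one memcached node; large sites hash
                                            # keys across a whole pool of nodes

def load_user_from_database(user_id: int) -> bytes:
    """Stand-in for the slow SQL query the cache is shielding."""
    return f"user-{user_id}-row".encode()

def fetch_user(user_id: int) -> bytes:
    """Look-aside caching: try memcached first, fall back to the database,
    then populate the cache so the next request skips the database entirely."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached
    row = load_user_from_database(user_id)
    cache.set(key, row, expire=300)          # cache for five minutes
    return row
```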
MapReduce and Other Building Blocks for Large-Scale Distributed Systems at Google
MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a Map function which processes a key/value pair to generate a set of intermediate key/value pairs, and a Reduce function which merges all intermediate values associated with the same intermediate key. Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The MapReduce run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system. Thousands of MapReduce programs have been implemented, and several thousand MapReduce jobs are executed on Google's clusters every day. In this talk I'll describe the design and implementation of MapReduce and other building blocks for large-scale distributed systems at Google.
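The programming model is easy to see in miniature. The sketch below is a single-process simulation of map, shuffle, and reduce for word counting, not Google's implementation; the function names are ours.

```python
from collections import defaultdict

# User-supplied functions, in the style the talk describes.
def map_fn(doc_id, text):
    for word in text.split():
        yield word, 1                         # emit intermediate key/value pairs

def reduce_fn(word, counts):
    yield word, sum(counts)                   # merge all values for one intermediate key

def run_mapreduce(inputs, map_fn, reduce_fn):
    """Tiny single-process stand-in for the execution model: map, shuffle
    (group values by intermediate key), then reduce. The real system also
    partitions the input, schedules thousands of workers, and survives
    machine failures -- none of which appears here."""
    intermediate = defaultdict(list)
    for key, value in inputs:
        for k2, v2 in map_fn(key, value):
            intermediate[k2].append(v2)
    results = []
    for k2 in sorted(intermediate):
        results.extend(reduce_fn(k2, intermediate[k2]))
    return results

docs = [("d1", "the quick brown fox"), ("d2", "the lazy dog the end")]
print(run_mapreduce(docs, map_fn, reduce_fn))
# [('brown', 1), ('dog', 1), ('end', 1), ('fox', 1), ('lazy', 1), ('quick', 1), ('the', 3)]
```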
Perfect Data in an Imperfect World
It is no secret that we are at the dawn of the digital age—our parents (and, for some of us, even our grandparents) have computers, digital cameras, MP3 players, etc. We each have more computing power in our cell phones than the mainframes of 35 years ago had, and everywhere we find data acquisition and tracking systems. Privacy has never been more zealously guarded or more freely abandoned, and with the proliferation of digital data collection and dissemination have come new worries.
Tasks such as image recognition are trivial for humans, but they continue to challenge even the most sophisticated computer programs. This talk introduces a paradigm for utilizing human processing power to solve problems that computers cannot yet solve. Traditional approaches to solving such problems focus on improving software. I advocate a novel approach: constructively channel human brainpower using computer games. For example, the ESP Game, described in this talk, is an enjoyable online game—many people play over 40 hours a week—and when people play, they help label images on the Web with descriptive keywords. These keywords can be used to significantly improve the accuracy of image search. People play the game not because they want to help, but because they enjoy it. The ESP Game has been licensed by a major Internet company and will soon become the basis of their image search engine.
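For readers unfamiliar with the game's mechanic, the toy function below captures the matching rule from published descriptions of the ESP Game (two randomly paired players type labels independently, and the first label both produce, excluding "taboo" words from earlier rounds, is recorded as a keyword); the example inputs are invented.

```python
def esp_round(labels_a, labels_b, taboo=frozenset()):
    """Return the first label typed by both players that is not taboo, else None."""
    seen_a, seen_b = set(), set()
    for a, b in zip(labels_a, labels_b):      # the two players type in parallel
        seen_a.add(a)
        seen_b.add(b)
        agreed = (seen_a & seen_b) - taboo
        if agreed:
            return agreed.pop()               # agreement becomes a keyword for the image
    return None                               # no match before time runs out

# Hypothetical round for a photo of a dog on a beach; "dog" is already a taboo word.
print(esp_round(["dog", "beach", "sand"], ["puppy", "ocean", "beach"], taboo={"dog"}))
# -> "beach"
```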
The computing systems that are powering many of today's large-scale Internet services look less like refrigerators and more like warehouses. Designing efficient warehouse-scale computers requires many of the traditional tools and methods developed by computer architects, and some new tricks as well. In this talk I'll describe some of the defining characteristics of these systems, with a focus on failure handling and power management.
Crossing the Digital Divide: The Latest Efforts from One Laptop per Child
This effort emerged as a way to capture the endless momentum of Moore's Law and create a laptop for those far on the other side of the digital divide—the poor children of the world and their families. In fact, the vast majority of the world lives without so many of the things we consider essential, not least of which is access to education and information. This year, we intend to launch with millions of laptops simultaneously in Rwanda, Pakistan, Brazil, Argentina, Uruguay, Libya, Nigeria, and Thailand. The children themselves will own these laptops, which will be distributed to them by the Ministries of Education. They should last for five years and are cheaper than five years' worth of textbooks in the average developing country.
NSDI '07
While running an election sounds simple, it is in fact extremely challenging. Not only are there millions of voters to be authenticated and millions of votes to be carefully collected, counted, and stored, but there are now millions of "voting machines" containing millions of lines of code to be evaluated for security vulnerabilities. Moreover, voting systems have a unique requirement: the voter must not be given a "receipt" that would allow them to prove how they voted to someone else—otherwise the voter could be coerced or bribed into voting a certain way. This lack of receipts makes the security of voting systems much more challenging than, say, the security of banking systems (where receipts are the norm).
FAST '07
A System's Hackers Crash Course: Techniques That Find Lots of Bugs in Real (Storage) System Code
This talk describes several effective bug-finding tools we have developed, which exploit not-widely-understood techniques—implementation-level model checking and symbolic execution—focusing on the key intuitions and ideas behind them.
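The abstract doesn't spell out the techniques, so here is only a toy, invented illustration of the implementation-level model-checking idea applied to storage code: enumerate every crash point of a naive two-step file update and check that recovery always sees either the old or the new contents.

```python
def naive_update_ops(new_data):
    """A naive 'update the file' as two unordered disk writes (no journal)."""
    return [("data", new_data), ("length", len(new_data))]

def crash_after(disk, ops, n):
    """Apply the first n operations, then pretend the machine crashed."""
    disk = dict(disk)
    for field, value in ops[:n]:
        disk[field] = value
    return disk

def check_crash_consistency(old, new):
    """Model checking in miniature: systematically explore every crash point and
    check the recovery invariant (the file is entirely old or entirely new)."""
    ops = naive_update_ops(new)
    start = {"data": old, "length": len(old)}
    for n in range(len(ops) + 1):
        disk = crash_after(start, ops, n)
        recovered = disk["data"][:disk["length"]]
        if recovered not in (old, new):
            return f"bug: crash after {n} op(s) recovers {recovered!r}"
    return "no inconsistency found"

print(check_crash_consistency(old="AAAA", new="BBBBBBBB"))
# -> "bug: crash after 1 op(s) recovers 'BBBB'"
```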
Trends in Managing Data at the Petabyte Scale
The explosive growth in stored data has made petabyte-scale storage infrastructures increasingly common. The scale, growth rate, and increases in regulations related to data storage have imposed a number of non-obvious burdens on data ownership. These trends are driving the need to reorganize the traditional application-centric storage architectures toward a more unified storage infrastructure with new data management paradigms. This reorganization will likely drive a vibrant storage market over the next ten years.