a management perspective on privileged access to computer systems
by Jim Hickstein
Jim Hickstein was a programmer for many years, but got into UNIX
system administration "in self-defense" in 1990. He has been the
manager of a group of sysadmins almost as long. He is the Treasurer of
BayLISA.
The Question

Users who need to do something on the computer requiring privileged access they don't have often pose the question, "Can I have the root password?" The first request is seldom granted without deeper probing; the systems staff asks, "Why? What for?" and the user has one of several typical responses. The commonest, and easiest to handle, is "I need it to do A, B, and C." There are usually ways to accomplish whatever the user must do short of granting unlimited privileged access, and the systems staff can direct the user to those tools or methods, up to and including doing something manually on the user's behalf.

But sometimes the true reason is "I need it to protect my position" or "I need it to enhance my self-esteem." Getting to the truth of such reasons can be difficult, and it is the source of much of the conflict that arises. In these cases, it is vital that management get involved to resolve the issue without compromising the organization's goals. Managers must understand the principles involved to make the right decision.

The Stability Argument

Computer systems are very complex, the more so as they become more flexible and powerful, and therefore more valuable to their owners. Even a single-user personal computer is complex enough to demand a great deal of its user's time and effort. Consider the typical workplace computing environment, which is a network of dozens, sometimes thousands, of computers, all interacting in various ways. There's a lot going on.

Software developers, when they create these systems, have many choices to make about how the systems will operate. But to reach the largest market, they leave many of those choices up to the customer, calling their systems "flexible" and "policy-neutral." And indeed they are, but someone has to flex them, and someone has to set the policies. And someone has to tell the software what the policies are, to configure things so they interact correctly.
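As an aside, the "ways to accomplish whatever the user must do short of granting unlimited privileged access" mentioned under The Question can be pictured as an explicit grant table, in the spirit of tools such as sudo. Here is a minimal sketch in Python; all of the user names and action names are hypothetical illustrations, not any real tool's configuration:

```python
# Sketch: grant specific privileged actions instead of the root password.
# Every name below is a hypothetical example.

ALLOWED_ACTIONS = {
    "alice": {"restart_web_server", "read_web_logs"},  # web developer
    "bob":   {"mount_backup_volume"},                  # backup operator
}

def may_perform(user: str, action: str) -> bool:
    """True only if this user was explicitly granted this action."""
    return action in ALLOWED_ACTIONS.get(user, set())

# Alice can do exactly what she asked for -- "A, B, and C" -- and no more.
assert may_perform("alice", "restart_web_server")
assert not may_perform("alice", "edit_payroll_db")
assert not may_perform("carol", "restart_web_server")  # no grant at all
```

The point of the table is that each grant answers a specific "What for?", so the systems staff can say yes to the task without saying yes to everything.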
Such configuration is invariably central to the correct operation of the system as a whole. There is a high risk of a mistake having widespread effects. System configuration is typically protected, requiring privileged access to change things, so (the developers imagine) only those who know what they are doing will change anything. The system will run smoothly, according to the developers' vision, and the users will be productive and happy.

The stability argument says that most users cannot know all the details of how the system is configured and implemented, so they cannot always make informed choices about what to change. A little knowledge is a dangerous thing. Even a lot of knowledge may not be enough to avoid making a serious mistake. And granting unlimited privileged access gives the user the ability, if not the motive, to make such changes. Often, the mistakes happen far away from their effects.

In one example, users were having trouble logging in "sometimes." Their accounts seemed to go in and out of existence in the space of hours. Some days the problem got worse, some days better. After almost a week, the senior sysadmin tracked the problem to a transient machine set up as an NIS server, brought to the headquarters site by visitors from a field office in Israel. Clients find an NIS server by broadcasting on the network, and sometimes they found this bogus one instead of the authorized ones run by the local staff. The NIS domain name was the same, a fact that caused no trouble back in the field office because broadcasts did not reach across the wide-area network. The mistake had its roots months in the past and a continent away. Privileged access to that one machine, and its later transport, was all that was needed.

In another example, a remote office (this one in Paris) was configured to have an incorrect so-called default route, normally directing network packets toward the corporate headquarters and its Internet gateway.
But this machine was also configured to run a dynamic routing process, which advertised this route to all the other computers on the network, sucking the entire company's Internet-bound packet traffic into a black hole for about 24 hours.

These are simple examples with fairly easy corrective actions (such as restricting where core routers get their default route). But more examples are easily found of "accidents" with profoundly obscure mechanisms and highly non-local effects. The more complex the system or network, the worse this gets, and the harder it is for even the specialists to avoid making disastrous mistakes. In these scenarios, not having root privileges means not having a sword hanging over one's head. Privilege carries responsibility: privilege one may not need, and responsibility one can probably do without. If there's a way to avoid it, take that way.

The Productivity Argument

Computers are supposed to be tools. Some people collect and enjoy tools for their own sake, but a carpenter carries a hammer to pound in nails and build something, not just because it looks good on the belt. There are two kinds of computer users: those who do computing for its own sake (systems staff and programmers), and those who use the computers to accomplish something unrelated to the computers themselves, something you are paying them to do. The more time the latter spend in overhead, getting the tools to work correctly, the less time they spend... not "doing work" exactly, but getting their work done.

With the complexity of the tools comes specialization. Even in an office where "people administer their own desktops" (a code phrase for disaster in this context), an informal leader will often emerge, the power user to whom the others turn when they have a problem. (Who helped you when you had trouble upgrading your laptop? That's the one.) After a while, that person will obtain the secret passwords to the file server or router or whatever, and, if you're lucky, will keep them secret.
You now have a sysadmin. But what was that person's "real" job, again? It's getting worked on, to be sure, but is it getting done? Are you happy with that person's performance in that respect? If you're less lucky, the informal leader will recognize this trap and avoid helping the others (and taking the fall at review time), and the rest of them will founder. Where's the manual for A? How did you get B to print? C doesn't work at all, so I (faxed it / wrote it to a floppy / bought a box of pencils / lost the order).

Specialization requires time and energy. Not everyone has, or should spend, the time to read all the manuals, figure out the systems, and solve the system problems. But someone should. One person who is knowledgeable about the systems and can set them up so they run well provides leverage for the productivity of all the other people there. Even in extremely small organizations, where a whole person can't be justified, part of one should be officially recognized and assigned this role, along with the resources to carry it out. To let this function go unstaffed is to waste a great deal of the power in the systems, and much of the large investment in technology that most businesses now make. And, often a much bigger waste, it misdirects expensive labor into fooling with the computers while not getting the right value out of them.

In larger shops, where several people are dedicated to systems support, the problem is slightly different: there come to be too many specialists, too many cooks spoiling the broth. In the small shop, the support person will often have unofficial deputies who have privileged access and make limited changes, but communication is easy: "While you were on vacation, I rebooted D twice, and had to remove lock file E." In larger shops, it becomes ever more important to limit the absolute number of people with privileged access.
Even informal deputies (former sysadmins, senior systems developers) become first unnecessary and then undesirable; they have to be deputized formally. Change-control procedures come to the fore, and documentation is vital. In these situations, merely knowing about the procedures and how to follow them constitutes special knowledge that very few part-time helpers can be expected to maintain. The folklore that somehow never gets documented only underscores this. The answer to The Question becomes "Thank you for your offer of help, but we've got things under control." It's often not entirely true.

The Security Argument
Another seldom-admitted-to reason behind a request for more privileges is the desire for the power itself. But of course there are degrees. If one has privileged access, one can bypass user identification. That is, one can masquerade not only as the "super" user, but as any other authorized user. One could then do bad things that would be ascribed to the victim. People have lost their jobs, even their freedom, over such acts. Those with this access are perforce trusted not to abuse it this way.

Security threats fall into three broad categories: access, disclosure, and modification. Unauthorized access is akin to breaking and entering (still a crime, even if the would-be burglar doesn't take anything). Denial-of-service attacks are in this group, since they deny access to authorized users. With disclosure, files (say) don't have to be destroyed to do the damage: someone sends your email about that top-secret unannounced merger to the Wall Street Journal. Modification (especially if undetected) and its limiting case, destruction, form the third group. How long would you stay in business if your employees could give themselves a raise by modifying the payroll database without authorization? Yet someone has to be able to modify it. That's the crux of security.

Most computer systems feature some kind of user identification, at least as part of authorization. (What you can do depends on who you are.) Some don't even do that; personal-computer operating systems are only now starting to get these features. Yet, with the concept of the all-powerful privileged user who can bypass everything, user identification isn't perfectly reliable. It is meaningful only insofar as you tightly control who can bypass it.

"My people administer their own desktops, and everyone knows the root password to the fileserver. As long as we all know this, and agree that there is no security, what's the problem?"
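The trouble with that position, that shared root access discards user identification, can be illustrated with a minimal sketch. This model is hypothetical, not any real operating system's code:

```python
# Sketch: a privileged user bypasses the identity check, so the audit
# trail records whatever identity the actor claims.  Hypothetical model.

class Host:
    def __init__(self):
        self.audit_log = []  # (recorded user, action) pairs

    def act_as(self, real_user: str, claimed_user: str,
               is_root: bool, action: str):
        # Ordinary users must be who they claim to be.
        if not is_root and real_user != claimed_user:
            raise PermissionError("identity check failed")
        # Root skips the check: the log records the *claimed* identity.
        self.audit_log.append((claimed_user, action))

host = Host()
host.act_as("mallory", "victim", is_root=True,
            action="send resignation email")
assert host.audit_log == [("victim", "send resignation email")]
# The record blames the victim.  Only limiting who holds root keeps
# the audit trail meaningful.
```

If everyone knows the root password, every entry in every log is only as trustworthy as the least trustworthy person in that circle.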
The problem here is that you have discarded this feature of the operating system, and these hosts cannot be trusted with certain information as a result -- information that people need to store and use in a trusted environment. Do you want to conduct all your secret communications about that merger only in person? What would the airfare be like for those last-minute meetings? Okay, you need to trust the telephone. What about the fax machine? The network printer? The computer on your desk? Can you? Are you sure? What if you turn out to be uninformed about an important risk? What if you learn about it from the Wall Street Journal?

The sysadmins aren't supposed to be aware of the merger yet, but they can read your email (if they have the time and inclination to look, which is unlikely). If there are only three of them, for certain, and the fact leaks from your office, the investigators will thank you for keeping the number small. Interviewing 40 people in this situation, six of whom aren't even employees, is a lot harder, they will tell you. You can't drive this number to zero with the technology we have today, but you need to keep it small.

The Economic Argument

In fact, all of the previous arguments boil down to this one. Stability and reliability contribute to productivity and therefore to profits. Security is just the limiting of potential losses, tangible and intangible, but virtually all financial at bottom, at least in a commercial enterprise. This is the weakest argument to use against a user demanding privileged access, but it underlies all the policies and all the procedures. Management should bear it in mind but not beat people up with it.

Deputies

Having said all that, there are some cases where extending privileged access to a few people outside the systems group will actually further all these goals.
Former sysadmins, system programmers, and very knowledgeable users will often request such access when they find that their own productivity is impaired by waiting for the (often overworked) systems staff to do apparently simple tasks as root. But the manager must decide whether total productivity, stability, or security will be improved (or at least not damaged), rather than just the life of one user. It's seldom an easy call.

Sysadmin groups usually develop some kind of processes that help them act as a team and avoid mistakes. The simplest is change control: being able to roll back erroneous edits to system files and to track who did what. Further development brings formal communication and work tracking (mailing lists, ticket systems); later, formal project planning, scheduling, and review; still later, metrics of the work processes themselves.

When these processes exist, helpers must be deputized by being brought into the loop and made to follow the standard procedures. Some effort is involved in training them for this, which can be a source of sysadmin resistance to deputizing someone. Sometimes, the request to become an official deputy sysadmin only serves to point out a glaring lack of such processes. Managers should recognize this situation, take steps to create (or just write down) the processes, and then see that the prospective deputy is properly trained. If helpers are not formally deputized, but just given the passwords without a second thought, they may end up being a source of grief for the sysadmins and the other users, and their access may have to be revoked: clearly a failure of management. But if this is done right, a good deputy will be recognized as an ally to the sysadmins and a resource to all the users.
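The simplest change-control process described above, rolling back erroneous edits and tracking who did what, can be sketched in a few lines. This is a hypothetical minimal model (real shops have long used version-control tools such as RCS on system files for exactly this purpose):

```python
# Sketch: minimal change control for a system file -- keep prior
# versions and record who made each change, so a bad edit can be
# rolled back and attributed.  Hypothetical names throughout.

class ChangeLog:
    def __init__(self, initial: str):
        self.versions = [("initial", initial)]  # (who, contents)

    def commit(self, who: str, contents: str):
        self.versions.append((who, contents))

    def rollback(self) -> str:
        """Discard the newest change; return the restored contents."""
        self.versions.pop()
        return self.versions[-1][1]

    def blame(self):
        return [who for who, _ in self.versions]

hosts = ChangeLog("127.0.0.1 localhost")
hosts.commit("deputy-dan", "127.0.0.1 localhost\n10.0.0.5 mailhub")
hosts.commit("deputy-dan", "")             # erroneous edit: file emptied
assert hosts.rollback().endswith("mailhub")  # the bad change is undone
assert hosts.blame() == ["initial", "deputy-dan"]  # who did what
```

A deputy who commits changes through a mechanism like this is "in the loop"; one who edits the live file as root with no record is the source of grief the preceding paragraph warns about.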
First posted: 14 Apr. 1999. Last changed: 14 Apr. 1999.