>> What is a "bot" program, as opposed to other kinds of computer
> It is a slang term for a program that runs in the background and takes
> "orders" from somewhere else on the Internet. Short for robot. If you
> have one, someone else controls your computer. Typically they let you
> think you still have control and just use it to do their dirty work
> without (they hope) you knowing what's going on. Like someone taking
> your car each night from 1 am to 4 am to deliver drugs.
Thanks for your explanation. It made things clearer. I hope you
won't mind a few more questions.
What is the "background" on your computer? (In S/360-DOS days, we had
a background and foreground partitions.) Why can't we, as the owner
of the PC, control what is and what is not run in the "background"?
I suspect the answer to my question is that PCs today are highly
automated, which allows for much of this junk to happen in the first
place. In its simplest state, a computer would require someone to
physically load and then execute each and every program desired.
Modern machines are automatic. That is, if you're browsing a website
that sends you a .PDF file, your browser program automatically brings
up the Adobe program to read it. I presume there's lots of other stuff
we lay people don't even know about going on, and the hackers take
advantage of that underworld.
I heard that M/S's new "Vista" will be _less_ automatic as a safety
measure. I sure hope so.
>> What allows and causes a foreign unauthorized program to start
>> execution on a computer where it doesn't belong?
> Three main ways.
> 1. You are on the Internet without a router or with one but not behind
> a NAT setup which means you are exposed to the outside world.
Could you explain what "NAT" is and does?
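For what it's worth, the idea can be sketched in a few lines of Python. This toy model (all names and addresses here are invented for illustration) shows the two things a NAT box does: it rewrites outgoing traffic to its own public address, and it only passes incoming traffic back in if it matches a conversation that someone inside already started -- which is why unsolicited probes bounce off.

```python
# Toy sketch of what a NAT (Network Address Translation) box does.
# Names, addresses, and structure are illustrative, not any real router's code.

class ToyNAT:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.table = {}        # (private_ip, private_port) -> public_port
        self.reverse = {}      # public_port -> (private_ip, private_port)
        self.next_port = 40000

    def outbound(self, private_ip, private_port):
        """A machine inside starts a conversation: record a mapping."""
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.reverse[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.table[key])

    def inbound(self, public_port):
        """A packet arrives from outside: deliver only if a mapping exists."""
        return self.reverse.get(public_port)   # None means: drop it

nat = ToyNAT("203.0.113.5")
src = nat.outbound("192.168.1.10", 5000)   # inside machine talks out first
print(nat.inbound(src[1]))                  # the reply gets through
print(nat.inbound(12345))                   # unsolicited probe: None (dropped)
```

So a machine behind NAT has no address of its own that the outside world can probe; only traffic it asked for finds its way back in.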
> There are a large number of computers probing EVERY possible address
> on the Internet to see if you respond.
Why is this allowed to even happen? This is one of my big complaints
about the Internet as it's presently set up: It's designed to be so
"open" that anyone can do anything. The computer dreamers and
idealists want it this way. This was fine in a narrow world of the
very early days, but not fine in an anonymous world of today. (Other
explanations would be appreciated).
> In a perfect world your computer would ignore these probes. But due
> to bugs in the various operating systems it is possible to find a
> bug that allows data sent in the probe to overwrite part of the OS
> and when that section of the OS is used the injected code takes over,
I don't understand why bugs would allow this to happen. To "answer
the door," the computer program (1) has to know when the doorbell is
rung, (2) execute a routine to answer it, and (3) respond to the
request. In other words, there is software intentionally written and
included to respond to outside probes.
Since probes are dangerous, why do we allow this? Why don't we
disable the entire "door bell" process?
Again, I suspect the answer is this process makes for easy automation,
but maybe you or others could explain it better.
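The "doorbell" picture above can actually be demonstrated: a program must deliberately open and listen on a port before anyone outside can reach it, and closing that port makes the same knock get refused outright. A toy local-only Python sketch (nothing here reflects any particular OS's internals):

```python
# Toy demonstration of the "doorbell": a program must deliberately
# listen on a port before anything outside can reach it.
import socket

# Open a "door": bind and listen on an unused local port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

# While the door exists, a caller can ring it.
caller = socket.create_connection(("127.0.0.1", port))
caller.close()
print("connected while listening")

# Remove the door entirely...
server.close()

# ...and the same ring is now refused.
try:
    socket.create_connection(("127.0.0.1", port), timeout=1)
    print("connected after close")
except OSError:
    print("refused after close")
```

The catch is that useful services (file sharing, remote printing, the browser's own helpers) all need their doors open, and a bug in any one of those doorbell routines is what the probes are hunting for.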
> sets things up to run at startup,
I know computers have a start up routine, I have changed mine for DOS
purposes. But why should the start up routines be allowed to be
modified automatically? Is it that hard to require the human to
modify the routine himself (or authorize said modification)?
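The authorization idea suggested above can be sketched as an allowlist check: anything in the startup list that a human hasn't explicitly approved gets flagged. The directory and file names below are made up for illustration.

```python
# Sketch of the idea above: treat the startup list as something that
# may only contain human-approved entries.  Names here are hypothetical.
from pathlib import Path
import tempfile

def unauthorized_entries(startup_dir, approved):
    """Return startup entries that no human has approved."""
    return sorted(p.name for p in Path(startup_dir).iterdir()
                  if p.name not in approved)

# Simulate a startup folder with one approved and one sneaky entry.
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "backup_tool.cmd").touch()   # the owner put this here
    (Path(d) / "totally_fine.exe").touch()  # a bot added this one
    approved = {"backup_tool.cmd"}
    print(unauthorized_entries(d, approved))   # ['totally_fine.exe']
```

Of course, this only helps if the check itself can't be modified by the same program it's checking -- which is the hard part.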
> And does all of this in a way such that you don't notice it
Maybe we need operating systems that make it impossible for the human
not to notice things are happening? Or would that create a flurry of
warning messages? (I must admit I turned off my browser's warnings
about confidential start and confidential stop of data. This comes up
when I log on or enter an order on-line.)
> 2. You visit a web site or read an email that does basically the same
> as #1 but is based on bugs in your Internet browsing software. The web
> site (or AD on the web site) or email contains HTML code that exploits
> a bug and allows code to be inserted into your system.
That really bugs me. As far as I know, Internet browsing software
should be READ ONLY with restrictions. It should be extremely limited
in what it allows an external site to do on my machine. I dislike the
idea of any site's -- even a 'trusted one' -- running their programs on my
machine. How do I know their programs are not buggy, even from a
trusted site?
> 3. Social engineering is where a pop up or email says click here and
> you WIN, GET, etc ... a million, prize, etc... and what you are
> clicking is a program (often disguised as a graphic) which installs a
> BOT on your computer.
Why do browser writers create this kind of capability?
> If you surf you may be exposed. The only way to stop this is to
> settle for a very restricted experience.
This is very frustrating. When I got my new machine at work I disabled
all that stuff. Then I found I couldn't browse anywhere since everyone
required it. Why, I don't know; sites seemed perfectly able to
present information in an attractive way before those fancy features.
Further, my employer has me use sites that require fancy stuff. At
least my browser clearly warned me of the risks when I turned that back on.
>> Lastly, why do such vulnerabilities exist in the first place? I keep
>> reading how the present Windows operating system is old; shouldn't all
>> the necessary fixes be developed by now?
> Modern OS's have 10s of millions of lines of code. People buy
> features. They don't buy future security problems. All those systems
> designed with security as the first goal fell on the junk heap of
> computing past and continue to do so. Well except for some very
> special cases where market share and cost doesn't matter. But even the
> NSA finds it cheaper to build totally isolated rooms, and I mean
> totally, to run software on insecure systems than try and develop
> custom things that are secure from the ground up. And they will likely
> have holes also, just not as many. Maybe.
I'm still confused, but I think it's as you said -- people want features.
Computers do not _have_ to allow external entities to have control at
all. The developers have chosen to include this for "service and
features" and failed to put in proper controls at the start, IMHO. A
PC on a network, for instance, should not accept any networked
instructions or upgrades without a security key. What's to stop some
well-intentioned but incompetent user from issuing his own upgrades
over the network and screwing everyone up?
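The "security key" idea above is roughly what message authentication does today. A minimal sketch using Python's standard hmac module, assuming a hypothetical shared key: an update is accepted only if its tag was computed with that key, so a well-intentioned (or malicious) outsider without the key can't push one through.

```python
# Sketch of the "security key" idea: an update is accepted only when it
# carries a tag computed with a key the sender must possess.
import hmac, hashlib

KEY = b"shared-secret-only-the-vendor-knows"   # hypothetical key

def sign(update: bytes) -> str:
    return hmac.new(KEY, update, hashlib.sha256).hexdigest()

def accept(update: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign(update), tag)

good = b"patch v1.2"
print(accept(good, sign(good)))            # True: the keyholder signed it
print(accept(b"evil patch", sign(good)))   # False: tag doesn't match
```

Real systems use public-key signatures so the verifying machine never holds a secret at all, but the principle is the same.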
I'll note in contrast that in IBM's System/360, critical operating
system functions had to be done in 'supervisor state', which was
strictly controlled by hardware. You could submit and execute an
application program that does damage but you can't touch the operating
system. Application programs are subject to various checks and
restrictions, including hardware blocks that were included in
System/360 from day one.
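The supervisor-state split described above can be modeled in a few lines: a mode bit that privileged operations check before running. This is only a toy illustration of the concept, not S/360's actual mechanism (which was enforced in hardware, not software).

```python
# Toy model of the supervisor/problem-state split: privileged
# operations check a mode bit before they will run.
class ToyMachine:
    def __init__(self):
        self.supervisor = False      # applications start unprivileged

    def write_os_memory(self, data):
        if not self.supervisor:
            raise PermissionError("privileged operation in problem state")
        return f"OS updated: {data}"

m = ToyMachine()
try:
    m.write_os_memory("patch")       # an application tries this...
except PermissionError as e:
    print("blocked:", e)             # ...and the mode check stops it

m.supervisor = True                  # only the OS itself runs this way
print(m.write_os_memory("patch"))
```

Modern PC processors do have the same ring/mode machinery; the bugs the earlier poster described are precisely ways of tricking code that is already running in the privileged mode.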
But the result is that the systems maintenance effort for an S/360 is
far greater than that required for a PC. Presumably few
owners would want to bother doing all the work necessary.
> What people do not realize is that an off the shelf Windows or Mac
> system with MS Office, Email, web surfing, iTunes, etc... is a more
> complicated system than their car or even the Apollo moon shots. It's
> very hard to touch one piece in isolation. And folks will argue that
> if designed "right" this could all be avoided. To some degree they are
> correct. But it will never be perfect, even when folks try
> hard. Things are just too complicated for our minds or even our
> management structures to control it all.
I agree that it's complex. But I disagree it's insurmountable.
I am far from an expert. But IMHO too much sophistication was rushed
into the marketplace too fast without adequate protection built in.
IMHO the "young turks" didn't know their history and should've.
IBM's first real operating system for S/360, known as "OS", turned out
to be a disaster. It was extremely slow, a resource hog, and totally
unsuited for the low-end machines it was intended for. They couldn't
release it as is. They developed some alternatives (DOS, BOS, BPS,
TOS) so people could at least use the new hardware, and delayed
everything for about a year, nearly putting IBM into bankruptcy (lots
of costs, no revenue). The point is that they chose to wait. They
probably should've waited even longer than they did; I think it took a
while for the early production OS to be decent.
Modern developers should've learned from that experience: "The birth
of a baby takes nine months no matter how many women are involved" and
"adding people to a late project only makes it later", said the mgr of
In the very early days of computers the users were all programmers
presumably with good intentions and skills. But by the 1960s it was
clear the user community would be large with a variety of skill
levels. Computer designers put in safety checks so program bugs
(intentional or accidental) would only hurt the responsible user, not
everyone else. Things like file restrictions, time limits, and
resource limits kept things under control. Some controls were done by the human
operators who simply wouldn't allow certain jobs to run. By the 1980s
these controls were sophisticated and automated. A corporate
programmer couldn't go into the payroll system and give himself a
raise.
What I don't understand is why the PC world, especially when used in
networking and Internet service, failed to adopt the same controls the
mainframe world did.
Thanks again for your explanations!
[public replies please]