I need a summary of an article called “Missing Persons”. What are the author's key points, what major strength does the article have, and what would you see as its main weakness? In addition, what do you think accounts for a changed relationship between humans and machines/technology? It has to be 1 to 2 pages. Please use simple words.
missing_person.docx
CHAPTER 1
Missing Persons
SOFTWARE EXPRESSES IDEAS about everything from the nature of a
musical note to the nature of personhood. Software is also subject to an
exceptionally rigid process of “lock-in.” Therefore, ideas (in the present era,
when human affairs are increasingly software driven) have become more
subject to lock-in than in previous eras. Most of the ideas that have been locked
in so far are not so bad, but some of the so-called web 2.0 ideas are stinkers, so
we ought to reject them while we still can.

Speech is the mirror of the soul; as a man speaks, so is he.
PUBLILIUS SYRUS

Fragments Are Not People
Something started to go wrong with the digital revolution around the turn of the
twenty-first century. The World Wide Web was flooded by a torrent of petty
designs sometimes called web 2.0. This ideology promotes radical freedom on
the surface of the web, but that freedom, ironically, is more for machines than
people. Nevertheless, it is sometimes referred to as “open culture.” Anonymous
blog comments, vapid video pranks, and lightweight mashups may seem trivial
and harmless, but as a whole, this widespread practice of fragmentary,
impersonal communication has demeaned interpersonal interaction.
Communication is now often experienced as a superhuman phenomenon that
towers above individuals. A new generation has come of age with a reduced
expectation of what a person can be, and of who each person might become.

The Most Important Thing About a Technology Is How It Changes People

When I work with experimental digital gadgets, like new variations on virtual
reality, in a lab environment, I am always reminded of how small changes in the
details of a digital design can have profound unforeseen effects on the
experiences of the humans who are playing with it. The slightest change in
something as seemingly trivial as the ease of use of a button can sometimes
completely alter behavior patterns. For instance, Stanford University researcher
Jeremy Bailenson has demonstrated that changing the height of one's avatar in
immersive virtual reality transforms self-esteem and social self-perception.
Technologies are extensions of ourselves, and, like the avatars in Jeremy's lab,
our identities can be shifted by the quirks of gadgets. It is impossible to work
with information technology without also engaging in social engineering. One
might ask, “If I am blogging, twittering, and wikiing a lot, how does that change
who I am?" or "If the 'hive mind' is my audience, who am I?" We inventors of
digital technologies are like stand-up comedians or neurosurgeons, in that our
work resonates with deep philosophical questions; unfortunately, we've proven
to be poor philosophers lately. When developers of digital technologies design a
program that requires you to interact with a computer as if it were a person, they
ask you to accept in some corner of your brain that you might also be conceived
of as a program. When they design an internet service that is edited by a vast
anonymous crowd, they are suggesting that a random crowd of humans is an
organism with a legitimate point of view. Different media designs stimulate
different potentials in human nature. We shouldn't seek to make the pack
mentality as efficient as possible. We should instead seek to inspire the
phenomenon of individual intelligence. “What is a person?” If I knew the
answer to that, I might be able to program an artificial person in a computer.
But I can't. Being a person is not a pat formula, but a quest, a mystery, a leap of faith.

Optimism

It would be hard for anyone, let alone a technologist, to get up
in the morning without the faith that the future can be better than the past. Back
in the 1980s, when the internet was only available to a small number of pioneers,
I was often confronted by people who feared that the strange technologies I was
working on, like virtual reality, might unleash the demons of human nature. For
instance, would people become addicted to virtual reality as if it were a drug?
Would they become trapped in it, unable to escape back to the physical world
where the rest of us live? Some of the questions were silly, and others were
prescient.

How Politics Influences Information Technology

I was part of a
merry band of idealists back then. If you had dropped in on, say, me and John
Perry Barlow, who would become a cofounder of the Electronic Frontier
Foundation, or Kevin Kelly, who would become the founding editor of Wired
magazine, for lunch in the 1980s, these are the sorts of ideas we were bouncing
around and arguing about. Ideals are important in the world of technology, but
the mechanism by which ideals influence events is different than in other
spheres of life. Technologists don't use persuasion to influence you—or, at
least, we don't do it very well. There are a few master communicators among us
(like Steve Jobs), but for the most part we aren't particularly seductive. We
make up extensions to your being, like remote eyes and ears (web-cams and
mobile phones) and expanded memory (the world of details you can search for
online). These become the structures by which you connect to the world and
other people. These structures in turn can change how you conceive of yourself
and the world. We tinker with your philosophy by direct manipulation of your
cognitive experience, not indirectly, through argument. It takes only a tiny
group of engineers to create technology that can shape the entire future of
human experience with incredible speed. Therefore, crucial arguments about the
human relationship with technology should take place between developers and
users before such direct manipulations are designed. This book is about those
arguments. The design of the web as it appears today was not inevitable. In the
early 1990s, there were perhaps dozens of credible efforts to come up with a
design for presenting networked digital information in a way that would attract
more popular use. Companies like General Magic and Xanadu developed
alternative designs with fundamentally different qualities that never got out the
door. A single person, Tim Berners-Lee, came to invent the particular design of
today's web. The web as it was introduced was minimalist, in that it assumed
just about as little as possible about what a web page would be like. It was also
open, in that no page was preferred by the architecture over another, and all
pages were accessible to all. It also emphasized responsibility, because only the
owner of a website was able to make sure that their site was available to be
visited. Berners-Lee's initial motivation was to serve a community of
physicists, not the whole world. Even so, the atmosphere in which the design of
the web was embraced by early adopters was influenced by idealistic
discussions. In the period before the web was born, the ideas in play were
radically optimistic and gained traction in the community, and then in the world
at large. Since we make up so much from scratch when we build information
technologies, how do we think about which ones are best? With the kind of
radical freedom we find in digital systems comes a disorienting moral
challenge. We make it all up—so what shall we make up? Alas, that dilemma—
of having so much freedom—is chimerical. As a program grows in size and
complexity, the software can become a cruel maze. When other programmers
get involved, it can feel like a labyrinth. If you are clever enough, you can write
any small program from scratch, but it takes a huge amount of effort (and more
than a little luck) to successfully modify a large program, especially if other
programs are already depending on it. Even the best software development
groups periodically find themselves caught in a swarm of bugs and design
conundrums. Little programs are delightful to write in isolation, but the process
of maintaining large-scale software is always miserable. Because of this, digital
technology tempts the programmer's psyche into a kind of schizophrenia. There
is constant confusion between real and ideal computers. Technologists wish
every program behaved like a brand-new, playful little program, and will use
any available psychological strategy to avoid thinking about computers
realistically. The brittle character of maturing computer programs can cause
digital designs to get frozen into place by a process known as lock-in. This
happens when many software programs are designed to work with an existing
one. The process of significantly changing software in a situation in which a lot
of other software is dependent on it is the hardest thing to do. So it almost never happens.
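As a minimal sketch of that mechanism (the scenario and function names below are invented for illustration, not taken from this chapter), consider a small Python program whose early quirk becomes untouchable:

    # Hypothetical illustration of lock-in: a quirky early decision survives
    # because other code has come to depend on it.

    def legacy_timestamp(seconds):
        # Original design choice: keep only whole-second precision.
        # At the time it seemed harmless.
        return int(seconds)

    def billing_report(events):
        # Written later, by someone else: assumes the timestamps are integers
        # and uses them as dictionary keys.
        return {legacy_timestamp(t): name for t, name in events}

    def audit_log(events):
        # Yet another dependent program, zero-padding the same integers.
        return [f"{legacy_timestamp(t):010d} {name}" for t, name in events]

    events = [(12.25, "login"), (12.75, "logout")]
    print(billing_report(events))  # {12: 'logout'} -- both events collapse into one second
    print(audit_log(events))       # ['0000000012 login', '0000000012 logout']

"Fixing" legacy_timestamp to return fractional seconds would silently change billing_report and break audit_log, so the more callers accumulate, the less anyone dares touch it; the whole-second quirk hardens into a rule.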
Occasionally, a Digital Eden Appears

One day in the early 1980s, a music synthesizer designer named Dave Smith casually made up a way to
represent musical notes. It was called MIDI. His approach conceived of music
from a keyboard player's point of view. MIDI was made of digital patterns that
represented keyboard events like “key-down” and “key-up.” That meant it could
not describe the curvy, transient expressions a singer or a saxophone player can
produce. It could only describe the tile mosaic world of the keyboardist, not the
watercolor world of the violin. But there was no reason for MIDI to be
concerned with the whole of musical expression, since Dave only wanted to
connect some synthesizers together so that he could have a larger palette of
sounds while playing a single keyboard. In spite of its limitations, MIDI became
the standard scheme to represent music in software. Music programs and
synthesizers were designed to work with it, and it quickly proved impractical to
change or dispose of all that software and hardware. MIDI became entrenched,
and despite Herculean efforts to reform it on many occasions by a multi-decade-
long parade of powerful international commercial, academic, and professional
organizations, it remains so.
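The limitation is easier to see with the message format in view. A note in MIDI is announced and ended by two short messages, each a status byte followed by a note number and a velocity in the range 0 to 127. The helper functions below are a sketch written for illustration, not part of any particular MIDI library:

    # A rough sketch of the two keyboard events MIDI is built around.
    # Standard note-on/note-off messages are three bytes each.

    def note_on(note, velocity, channel=0):
        # Status byte 0x90 plus the channel, then note number and velocity (0-127).
        return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

    def note_off(note, channel=0):
        # Status byte 0x80 plus the channel; the release velocity is often ignored.
        return bytes([0x80 | channel, note & 0x7F, 0])

    # Middle C, struck and released: the entire gesture is two discrete events.
    print(note_on(60, 100).hex())   # 903c64
    print(note_off(60).hex())       # 803c00

A singer's glide between pitches or a violinist's swelling bow stroke has no direct home in this key-down/key-up vocabulary; whatever cannot be stated as numbered notes and velocities must be approximated or left out.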
Standards and their inevitable lack of prescience posed a nuisance before computers, of course. Railroad gauges—the dimensions
of the tracks—are one example. The London Tube was designed with narrow
tracks and matching tunnels that, on several of the lines, cannot accommodate
air-conditioning, because there is no room to ventilate the hot air from the
trains. Thus, tens of thousands of modern-day residents in one of the world's
richest cities must suffer a stifling commute because of an inflexible design
decision made more than one hundred years ago. But software is worse than
railroads, because it must always adhere with absolute perfection to a
boundlessly particular, arbitrary, tangled, intractable messiness. The
engineering requirements are so stringent and perverse that adapting to shifting
standards can be an endless struggle. So while lock-in may be a gangster in the
world of railroads, it is an absolute tyrant in the digital world.

Life on the Curved Surface of Moore's Law

The fateful, unnerving aspect of information
technology is that a particular design will occasionally happen to fill a niche
and, once implemented, turn out to be unalterable. It becomes a permanent
fixture from then on, even though a better design might just as well have taken
its place before the moment of entrenchment. A mere annoyance then explodes
into a cataclysmic challenge because the raw power of computers grows
exponentially. In the world of computers, this is known as Moore's law.
Computers have gotten millions of times more powerful, and immensely more
common and more connected, since my career began—which was not so very
long ago. It's as if you kneel to plant a seed of a tree and it grows so fast that it
swallows your whole village before you can even rise to your feet.
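The "millions of times" figure is plain doubling arithmetic; here is a back-of-the-envelope sketch, assuming (as an illustration, not a claim from the text) one doubling roughly every two years:

    # Rough arithmetic behind "millions of times more powerful".
    # Assumes one doubling roughly every two years over a few decades.
    doublings = 40 // 2
    print(2 ** doublings)   # 1048576: twenty doublings already pass a million

At that pace, a design decision made early in a career is amplified about a millionfold before the career is over.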
So software presents what often feels like an unfair level of responsibility to technologists.
Because computers are growing more powerful at an exponential rate, the
designers and programmers of technology must be extremely careful when they
make design choices. The consequences of tiny, initially inconsequential
decisions often are amplified to become defining, unchangeable rules of our
lives. MIDI now exists in your phone and in billions of other devices. It is the
lattice on which almost all the popular music you hear is built. Much of the
sound around us—the ambient music and audio beeps, the ring-tones and
alarms—is conceived in MIDI. The whole of the human auditory experience
has become filled with discrete notes that fit in a grid. Someday a digital design
for describing speech, allowing computers to sound better than they do now
when they speak to us, will get locked in. That design might then be adapted to
music, and perhaps a more fluid and expressive sort of digital music will be
developed. But even if that happens, a thousand years from now, when a
descendant of ours is traveling at relativistic speeds to explore a new star
system, she will probably be annoyed by some awful beepy MIDI-driven music
to alert her that the antimatter filter needs to be recalibrated.

Lock-in Turns Thoughts into Facts

Before MIDI, a musical note was a bottomless idea that
transcended absolute definition. It was a way for a musician to think, or a way
to teach and document music. It was a mental tool distinguishable from the
music itself. Different people could make transcriptions of the same musical
recording, for instance, and come up with slightly different scores. After MIDI,
a musical note was no longer just an idea, but a rigid, mandatory structure you
couldn't avoid in the aspects of life that had gone digital. The process of lock-in
is like a wave gradually washing over the rulebook of life, culling the
ambiguities of flexible thoughts as more and more thought structures are
solidified into effectively permanent reality. We can compare lock-in to
scientific method. The philosopher Karl Popper was correct when he claimed
that science is a process that disqualifies thoughts as it proceeds—one can, for
example, no longer reasonably believe in a flat Earth that sprang into being
some thousands of years ago. Science removes ideas from play empirically, for
good reason. Lock-in, however, removes design options based on what is
easiest to program, what is politically feasible, what is fashionable, or what is
created by chance. Lock-in removes ideas that do not fit into the winning digital
representation scheme, but it also reduces or narrows the ideas it immortalizes,
by cutting away the unfathomable penumbra of meaning that distinguishes a
word in natural language from a command in a computer program. The criteria
that guide science might be more admirable than those that guide lock-in, but
unless we come up with an entirely different way to make software, further
lock-ins are guaranteed. Scientific progress, by contrast, always requires
determination and can stall because of politics or lack of funding or curiosity.
An interesting challenge presents itself: How can a musician cherish the
broader, less-defined concept of a note that preceded MIDI, while using MIDI
all day long and interacting with other musicians through the filter of MIDI? Is
it even worth trying? Should a digital artist just give in to lock-in and accept the
infinitely explicit, finite idea of a MIDI note? If it's important to find the edge
of mystery, to ponder the things that can't quite be defined—or rendered into a
digital standard—then we will have to perpetually seek out entirely new ideas
and objects, abandoning old ones like musical notes. Throughout this book, I'll
explore whether people are becoming like MIDI notes—overly defined, and
restricted in practice to what can be represented in a computer. This has
enormous implications: we can conceivably abandon musical notes, but we
can't abandon ourselves. When Dave made MIDI, I was thrilled. Some friends
of mine from the original Macintosh team quickly built a hardware interface so
a Mac could use MIDI to control a synthesizer, and I worked up a quick music
creation program. We felt so free—but we should have been more thoughtful.
By now, MIDI has become too hard to change, so the culture has changed to
make it seem fuller than it was initially intended to be. We have narrowed what
we expect from the most commonplace forms of musical sound in order to
make the technology adequate. It wasn't Dave's fault. How could he have
known?

Digital Reification: Lock-in Turns Philosophy into Reality

A lot of the
locked-in ideas about how software is put together come from an old operating
system called UNIX. It has some characteristics that are related to MIDI. While
MIDI squeezes musical expression through a limiting model of the actions of
keys on a musical keyboard, UNIX does the same for all computation, but using
the actions of keys on typewriter-like keyboards. A UNIX program is often
similar to a simulation of a person typing quickly. There‟s a core design feature
in UNIX called a “command line interface.” In this system, you type
instructions, you hit “return,” and the instructions are carried out.* A unifying
design principle of UNIX is that a program can't tell if a person hit return or a
program did so. Since real people are slower than simulated people at operating
keyboards, the importance of precise timing is suppressed by this particular
idea. As a result, UNIX is based on discrete events that don't have to happen at a precise moment in time.
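The principle that a program cannot tell whether a person or another program hit return takes only a few lines to demonstrate. The little script below is a stand-in written for illustration, not anything that ships with UNIX:

    # echo_upper.py: reads lines from standard input, one per "return," and
    # prints them back in capitals. It cannot tell whether a person typed the
    # lines or another program piped them in, and the time between lines
    # carries no meaning for it.
    import sys

    for line in sys.stdin:
        print(line.strip().upper())

Run by hand (python echo_upper.py, typing lines at the keyboard) or fed by another program (echo "do re mi" | python echo_upper.py), it behaves identically; whether a line arrives after a thoughtful pause or a microsecond makes no difference to it.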
The human organism, meanwhile, is based on continuous sensory, cognitive, and motor processes that have to be
synchronized precisely in time. (MIDI falls somewhere in between the concept
of time embodied in UNIX and in the human body, being based on discrete
events that happen at particular times.) UNIX expresses too large a belief in
discrete abstract symbols and not enough of a belief in temporal, continuous,
nonabstract reality; it is more like a typewriter than a dance partner. (Perhaps
typewriters or word processors ought to always be instantly responsive, like a
dance partner—but that is not yet the case.) UNIX tends to “want” to connect to
reality as if reality were a network of fast typists. If you hope for computers to
be designed to serve embodied people as well as possible, UNIX would
have to be considered a bad design. I discovered this in the 1970s, when I tried
to make responsive musical instruments with it. I was trying to do what MIDI
does not, which is work with fluid, hard-to-notate aspects of music, and
discovered that the underlying philosophy of UNIX was too brittle and clumsy
for that. The arguments in favor of UNIX focused on how computers would get
literally millions of times faster in the coming decades. The thinking was that
the speed increase would overwhelm the timing problems I was worried about.
Indeed, today's computers are millions of times faster, and UNIX has become
an ambient part of life. There are some reasonably expressive tools that have
UNIX in them, so the speed increase has sufficed to compensate for UNIX's
problems in some cases. But not all. I have an iPhone in my pocket, and sure
enough, the thing has what is essentially UNIX in it. An unnerving element of
this gadget is that it is haunted by a weird set of unpredictable user interface
delays. One's mind waits for the response to the press of a virtual button, but it
doesn't come for a while. An odd tension builds during that moment, and easy
intuition is replaced by nervousness. It is the ghost of UNIX, still refusing to
accommodate the rhythms of my body and my mind, after all these years. I'm
not picking in particular on the iPhone (which I'll praise in another context later
on). I could just as easily have chosen any contemporary personal computer.
Windows isn't UNIX, but it does share UNIX's idea that a symbol is more
important than the flow of time and the underlying continuity of experience.
The grudging relationship between UNIX and the tem …