Today, Apple is going to unveil its newest
model of (yet another) iPhone. Called the 6s ("six-ess", which rhymes with
"success"), it represents the quintessence of the newest technological
achievements of a smartphone industry that started in June 2007 (merely eight
years ago - can you believe it?) when the first iPhone was launched.
So, what should we expect from Apple now?
The only hint is to ask Siri (Apple's voice recognition software and personal
assistant). If one does so, Siri promptly replies, "I hear something big is
happening on September 9" (though in some cases people report that Siri just
teases them, saying something about being very cute when curious and the
like).
One can only guess what Apple will come up
with this time, but according to many it will be a better, more intelligent
personal assistant operated by the user's voice.
How about having an artificial-intelligence-type knowledge navigator right in
your smartphone? Your own personal Jesus?
In 2007 (a long time ago, it now seems), I was
very interested in voice recognition and dictation software. Back
then, I worked as a research fellow at an English university, and part of my
job was to write long and meaningless project reports for the EU Commission.
Typing them on the keyboard was cumbersome and took days and weeks, so I
decided to speed up the process. My Dutch colleague told me about Dragon
NaturallySpeaking (DNS), which in those days was used primarily by disabled
people who wanted to work on their personal computers but had difficulty
typing or operating a mouse. The colleague, a young professional and a
father of two adorable kids, had just been diagnosed with a rare disease that
prevented him from moving his arms freely from the shoulders down. He liked
DNS and used it himself, so I bought a copy too.
The software came on two CDs and had to be
installed on one's PC, where it created a special folder with a constantly
growing library. Then followed several days of painful adjustment and reading
aloud from the provided texts about JFK and American history, while the
software learned how I pronounced certain words and phrases.
I used DNS for some time, then bought a new
laptop and was too lazy to start the process over again. My whole library of
words and styles stayed on the old laptop. I should say that I did not like
DNS very much - the dictation process was slow, one had to use a special
headset with a super-sensitive microphone, and above all I discovered that my
dictation software did not fulfill the purpose I had obtained it for. It was
no use for writing long reports for the EU Commission. E-mails and short notes
were pretty much fine, but each time I started dictating long chunks of text
on tourism and digital heritage, I soon came to a halt.
Our perception of typing a text on a computer is
very specific and differs greatly from dictating the same text. We are
accustomed to scrolling through the text with a mouse, adding bits and
pieces here and deleting other bits and pieces there. Typing on a PC is full of
repetitive corrections. This is very different from working on a typewriter
(every mistake costs you the whole page) or writing with an ink pen. It seems
that the style of writing has changed considerably over the last 30 years as
personal computers became available. I also tried to record my monologue
and let the software process it (it offered this feature too) – I remember
walking along Covent Garden and mumbling something into my MP3 recorder – but
the result was far from perfect. DNS did not omit the words it could not
recognize – it simply invented words and whole structures of its own.
Back then, I concluded that all voice
recognition software was rubbish and returned to typing my reports by
hand. Luckily for me, my contract soon expired and I could afford to type
less. In the meantime, voice recognition made a giant leap forward. In 2011
Apple introduced its iPhone 4s equipped with Siri, a piece of software it had
acquired the year before.
Siri was far from perfect, of course. It
recognized words but had problems with various accents and long sentences,
an imperfection that could be exploited for both good and bad causes. In
Stephen King’s 2014 novel “Mr. Mercedes”, Brady Hartsfield sets up a
voice-operated security system in his cellar, pronouncing words like
“darkness” or “doom” to deactivate the self-destruct alarm on his computers,
intended to protect them from unwanted intruders. However, the former
detective Bill Hodges is able to defeat it with the help of his young
associate Jerome Robinson, who imitates the timbre and sound of Hartsfield’s
voice.
However, what if Apple has prepared something
more advanced this time? In the 2013 film “Her”, directed by Spike Jonze, we
follow the story of Theodore Twombly (played by Joaquin Phoenix), a man who
develops a relationship with Samantha, an intelligent computer operating
system personified through a female voice.
There were many jokes about people trying to start a relationship with Siri
(including Raj from “The Big Bang Theory”), but it always failed, since Siri
was not an artificial intelligence and would inevitably fail a Turing test (a
test designed to distinguish a machine from a human, proposed by Alan Turing,
an Enigma code-breaker and an extraordinary British mathematician). What if
things are different this time? Would you buy yourself a personal companion
(like the personal Jesus from Depeche Mode’s famous 1989 song)? Would Siri be
able to become your best friend, your girlfriend or boyfriend? After all, we
all need a little attention and like to talk, so why not talk to Siri? She
will always be there, within reach of your hand, in your new iPhone 6s. Let us
wait and see what Apple has up its sleeve for us.