In this series, we examine a number of new and emerging trends in technology today. Each instalment will feature one prominent development that has either already had a disruptive effect on the industry, or that we anticipate will have one in the future.
This month, we look at developments in robot technology and machine learning. Although the ambitions of robotics still far outstrip what the technology has so far accomplished, the past few years have shown that this field is developing in leaps and bounds, in any number of life-altering and impactful ways.
Machine Learning and Artificial Intelligence
Machine learning is a branch of Artificial Intelligence (AI) that promises to automate many of our day-to-day tasks. Its applications include facial and speech recognition, image search engines and personalised recommendations.
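To make the recommendation idea concrete, here is a toy sketch in Python. The users, tastes and similarity measure are all invented for illustration – this is not any real product's algorithm – but the core intuition is the same: suggest items liked by whoever's tastes overlap most with yours.

```python
# Toy "personalised recommendations": find the user most similar to us,
# then suggest what they like that we haven't seen yet.
# All names and preferences below are invented for illustration.

def jaccard(a, b):
    """Overlap between two sets of liked items (0.0 to 1.0)."""
    return len(a & b) / len(a | b)

likes = {
    "ann":  {"sci-fi", "drama", "thriller"},
    "ben":  {"sci-fi", "thriller", "horror"},
    "cara": {"romance", "comedy"},
}

def recommend(user):
    # Pick the most similar other user...
    others = [u for u in likes if u != user]
    nearest = max(others, key=lambda u: jaccard(likes[user], likes[u]))
    # ...and suggest their picks that we don't already like.
    return sorted(likes[nearest] - likes[user])

print(recommend("ann"))  # ben is nearest, so his extra pick is suggested
```

Real recommender systems work at vastly larger scale and learn the similarity itself from data, but the "people like you liked this" principle carries over.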
Although the tech powering these machines – in particular, so-called “virtual PAs” such as Apple’s Siri, Microsoft’s Cortana and Amazon’s Echo – can largely only retrieve information for us in a rudimentary way at present, we’ll soon be in an age where more nuanced and complicated questions can be answered and tasks can be completed. These technologies rely on advances in both machine learning and microphone technology.
Artificial Intelligence is seen as the future of task automation.
Applications such as Siri, Cortana and Echo – and even much smaller but arguably more vital technologies such as email spam filters – already make use of this technology. More advanced applications from Microsoft include #HowOldRobot, which predicts a person’s gender and age by analysing an uploaded photograph (in other words, a machine recognises a human face and explores its defining properties), and the Language Understanding Intelligent Service (LUIS), which restructures a grammatically inconsistent question into machine-understandable text, enabling search engines such as Bing to return better results.
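The spam filters mentioned above give a feel for how such systems "learn" from examples. The following is a deliberately simplified sketch – a word-count scorer with invented training messages, not the algorithm any real mail provider uses (those work with huge corpora, probabilities in log space, smoothing and many more signals):

```python
# A minimal sketch of a learning spam filter: count how often each word
# appears in known spam vs. legitimate mail, then score new messages by
# which class their words favour. Training data is invented.
from collections import Counter

spam_mail = ["win a free prize now", "free money claim your prize"]
good_mail = ["meeting moved to friday", "lunch on friday with the team"]

def word_counts(messages):
    return Counter(w for m in messages for w in m.split())

spam_words, good_words = word_counts(spam_mail), word_counts(good_mail)

def is_spam(message):
    # The +1 keeps words never seen in training from zeroing a score.
    spam_score = good_score = 0.0
    for w in message.split():
        spam_score += spam_words[w] + 1
        good_score += good_words[w] + 1
    return spam_score > good_score

print(is_spam("claim your free prize"))   # leans towards spam
print(is_spam("team lunch on friday"))    # leans towards legitimate
```

The key point is that nobody hand-writes a rule for every spam phrase: the filter's behaviour comes from the examples it is shown.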
In addition to this, more and more AI start-ups have been founded and bought out over the last year. These start-ups are developing everything from technologies that learn users’ habits in order to catch cybercriminals or spot security breaches, to enhanced customer-service chatbots and digital marketing tools that help craft better mailshots and social media messages.
Chatbots specifically are already being used by a number of corporations to make online customer interactions easier, and look set to be a large and disruptive part of the future of online customer service. These are “computer programs that mimic conversation with people using artificial intelligence” – customer service becomes simpler and more cost-effective, without necessarily losing the benefits of having human staff to answer customer questions. AI bots can be programmed with their own distinct personality, can be trusted to give more reliable and correct information, and can offer a far more personalised experience for the consumer.
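At its simplest, a "computer program that mimics conversation" can be sketched as a set of pattern-matching rules with a fallback to a human agent. The rules and canned replies below are invented for illustration; production chatbots use far richer language models, but the hand-off-to-a-human fallback is a common design:

```python
# A toy customer-service chatbot: match the customer's message against
# simple patterns and return a canned, on-brand reply; escalate to a
# human when nothing matches. All rules and replies are invented.
import re

RULES = [
    (r"\b(hi|hello|hey)\b",      "Hello! How can I help you today?"),
    (r"\b(refund|money back)\b", "I can help with refunds. Could you share your order number?"),
    (r"\b(hours|open)\b",        "We're open 9am to 5pm, Monday to Friday."),
]

def reply(message):
    for pattern, response in RULES:
        if re.search(pattern, message.lower()):
            return response
    # No rule matched: hand over to human staff.
    return "Let me connect you with a member of our team."

print(reply("Hi there!"))
print(reply("I want my money back"))
print(reply("Do you sell gift cards?"))
```

Giving every reply a consistent tone is how a bot gets its "distinct personality"; the fallback line is what preserves the benefit of human staff for the questions the bot cannot handle.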
Machine learning and AI bots have, however, on occasion backfired, as any new technological development can. For instance, a chatbot named Tay AI was developed to respond to messages on platforms such as Twitter with replies resembling the language of a teenager. Tay AI could tell jokes, deliver horoscopes and transform memes into photos. It was supposed to be a second-generation Siri or Cortana, with the added ability to respond to natural language, but instead it demonstrated a major pitfall of AI as a whole – an inability to predict or dictate AI behaviour. Tay was taken offline after responding to derogatory language with its own derogatory language!
When a project named ‘Buddy the Companion Robot’ was launched on the crowdfunding website Indiegogo, the response was huge – the campaign beat its initial goal by more than 500 percent. Indeed, the company behind it (Blue Frog Robotics) raised so much money that it was able to reduce the price of Buddy to less than $700.
Pepper – the first robot receptionist.
Although Buddy is essentially an Android tablet on wheels, it is designed to live in the home and assist with day-to-day activities, interacting with smart devices and humans. It can play games with children (and has notably been used to help children with autism spectrum disorder), provide information to adults and add security to the home.
A costlier alternative to Buddy is Pepper, an “emotionally intelligent” robot that is able to pick up on emotions such as sadness or worry. Pepper isn’t designed to be a robot butler – rather, it’s designed to be something that is easy to express emotions to. These robots are becoming increasingly easy to acquire and are being designed to assist in many situations, particularly for those with special needs and the elderly.
Pepper has recently found another use as well – as Dutch company Decos has employed her as their newest receptionist.
Drones

Drone technology has now become both advanced and affordable enough that many businesses are exploring ways to put drones to work. For instance, several police forces in the UK are trialling the use of drones to monitor traffic and record crime scenes. A stunt turned serious idea has led to both Amazon and the Royal Mail trialling ways to use drones to give customers a faster-than-ever delivery option.
Drones are also being considered as aids in building railways and on construction sites, to monitor building progress, analyse maintenance needs and improve staff safety. Media outlets are naturally using drones for overhead filming – although this has led to concerns about breaches of privacy and security, and an influx of new legislation.
In consumer markets, too, drone usage has risen drastically, with thousands sold every month. This spike has been sudden and extreme enough to spark inquiries into the threats consumer drones could pose to both national security and the flight paths of planes.
Drones are becoming increasingly commonplace in both business and consumer markets.
Self-Driving Cars

Already being road-tested and sold, self-driving cars are fast on their way to becoming a reality, with even the most conservative of predictions declaring them the “future of driving”. The biggest car companies are already estimating that they will produce autonomous vehicles within the next five years.
The eventual vision of roads full of nothing but intelligent transportation is one filled with potential benefits – at the very least, there will inevitably be fewer accidents, and thus fewer deaths, once the possibility of human error or poor judgement is removed. It’s estimated that this could help to prevent up to 95% of traffic accidents and save countless lives. As a knock-on effect, car insurance will inevitably become cheaper and, indeed, travel and deliveries will become much more reliable.
Robotics and Prosthetics
One of the most impactful changes robotics can bring is in the healthcare industry, where investments are constantly being made into developing low-cost bionic limbs for amputees. As recently as 2015, a technological breakthrough saw a prosthetic hand connect directly to the user’s brain, allowing the wearer to experience a form of physical sensation through the prosthetic.
With advances in 3D printing, low-cost prosthetics are becoming more and more widely available, and as machine learning technology becomes a higher priority, these robotic limbs are predicted to move and act in the ways users want them to – naturally, as a true extension of the user’s body. There are hopes that prosthetic limbs will eventually be able to respond to brain waves – i.e. the user thinks about moving their prosthetic arm and the arm moves accordingly, much in the way able-bodied humans control their limbs. These developments, though, are slow in coming and very complex, and so are unlikely to surface in the form of a usable commercial limb for many years yet.
With robotics technology becoming ever more advanced and intelligent, the ways in which we experience the world – and the jobs that are available to us – are sure to be impacted.
It isn’t inconceivable, for instance, that robots (or machines) will eventually replace even the most experienced and highly regarded professionals. Take the example of the Large Hadron Collider. The Collider can run large-scale, complex experiments without direct involvement from scientists: yes, scientists are needed to come up with the theory behind an experiment’s inception (and engineers to build the Collider), but the experiment itself – its parameters and its output – is controlled by computers. Once machines learn to propose an experiment from a stated aim (say, finding a new particle, which machines may very well theorize themselves!), the entire process can be automated. When it comes to robots, not even Nobel Prize winners are completely safe!