These Pain Points Are Preventing Companies from Adopting Deep Learning

By Colyn Emery

Takeaway: Deep learning has a lot to offer businesses, but many are still hesitant to adopt it. Here we look at some of its biggest pain points.

Deep learning is a subfield of machine learning, which (generally speaking) is technology inspired by the human brain and its functions. First introduced in the 1950s, machine learning draws on what is known as the artificial neural network: a web of interconnected data nodes that collectively forms a basis for artificial intelligence. (For the basics of machine learning, check out Machine Learning 101.)
Machine learning essentially allows computer programs to change themselves in response to external data or programming, and by nature it can do so without human intervention. It shares similar functionality with data mining, but the mined results are processed by machines rather than by humans. It is divided into two major categories: supervised and unsupervised learning.
Supervised machine learning involves inferring predetermined outcomes from labeled training data. In other words, the results are known in advance by the (human) programmer, and the system is trained to “learn” how to reproduce them. Unsupervised machine learning, by contrast, draws inferences from unlabeled input data, often as a means of detecting unknown patterns.
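The contrast can be made concrete in a few lines of code. The following is a minimal sketch, assuming scikit-learn and toy data invented purely for illustration: the classifier is handed labels it must learn to reproduce, while the clustering model is given no labels at all.

    # Minimal sketch: supervised vs. unsupervised learning (toy data for illustration only).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    # Supervised: every training example comes with a label known in advance.
    X_labeled = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0]])
    y_labels = np.array([0, 0, 1, 1])
    classifier = LogisticRegression().fit(X_labeled, y_labels)
    print(classifier.predict([[1.2, 1.9]]))   # predicts one of the predetermined labels

    # Unsupervised: the same points, but no labels; the model finds structure on its own.
    X_unlabeled = X_labeled.copy()
    clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X_unlabeled)
    print(clusters)                           # discovered groupings, not predefined answers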
Deep learning is unique in its ability to train itself through hierarchical algorithms, as opposed to the linear algorithms of traditional machine learning. Its hierarchies grow increasingly complex and abstract as they develop (or “learn”), and they do not rely on supervised logic. Simply put, deep learning is a highly advanced, accurate and automated form of machine learning, and it sits at the forefront of artificial intelligence technology.
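To illustrate the “hierarchical” idea, here is a minimal sketch, assuming PyTorch and arbitrary layer sizes chosen only for illustration: each successive layer transforms the output of the previous one into a more abstract representation.

    # Minimal sketch of a layered ("deep") model; sizes are arbitrary illustration choices.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(64, 32),   # first layer: low-level features of the raw input
        nn.ReLU(),
        nn.Linear(32, 16),   # second layer: more abstract combinations of those features
        nn.ReLU(),
        nn.Linear(16, 2),    # final layer: the decision itself
    )

    x = torch.randn(8, 64)   # a batch of 8 example inputs with 64 features each
    print(model(x).shape)    # torch.Size([8, 2])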

Business Applications of Deep Learning

Machine learning is already commonly used in several different industries. Social media, for instance, uses it to curate content feeds in user timelines. Google Brain was founded several years ago with the intent of productizing deep learning across Google’s range of services as the technology evolves.
With its focus on predictive analytics, the field of marketing is particularly invested in deep learning innovation. And since data accumulation is what drives the technology, industries like sales and customer support (which already possess a wealth of rich and diverse customer data) are uniquely positioned to adopt it at the ground level.
Early adoption of deep learning could very well be the key determining factor in how much specific sectors benefit from the technology, especially in its earliest phases. Nevertheless, a few specific pain points are keeping many businesses from taking the plunge into deep learning investment.

The V’s of Big Data and Deep Learning

In 2001, an analyst for META Group (now Gartner) by the name of Doug Laney outlined what researchers perceived to be the three main challenges of big data: volume, variety and velocity. Over a decade and a half later, the rapid increase in points of access to the internet (due largely to the proliferation of mobile devices and the rise of IoT technology) has brought these issues to the forefront for major tech companies, smaller businesses and startups alike. (To learn more about the three v's, see Today's Big Data Challenge Stems From Variety, Not Volume or Velocity.)
Recent statistics on global data usage are staggering. Studies indicate that roughly 90 percent of all the world’s data was created within the last couple of years alone. Worldwide mobile traffic amounted to roughly seven exabytes per month over 2016, according to one estimate, and that figure is expected to increase by about seven times within the next half decade.
Beyond volume, variety (the rapidly increasing diversity in types of data as new media evolves and expands) and velocity (the speed at which electronic media is sent to data centers and hubs) are also major factors in how businesses are adapting to the burgeoning field of deep learning. And to expand on the mnemonic device, several other v-words have been added to the list of big data pain points in recent years, including: 
  • Validity: The measure of input data accuracy in big data systems. Invalid data that goes undetected can cause significant problems, as well as chain reactions, in machine learning environments. (A minimal validity check is sketched below.)
  • Vulnerability: Big data naturally evokes security concerns, simply by virtue of its scale. And although there is great potential seen in security systems that are enabled by machine learning, those systems in their current incarnations are noted for their lack of efficiency, particularly due to their tendency to generate false alarms.
  • Value: Proving the potential value of big data (in business or elsewhere) can be a significant challenge for any number of reasons. If any of the other pain points in this list cannot be effectively addressed, then they in fact could add negative value to any system or organization, perhaps even with catastrophic effect.
Other alliterative pain points that have been added to the list include variability, veracity, volatility and visualization – all presenting their own unique sets of challenges to big data systems. More may still be added, though the additions will (probably) taper off over time. While it may seem a bit contrived to some, the mnemonic “v” list encompasses serious issues confronting big data that play an important role in the future of deep learning.
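As a concrete example of the validity point above, here is a minimal sketch of an input-validity check, assuming pandas and a hypothetical table of records whose column names and acceptable ranges are invented for illustration:

    # Minimal sketch: flag invalid rows before they reach a machine learning pipeline.
    import pandas as pd

    def find_invalid_rows(df: pd.DataFrame) -> pd.DataFrame:
        """Return the rows that would silently corrupt downstream training."""
        issues = pd.DataFrame(index=df.index)
        issues["missing_age"] = df["age"].isna()
        issues["age_out_of_range"] = ~df["age"].between(0, 120)
        issues["negative_amount"] = df["amount"] < 0
        return df[issues.any(axis=1)]

    records = pd.DataFrame({"age": [34, -5, None], "amount": [120.0, 80.0, -10.0]})
    print(find_invalid_rows(records))   # surfaces the rows that need review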

The Black Box Dilemma

One of the most attractive features of deep learning and artificial intelligence is that both are intended to solve problems that humans can’t. The same phenomenon that is supposed to allow this, however, also presents an interesting dilemma, which comes in the form of what’s known as the “black box.”
The neural network created through the process of deep learning is so vast and so complex that its intricate functions are essentially inscrutable to human observation. Data scientists and engineers may have a thorough understanding of what goes into deep learning systems, but how they arrive at their output decisions more often than not goes completely unexplained.
While this might not be a significant issue for, say, marketers or salespeople (depending on what they're marketing or selling), other industries require a certain amount of process validation and reasoning in order to get any use out of the results. A financial services company, for instance, might use deep learning to establish a highly efficient credit scoring mechanism. But credit scores often must come with some sort of verbal or written explanation, which would be difficult to form if the actual credit scoring equation is totally opaque and unexplainable.
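The gap between an interpretable scorer and an opaque one can be shown in a few lines. Below is a minimal sketch, using scikit-learn on synthetic data (not a real credit scoring system): the linear model exposes one readable weight per input feature, while the multi-layer network reaches its decisions through stacks of interacting weights that offer no single number to cite in an explanation.

    # Minimal sketch: interpretable linear scorer vs. opaque neural network (synthetic data).
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)

    # Linear model: each coefficient maps to "this feature raised or lowered the score."
    linear = LogisticRegression().fit(X, y)
    print(linear.coef_)

    # Deep model: decisions emerge from layers of interacting weights, not per-feature reasons.
    deep = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0).fit(X, y)
    print([w.shape for w in deep.coefs_])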
This problem extends to many other sectors as well, notably within the realms of health and safety. Medicine and transportation could both conceivably benefit in major ways from deep learning, but also face a significant obstacle in the form of the black box. Any output results in those fields, no matter how beneficial, could be wholly discarded on account of their underlying algorithms’ complete obscurity. This brings us to perhaps the most controversial pain point of them all…

Regulation

In the spring of 2016, the European Union passed the General Data Protection Regulation (GDPR), which (among other things) grants citizens a “right to an explanation” for automated decisions generated by machine learning systems that “significantly affect” them. Scheduled to take effect in 2018, the regulation is causing concern among tech companies invested in deep learning, whose impenetrable black box would in many cases obstruct the explanations the GDPR mandates.
The “automated individual decision-making” that the GDPR intends to restrict is an essential feature of deep learning. But concerns over this technology are inevitable (and largely valid) when the potential for discrimination is so high and transparency so low. In the United States, the Food and Drug Administration similarly regulates the testing and marketing of drugs by requiring those processes to remain auditable. This has presented obstacles for the pharmaceutical industry: Massachusetts-based biotechnology company Biogen, for example, has reportedly been prevented from using uninterpretable deep learning methods because of the FDA rule.
The implications of deep learning (moral, practical and beyond) are unprecedented and, frankly, quite profound. A great deal of apprehension surrounds the technology, due in large part to the combination of its disruptive potential and its opaque logic and functionality. If businesses can demonstrate tangible value in deep learning that outweighs any conceivable threats or hazards, they could help lead us through the next critical phase of artificial intelligence.
   
Written by Colyn Emery

Colyn Emery has worked in digital media since 2007. A lifelong Southern California native, he worked in video and broadcasting after earning his BFA in Creative Writing from Chapman University. He went on to earn his MFA in Broadcast Cinema from Art Center College of Design in 2012, and began working as a freelance writer in early 2015. Since then, he has written blogs and articles for several popular content sites and has written copy for numerous brands and startups.



