Terms & Keywords

IHE – Integrating the Healthcare Enterprise.
A standards-setting organization that integrates standards such as DICOM and HL7 to provide solutions for real-world needs.

RSNA – Radiological Society of North America.
An international society of radiologists that promotes advances in the field of radiology and provides software solutions to support electronic health records, including imaging across all modalities.

NIBIB – National Institute of Biomedical Imaging and Bioengineering.
A research arm of the Department of Health and Human Services dedicated to advancing research and technologies that improve healthcare. It plays an important role in advancing imaging across modalities and promoting early detection.

DICOM – Digital Imaging and Communications in Medicine.
An internationally recognized standard for transmitting, storing, retrieving, printing, processing, and displaying medical imaging information. The standard is fully incorporated in image-acquisition devices, PACS, and workstations throughout the healthcare infrastructure.

HL7 – Health Level Seven, a standards-developing body.
Provides standards for the exchange, integration, sharing, and retrieval of electronic health information. ANSI accredited; founded in 1987.

PACS – Picture Archiving and Communication System.
A technology that utilizes DICOM standards for the storage and retrieval of images.

RIS – Radiology Information System.
Integrates radiology workflows from the initial order through billing and the sharing of exams within the radiology department. Utilizes HL7 and DICOM standards to integrate various devices.

EMPI – Enterprise Master Patient Index.
A database of patient records maintained across the healthcare organization. It maintains a consistent, unique patient identifier by merging and linking patient records and provides a rich querying mechanism.

PIX – Patient Identifier Cross Referencing.
An IHE actor that uses services provided by the EMPI. The PIX API, including the Query API, is defined by IHE.

 

Introduction/Overview

Healthcare professionals need to store and access relevant medical documents for any patient under care. Document creation and submission, query and retrieval functions, and secure access must span the healthcare enterprise to provide timely care in the most efficient manner.

Patient exams, including imaging from various modalities and the associated diagnostic reports, ultimately belong to the patient, and only the patient has the right to share this information with other caregiving facilities. Strict HIPAA compliance requires that any patient health record (PHR) always be treated with the utmost security and never leave designated secure areas.

In radiology, the process begins when a physician places an order for an exam. As part of this process, a local patient identifier is generated for the order. For the lifetime of the patient, however, there is a single global, unique patient identifier that spans the healthcare enterprise. Any electronic medical record must then be tracked and accessible via this global patient ID.

An EMPI, or Enterprise Master Patient Index, deployed across the healthcare organization helps maintain a consistent and accurate view of a patient’s identity. This database identifies the patient by a global unique patient identifier and provides access to the patient record, which contains demographics such as date of birth, place of birth, current address, and numerous other fields. Any variances or duplicate records result in the merging or linking of existing records. Tolerance for error is close to zero, which makes maintaining integrity across the healthcare enterprise a huge challenge.
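
To make the merge-and-link behavior concrete, below is a minimal, hypothetical sketch of how demographic records might be linked to a single global identifier. The field names, matching key, and link_to_global_id helper are illustrative only; a production EMPI uses probabilistic matching across many more demographic fields.

import uuid

# Illustrative only: a real EMPI uses probabilistic matching across many more fields
# and human review for ambiguous cases.
GLOBAL_INDEX = {}  # (last_name, date_of_birth) -> global patient identifier


def link_to_global_id(record):
    """Return a global patient ID for the record, creating one if no match exists."""
    key = (record["last_name"].lower(), record["date_of_birth"])
    if key not in GLOBAL_INDEX:
        GLOBAL_INDEX[key] = str(uuid.uuid4())  # new global identifier
    return GLOBAL_INDEX[key]


local_record = {"last_name": "Smith", "date_of_birth": "1970-01-01", "local_id": "MRN-12345"}
print(link_to_global_id(local_record))  # same global ID on every subsequent match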

IHE (Integrating the Healthcare Enterprise) specifications pull together many industry standards, such as DICOM, DICOMweb, and MTOM/XOP, to provide a technical framework (TF) for integrating healthcare systems. The technical framework specifies IHE actors and their transactions which, when combined, define a specific integration profile. IHE profiles cover various aspects of healthcare integration and are specified in the following documents:

  • IHE ITI TF-1
  • IHE ITI TF-2a and IHE ITI TF-2b
  • IHE ITI TF-2x
  • IHE ITI TF-3

The radiology-specific profiles are provided in the specifications listed below. They are based on the general specifications but add protocols and fields that apply to radiology. These specifications also detail the RSNA Image Sharing Network requirements.

  • IHE RAD TF-1
  • IHE RAD TF-2
  • IHE RAD TF-3
  • TF Supplement for Cross-enterprise Document Reliable Interchange of Images – XDR-I
  • TF Supplement for Cross-enterprise Document Sharing for Imaging – XDS-I.b

Numerous subfields of radiology are covered by the IHE and IHE-RAD specifications, but the rest of this post focuses on the integration profiles required to implement an Image Sharing Network.

 

Image Sharing Network

The reference diagram below (Fig. 1) shows the combination of actors and the SOAP transactions used to carry out the workflow.

There is a distinct Client / Server relationship where the ISN provides services that allow the clients to perform the following:

  • Document submissions on behalf of a patient
  • Query and Retrieve transactions for the retrieval of
    • Patient information
    • Image-manifests associated with an exam
    • Diagnostic reports
    • Images associated with the exams

Within the ISN, various actors implement integration profiles in order to

  • Register a patient when a new order is generated
  • Persist received images to a permanent store
  • Register diagnostic reports and Image Manifest to support future query
  • Retrieve documents from a permanent store and serve them out to the external clients

Fig.1 ISN Reference Model

ISN Implementation

Sponsored by an NIBIB grant, the RSNA, the Mayo Clinic, and several university-affiliated research facilities were given a charter to build a patient-centric Image Sharing Network (ISN). The network is designed to automate and improve radiology workflows by facilitating the storage of patient documents in the cloud.

Rather than being handed a copy of the exam on a CD, the patient can now choose to receive the exam electronically. This has significant implications, such as:

  • Patient exams are centrally available

    • Patient documents can easily be accessed by any clinician providing care to the patient
  • The study does not have to be repeated if the patient has traveled to another region and needs care

lifeIMAGE, a startup company based in Newton, MA, provided the ISN solution for the RSNA-based workflows. lifeIMAGE was part of the original research group that pioneered the end-to-end solution, built strictly on the IHE and IHE-RAD specifications.

The engineering team from BigR.io participated in the evolution of the ISN architecture, its implementation, and the subsequent validation and interoperability testing at Connectathons, where vendors are required to perform live tests against other vendors to demonstrate full conformance. BigR.io, in collaboration with lifeIMAGE, demonstrated full conformance and also assisted other teams in meeting their objectives.

BigR.io’s knowledge in navigating a plethora of standards, such as DICOM and HL7, and its ability to innovate have proven to be great assets in providing a sound and robust solution.

The remainder of this post briefly provides the architecture and workflow details, specifically the RSNA Workflow.

ISN Reference Model and Workflow

The ISN Reference Model as shown in Fig.1 comprises three major functions:

  • The Edge Server Function
  • lifeIMAGE Registry as a Service
  • Patient Health Record Account Access

The Edge Server Function

The Edge Server function is an application that integrates with the RIS and PACS and is deployed on-premises in a healthcare facility. Participating healthcare enterprises are designated as an Affinity Domain. The ISN service itself is multi-tenant and capable of supporting multiple Affinity Domains.

The workflow begins when an order is entered in the RIS. At this time, a local patient identifier is generated and registered with the PIX Manager. The PIX Manager associates the local patient identifier with a global patient identifier, creating a new global identifier if one is not found.
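
For illustration, the sketch below shows how an edge application might assemble a PIX lookup as an HL7 v2 QBP^Q23 message. The segment contents and identifiers here are simplified placeholders, not a conformant ITI-9 message, and the build_pix_query helper is purely hypothetical.

# Simplified placeholders only; not a conformant IHE ITI-9 (PIX Query) message.
SEP = "\r"  # HL7 v2 segments are carriage-return delimited


def build_pix_query(local_patient_id, local_domain, sending_app="EDGE", receiving_app="PIXMGR"):
    """Assemble a minimal QBP^Q23 query asking for cross-referenced identifiers."""
    msh = f"MSH|^~\\&|{sending_app}|FACILITY|{receiving_app}|ISN|20240101120000||QBP^Q23^QBP_Q21|MSG0001|P|2.5"
    qpd = f"QPD|IHE PIX Query|QRY0001|{local_patient_id}^^^{local_domain}"
    rcp = "RCP|I"
    return SEP.join([msh, qpd, rcp]) + SEP


print(repr(build_pix_query("MRN-12345", "LOCAL_HOSPITAL")))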

On completion of the exam, the Edge Server constructs an XDR-I-based Provide & Register SOAP request and sends the submission to the lifeIMAGE Registry. The request can include the following:

  • Diagnostic Reports – HL7 Service is used to obtain diagnostic reports
  • Images (different modalities) – DICOM Service is used to obtain all images and MTOM/XOP is used to attach images to the SOAP Request
  • Metadata describing the request and patient information
    • Both local and global patient identifiers are sent with the Request

Note that XDR-I does not require the Image Manifest (KOS), as this is built by the XDR Imaging Document Recipient component in the Clearinghouse.
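
As a rough illustration of the submission step, the sketch below posts a skeletal Provide and Register SOAP envelope over HTTPS. The endpoint URL is a placeholder, the metadata body is elided, and the MTOM/XOP image attachments and WS-Addressing headers that a real XDR-I transaction requires are omitted.

# Hedged sketch: a real XDR-I submission is an MTOM/XOP multipart message carrying
# full ebRIM metadata plus the DICOM and report attachments; only the skeleton of the
# SOAP envelope and the HTTP POST are shown here.
import requests

REGISTRY_URL = "https://clearinghouse.example.com/xdr/provide-and-register"  # placeholder

soap_envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
               xmlns:xds="urn:ihe:iti:xds-b:2007">
  <soap:Body>
    <xds:ProvideAndRegisterDocumentSetRequest>
      <!-- ebRIM SubmitObjectsRequest metadata (local and global patient IDs, document entries) -->
      <!-- images and diagnostic reports are attached via MTOM/XOP in practice -->
    </xds:ProvideAndRegisterDocumentSetRequest>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    REGISTRY_URL,
    data=soap_envelope.encode("utf-8"),
    headers={"Content-Type": "application/soap+xml; charset=utf-8"},
)
print(response.status_code)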

The lifeIMAGE Registry or Clearinghouse

The Clearinghouse is a hosted service. Any number of Healthcare Enterprises may subscribe to this service to conduct their desired workflows.

The service is made up of various IHE-specified actors, and these actors implement the IHE and IHE-RAD integration profiles. The actors and their transactions are as follows:

  1. XDR.Imaging Document Recipient (IDR)

    • The recipient can receive incoming SOAP Requests over any affinity-domain.
    • On receiving a valid request, the recipient retrieves all attachments and persists the data as necessary
  2. XDS Imaging Document Source (IDS)
    • IDR & IDS are grouped actors, and as such, they collaborate in the processing of Provide and Register SOAP Requests
    • The IDS digests the received request to produce the Image Manifest, metadata that describes the images and diagnostic reports
    • The Image Manifest and the diagnostic reports are then registered with the Repository Actor
      • To this effect, the IDS acts as a client. It generates a Register Imaging Document Set SOAP message and sends it to the XDS Repository Actor
  3. XDS Repository
    • The XDS Repository is the keeper of Image Manifests (KOS) and diagnostic reports. It provides an ITI-43 Retrieve Document Set service to external clients
    • The Repository acts as a client and sends an ITI-42 Register Document Set-b to the XDS Registry
  4. XDS Registry
    • The XDS Registry maintains a database of all registered exams. The metadata received in the ITI-42 is persisted and made available for future queries
    • The Registry provides an ITI-18 Registry Stored Query service to the external clients
  5. PIX Manager
    • The PIX Manager mainly tracks all patient identifiers, and its API is used by the Edge Server and the PHR access points
    • The PIX Manager maintains a single global unique patient identifier for a given patient and associates all local patient identifiers with it, as required by the ISN

The Patient Health Record (PHR) Account Access

The PHR is an edge application that allows end users, such as clinicians, to access documents such as diagnostic reports and images submitted for a given patient.

In the RSNA workflow, the patient is given an access key to retrieve their exam electronically. To perform these actions, the following actors are implemented in the PHR:

  1. Document Consumer function

    • The Document Consumer first checks with the PIX Manager to obtain the global patient identifier associated with the patient
    • This identifier is then used to perform a Registry Stored Query, an ITI-18 SOAP message sent to the XDS Registry service (a hedged sketch of such a query follows this list)
    • The Registry returns metadata in the response. This metadata is then used by the Imaging Document Consumer for image and report retrieval
  2. Imaging Document Consumer function
    • The Imaging Document Consumer uses the RAD-69 Retrieve Imaging Document Set SOAP message to retrieve the desired set of images from the XDS Imaging Document Source service.
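
Below is a hedged sketch of the body of a Registry Stored Query (FindDocuments) keyed on the global patient identifier. The values are illustrative, and a conformant ITI-18 request wraps this payload in a SOAP envelope with the appropriate WS-Addressing headers.

# Hedged sketch of an ITI-18 Registry Stored Query (FindDocuments) body; values are
# illustrative and the SOAP envelope/headers are omitted.
FIND_DOCUMENTS = "urn:uuid:14d4debf-8f97-4251-9a74-a90016b0af0d"  # FindDocuments stored query ID


def build_find_documents_query(global_patient_id):
    """Return an ebXML AdhocQueryRequest asking for approved documents for one patient."""
    return f"""<query:AdhocQueryRequest
    xmlns:query="urn:oasis:names:tc:ebxml-regrep:xsd:query:3.0"
    xmlns:rim="urn:oasis:names:tc:ebxml-regrep:xsd:rim:3.0">
  <query:ResponseOption returnType="LeafClass"/>
  <rim:AdhocQuery id="{FIND_DOCUMENTS}">
    <rim:Slot name="$XDSDocumentEntryPatientId">
      <rim:ValueList><rim:Value>'{global_patient_id}'</rim:Value></rim:ValueList>
    </rim:Slot>
    <rim:Slot name="$XDSDocumentEntryStatus">
      <rim:ValueList>
        <rim:Value>('urn:oasis:names:tc:ebxml-regrep:StatusType:Approved')</rim:Value>
      </rim:ValueList>
    </rim:Slot>
  </rim:AdhocQuery>
</query:AdhocQueryRequest>"""


print(build_find_documents_query("GLOBAL-98765^^^&1.2.3.4&ISO"))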

In Conclusion

Integrating the Healthcare Enterprise is actively working to bring much-needed modernization and efficiencies to the healthcare industry. The patient-centric workflow for sharing diagnostic exams is just one example of its integration profiles. A fair amount of innovation lies ahead of us to make healthcare enterprises’ IT infrastructure secure, robust, and efficient.

About the author

Sushil is a Principal Architect at BigR.io. He leads a team of engineers from lifeIMAGE and BigR.io to deliver a robust, conformant solution for ISN.

NLP has evolved into an important way to track and categorize viewership in the age of cookie-less ad targeting. While users resist being identified by a single user ID, they are much less sensitive to, and even welcome, the chance for advertisers to personalize media content based on discovered preferences. This personalization comes from improvements made upon the original LDA algorithm that incorporate word2vec concepts.

The classic LDA algorithm developed at Columbia University raised industry-wide interest in computerized understanding of documents. It incidentally also launched variational inference as a major research direction in Bayesian modeling. The ability of LDA to process massive amounts of documents, extract their main themes from a manageable set of topics, and compute with relatively high efficiency (compared to more traditional Monte Carlo methods, which sometimes run for months) made LDA the de facto standard in document classification.
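
As a minimal illustration of the topic-extraction step, the sketch below fits a small gensim LDA model on a few pre-tokenized toy documents; the corpus, topic count, and tokens are all placeholders.

# Minimal gensim LDA sketch on toy, pre-tokenized documents.
from gensim import corpora
from gensim.models import LdaModel

texts = [
    ["radiology", "image", "exam", "report"],
    ["image", "scan", "modality", "report"],
    ["ad", "campaign", "viewer", "content"],
    ["viewer", "content", "personalization", "campaign"],
]

dictionary = corpora.Dictionary(texts)                 # word <-> id mapping
corpus = [dictionary.doc2bow(text) for text in texts]  # bag-of-words vectors

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=10, random_state=0)
for topic_id, words in lda.print_topics():
    print(topic_id, words)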

However, the original LDA approach left the door open on certain desirable properties. It is, in the end, fundamentally just a word-counting technique. Consider these two statements:

“His next idea will be the breakthrough the industry has been waiting for.”

“He is praying that his next idea will be the breakthrough the industry has been waiting for.”

After removal of common stop words, these two semantically opposite sentences have almost identical word count features. It would be unreasonable to expect a classifier to tell them apart if that’s all you provide it as inputs.
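
The quick scikit-learn check below makes the point concrete: after English stop words are removed, the two sentences above produce bag-of-words features that differ only by a single word.

# Bag-of-words features for the two sentences above, with English stop words removed.
from sklearn.feature_extraction.text import CountVectorizer

sentences = [
    "His next idea will be the breakthrough the industry has been waiting for.",
    "He is praying that his next idea will be the breakthrough the industry has been waiting for.",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(sentences).toarray()
print(vectorizer.get_feature_names_out())
print(counts)  # the two rows differ only in the column for "praying"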

The latest advances in the field improve upon the original algorithm on several fronts. Many of them incorporate the word2vec concept, in which an embedding vector represents each word in a way that reflects its semantic meaning, e.g., king – man + woman = queen.
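
The classic analogy can be reproduced with pretrained vectors via gensim's downloader; the model name below is an assumption about what is available in gensim-data, and the first call downloads a sizable file.

# Word-vector analogy: king - man + woman ~= queen (pretrained GloVe vectors).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # assumed gensim-data model name; sizable download
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
# "queen" is typically the top result.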

Autoencoder variational inference (AVITM) speeds up inference on new documents that are not part of the training set. Its variant prodLDA uses a product of experts to achieve higher topic coherence. Topic-based classification can potentially perform better as a result.

Doc2vec – generates semantically meaningful vectors to represent a paragraph or entire document in a word order preserving manner.

LDA2vec – derives embedded vectors for the entire document in the same semantic space as the word vectors.

Both Doc2vec and LDA2vec provide document vectors ideal for classification applications.
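
As one hedged example, the gensim Doc2vec sketch below trains on two tiny tagged documents and then infers a vector for an unseen paragraph; in practice the corpus, vector size, and epoch count would be far larger.

# Minimal gensim Doc2vec sketch: train on tagged toy documents, then infer a vector
# for an unseen paragraph that could feed any downstream classifier.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = [
    TaggedDocument(words=["radiology", "exam", "image", "report"], tags=["doc0"]),
    TaggedDocument(words=["ad", "campaign", "viewer", "personalization"], tags=["doc1"]),
]

model = Doc2Vec(documents=docs, vector_size=50, min_count=1, epochs=40)
vector = model.infer_vector(["new", "radiology", "report"])
print(vector[:5])  # dense document vector, ready for a classifier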

All these new techniques achieve scalability using either GPUs or parallel computing. Although research results demonstrate a significant improvement in topic coherence, many investigators now choose to deemphasize topic distribution as the means of document interpretation. Instead, the unique numerical representation of the individual documents has become the primary concern when it comes to classification accuracy. The derived topics are often treated as simply intermediate factors, not unlike the filtered partial image features in a convolutional neural network.

With all this talk of the bright future of Artificial Intelligence (AI), it’s no surprise that almost every industry is looking into how it will reap the benefits of forthcoming (dare I say already existing?) AI technologies. For some, AI will merely enhance the technologies already being used. For others, AI is becoming a crucial component to keeping the industry alive. Healthcare is one such industry.

The Problem: Diminishing Labor Force

Part of the need for AI-based healthcare stems from the concern that one-third of nurses are baby boomers who will retire by 2030, taking their knowledge with them. This looming shortage of healthcare workers creates an imminent need for replacements and, while enrollment numbers in nursing schools remain stable, the demand for experienced workers will continue to increase. This need for additional clinical support is one area where AI comes into play. In fact, these emerging technologies will serve as a force multiplier not only for experienced nurses, but for doctors and clinical support staff as well.

Healthcare-AI Automation Applications to the Rescue

One of the most notable solutions for this shortage will be automating the process of determining whether or not a patient actually needs to visit a doctor in person. Doctors’ offices are currently inundated with appointments and patients whose lower-level questions and concerns could be addressed via mobile applications, without a face-to-face consultation. Usually in the form of chatbots, these AI-powered applications can provide basic healthcare support by “bringing the doctor to the patient” and alleviating the need for the patient to leave the comfort of their home, let alone schedule an appointment to go in-office and visit a doctor (saving time and resources for all parties involved).

Should a patient need to see a doctor, these applications also contain schedulers capable of determining appointment type, length, urgency, and available dates/times, foregoing the need for constant human-based clinical support and interaction. With these AI schedulers also come AI-based physician’s assistants that provide additional in-office support, like scheduling follow-up appointments, taking comprehensive notes for doctors, ordering specific prescriptions and lab testing, providing drug-interaction information for current prescriptions, etc. And this is just one high-level AI-based healthcare solution (albeit with many components).

With these advancements, Healthcare stands to gain significant ground with the help of domain-specific AI capabilities that were historically powered by humans. As a result, the next generation of healthcare has already begun, and it’s being revolutionized by AI.

 

Growing Market Demands

Steady market growth and new approaches to managing data and effectively leveraging insights (Machine Learning, Data Lakes, Enterprise Data Hubs), in conjunction with the uncertainty of the government’s approach to H-1Bs, H-4s, and OPT, have created a perfect storm of demand for highly skilled, US-based data engineers who are well-versed in Big Data and Machine Learning technologies. This rapid growth, the speed of change in commercially proven technology, and the high demand for skilled techs have outstripped the available pool of talent. As a result, techs have developed a tendency to creatively embellish their abilities in order to open the door to learning the skills they want to have instead of accurately representing the skills that they have – aka putting the cart before the horse.

To stay relevant in the rapidly-evolving technology sphere, engineers always want (and need) to learn the technologies that the market demands, and they need a chance to get themselves trained to meet these demands. For many, the preferred method is on-the-job training.  Though employers are often open to on-the-job training, they need to hire the experts who can provide the framework and knowledge base for those who follow. However, as new technologies emerge, building a knowledge-base team presents a catch-22 for the employer as they need the first wave of experts to begin the process, but they do not have the in-house knowledge to vet them effectively.  Consequently, employers reach out to new or third-party talent to help build this base.

Effective Screening

The problem then becomes effectively screening talent. As noted earlier, techs have started embellishing resumes and applications with buzzwords for skills they want to have instead of the skills they have. This embellishment becomes a challenge for the first line of talent screeners, as they rarely have the knowledge base to effectively test the depth of an individual’s skills behind these buzzwords. There is no doubt that many accomplished engineers can bring themselves up to speed in a relatively quick timeframe, and they bank on the idea that their learning curve can be completed before anyone notices that they do not have the expertise they purported to have. Unfortunately, many engineers do not have a realistic ability to self-assess how long becoming skilled in Big Data and Machine Learning will take, and they end up spending valuable time and resources failing to close this gap.

For example, take the recent boom in the demand for “Sparkstars”. Sparkstars are engineers highly skilled in using both Spark and Scala. On a scale of 1 (novice) to 5 (expert), Sparkstars’ Spark/Scala skills easily fall on 5. Most Sparkstars start out as dime-a-dozen Java engineers since most Java engineers can acquire Scala with ease. As such, Java engineers wanting to become Sparkstars will add Spark/Scala qualifications to their resumes even though they haven’t acquired those skillsets yet and hope they can acquire them quickly on the job.

Skills Solutions

So, how can talent recruiters effectively go about testing whether techs actually have these skills or if they only seek to gain these skills?

Our solution to this challenge is to leverage our proven outside consultants to help foster the proper framework for your data engineering team, vet full-time hires through our current skilled team members, and provide immediate talent that can hit the ground running, allowing the power of Big Data and Machine Learning solutions to work for you.

 

The AI Revolution HAS begun!

Some curmudgeons are arguing Artificial Intelligence (AI) is a bastardized term and the hype is distracting. People are arguing that we don’t have the freethinking, sci-fiesque AI or, as some people refer to it, Artificial General Intelligence (AGI). I say so what. Those of us in the AI business aren’t delusional and know that AI, aka Machine Learning (ML), is a fine weapon to bring into a software application to make it more sophisticated. It took me a while to understand that is all we are doing and that still is super important and valuable to organizations. I might admit I was initially mystified by “AI”, but at the end of the day (today anyway), it’s a bunch of math, code, data (for training – more on that later), and algorithms that either classify (organize data so it’s more valuable for making predictions) or triage (make decisions on which branch to send a task – automation or those pesky humans). Don’t make light of the ability to classify and triage at this level of complexity. We are seeing powerful applications of ML that are making dramatic impacts on numerous parts of organizations. We have published an ebook that goes into more detail https://bigr.io/mlfieldguide/ and an ML Workflow https://bigr.io/mlworkflow/ to shed more light on embracing ML at a high level.

The math and coding needed to embrace AI are straightforward (for those skilled in the art). Access to relatively inexpensive compute power is certainly plentiful and also not a roadblock. The hard parts are 1) getting, grooming, and labeling data for the algorithms to use to learn how to accomplish new tasks, and 2) building accurate algorithms that use state-of-the-art techniques and current, reliable libraries. Some of the same curmudgeons alluded to above are saying things like “machines can’t learn to be human-like by pattern matching from strings of labeled data”. I happen to agree, but again so what? We will see a natural progression toward AI that is more human-like. In the meantime, there is a lot we can do with what we currently have. High-quality, groomed, labeled strings of data pipelined in for training are a fine way to teach models (ML models, that is) to learn new tasks – rule-driven or even unsupervised. These are admittedly narrowly focused tasks, but still tasks the machines can perform better and/or significantly more cheaply than humans.

AI had some false starts over the years, but I can attest to the fact there are real budgets for and real initiatives surrounding AI. And not just at the big boys anymore. Amazon, Apple, Google, Netflix, Facebook, Microsoft, IBM, etc., have made great use of machine learning over the past decade or so. But recently, with major contributions to the open source, AI has been democratized. It is now possible for a boutique consulting firm like BigR.io to help companies employ AI as an extension to their existing data management, computer science, and statistical/analytics practices. Staged adoption is the key. Come in with eyes wide open and know that there are nuances that need to be “tuned”, but you will see an impact and it will likely be orders of magnitude better than your current methods. And don’t expect Gideon or Skynet.

Ever since Geoffrey Hinton, Ilya Sutskever, and Alex Krizhevsky from the University of Toronto won the 2012 ImageNet competition with a deep convolutional neural network that beat the 2nd place team by ~41%, the “industry” has been paying attention. Academia has continuously supported furthering the AI cause over the years, but most companies and governments were leery of it as miserable failures had been the reputation… Then, in 2015, Microsoft won ImageNet with a model that surpassed human-level performance. With these milestones and benchmarks, adoption of machine learning has exploded over the past few years.

So, while we may not have AI with human-level intelligence (another way of saying it is that the machines can’t reason on their own), we do have AI that can imitate (i.e. replace) well-defined, domain-specific capabilities that were historically human-powered tasks. AI is proliferating across marketing, advertising, sales, network and cyber security, business processes, and equipment maintenance, and will continue to work its way into new areas to augment or replace humans. Insurance companies are getting lift in the claims adjudication process, chatbots are supporting customer service, dialogue agents are selling products and services, sentiment models are driving portfolio management, radiological images are being assessed, cars are driving themselves, etc. These real-world examples, coupled with the maturity of organizational readiness we are seeing when it comes to data management and engineering, are, in my opinion, evidence that the AI revolution has begun.

Sometimes I get to thinking that Alexa isn’t really my friend. I mean sure, she’s always polite enough (well, usually, but it’s normal for friends to fight, right?). But she sure seems chummy with that pickle-head down the hall too. I just don’t see how she can connect with us both — we’re totally different!

So that’s the state of the art of conversational AI: a common shared agent that represents an organization. A spokesman. I guess she’s doing her job, but she’s not really representing me or M. Pickle, and she can’t connect with either of us as well as she might if she didn’t have to cater to both of us at the same time. I’m exaggerating a little bit – there are some personalization techniques (*cough* crude hacks *cough*) in place to help provide a custom experience:

  • There is a marketplace of skills. Recently, I can even ask her to install one for me.
  • I have a user profile. She knows my name and zip code.
  • Through her marketplace, she can access my account and run my purchase through a recommendation engine (the better to sell you with, my dear!)
  • I changed her name to “Echo” because who has time for a third syllable? (If only I were hamming this up for the post; sadly, a true story)
  • And if I may digress to my other good friend Siri, she speaks British to me now because duh.

It’s a start but, if we’re honest, none of these change the agent’s personality or capabilities to fit with all of my quirks, moods, and ever-changing context and situation. Ok, then. What’s on my wishlist?

  • I want my own agent with its own understanding of me, able to communicate and serve as an extension of myself.
  • I want it to learn everything about how I speak. That I occasionally slip into a Western accent and say “ruf” instead of “roof”. That I throw around a lot of software dev jargon; Python is neither a trip to the zoo nor dinner (well, once, and it wasn’t bad. A little chewy.) That Pickle Head means my colleague S… nevermind. You get the idea.
  • I want my agent to extract necessary information from me in a way that fits my mood and situation. Am I running late for a life-changing meeting on a busy street uphill in a snowstorm? Maybe I’m just goofing around at home on a Saturday.
  • I want my agent to learn from me. It doesn’t have to know how to do everything on this list out of the box – that would be pretty creepy – but as it gets to know me it should be able to pick up on my cues, not to mention direct instructions.

Great, sign me up! So how do I get one? The key is to embrace training (as opposed to coding, crafting, and other manual activities). As long as there is a human in the loop, it is simply impossible to scale an agent platform to this level of personalization. There would be a separate and ongoing development project for every single end user… great job security for developers, but it would have to sell an awful lot of stuff.

To embrace training, we need to dissect what goes into training. Let’s over-simplify the “brain” of a conversational AI for a moment: we have NLU (natural language understanding), DM (dialogue management), and NLG (natural language generation). Want an automatically-produced agent? You have to automate all three of these components (a toy sketch of the pipeline follows the list below).

  • NLU – As of this writing, this is the most advanced component of the three. Today’s products often do incorporate at least some training automation, and that’s been a primary enabler that leads to the assistants that we have now. Improvements will need to include individualized NLU models that continually learn from each user, and the addition of (custom, rapid) language models that can expand upon the normal and ubiquitous day-to-day vocabulary to include trade-specific, hobby-specific, or even made-up terms. Yes, I want Alexa to speak my daughter’s imaginary language with her.
  • DM – Sorry developers, if we make plugin skills ala Mobile Apps 2.0 then we aren’t going to get anywhere. Dialogues are just too complex, and rules and logic are just too brittle. This cannot be a programming exercise. Agents must learn to establish goals and reason about using conversation to achieve those goals in an automated fashion.
  • NLG – Sorry marketing folks, there isn’t brilliant copy for you to write. The agent needs the flexibility to communicate to the user in the most effective way, and it can’t do that if it’s shackled by canned phrases that “reflect the brand”.
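
To make that concrete, here is a toy sketch of the NLU → DM → NLG pipeline described above. Every function is a stub standing in for a trained, per-user model; none of this reflects any vendor's actual architecture.

# Toy NLU -> DM -> NLG pipeline; each stage is a stub standing in for a trained model.
def nlu(utterance):
    """Stub intent extraction; a real NLU model would be trained and personalized per user."""
    intent = "set_reminder" if "remind" in utterance.lower() else "chitchat"
    return {"intent": intent, "text": utterance}


def dialogue_manager(parsed, context):
    """Stub policy: choose the next action from the intent and conversation context."""
    if parsed["intent"] == "set_reminder":
        return {"action": "confirm_reminder", "context": context}
    return {"action": "smalltalk", "context": context}


def nlg(decision):
    """Stub generation; a learned NLG model would adapt phrasing to the user and situation."""
    return "Okay, reminder set." if decision["action"] == "confirm_reminder" else "Tell me more!"


print(nlg(dialogue_manager(nlu("Remind me to post this blog"), context={})))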

In my experience, most current offerings are focusing on the NLU component – and that’s awesome! But to realize the potential of MicroAgents (yeah, that’s right. MicroAgents. You heard it here first) we need to automate the entire agent, which is easier said than done. That’s not to say it won’t happen anytime soon – in fact, it might happen sooner than you think.

Echo, I’m done writing. Post this sucker.

Doh!


 

In the 2011 Jeopardy! face-off between IBM’s Watson and Jeopardy! champions Ken Jennings and Brad Rutter, Jennings acknowledged his brutal takedown by Watson during the final round, writing “I for one welcome our new computer overlords.” This display of computer “intelligence” sparked mass amounts of conversation amongst myriad groups of people, many of whom became concerned at what they perceived as Watson’s ability to think like a human. But, as BigR.io’s Director of Business Development Andy Horvitz points out in his blog “Watson’s Reckoning,” even the Artificial Intelligence technology with which Watson was produced is now obsolete.

The thing is, while Watson was once considered to be the cutting-edge technology of Artificial Intelligence, Artificial Intelligence itself isn’t even cutting-edge anymore. Now, before you start lecturing me about how AI is cutting-edge, let me explain.

Defining Artificial Intelligence

You see, as Bernard Marr points out, Artificial Intelligence is the overarching term for machines having the ability to carry out human tasks. In this regard, modern AI as we know it has already been around for decades – since the 1950s at least (especially thanks to the influence of Alan Turing). Moreover, some form of the concept of artificial intelligence dates back to ancient Greece, when philosophers started describing human thought processes as a symbolic system. It’s not a new concept, and it’s a goal that scientists have been working towards for as long as there have been machines.

The problem is that the term “artificial intelligence” has become a colloquial term applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving.” But the thing is, AI isn’t necessarily synonymous with “human thought capable machines.” Any machine that can complete a task in a similar way that a human might can be considered AI. And in that regard, AI really isn’t cutting-edge.

What is cutting-edge are the modern approaches to Machine Learning, which are at the cusp of “human-like” AI technology (like Deep Learning, but that’s for another blog).

Though many people (scientists and common folk alike) use the terms AI and Machine Learning interchangeably, Machine Learning actually has the narrower focus of using the core ideas of AI to help solve real-world problems. For example, while Watson can perform the seemingly human task of critically processing and answering questions (AI), it lacks the ability to use these answers in a way that’s pragmatic to solve real-world problems, like synthesizing queried information to find a cure for cancer (Machine Learning).

Additionally, as I’m sure you already know, Machine Learning is based upon the premise that these machines train themselves with data rather than by being programmed, which is not necessarily a requirement of Artificial Intelligence overall.

https://xkcd.com/1838/

Why Know the Difference?

So why is it important to know the distinction between Artificial Intelligence and Machine Learning? Well, in many ways, it’s not as important now as it might be in the future. Since the two terms are used so interchangeably and Machine Learning is seen as the technology driving AI, hardly anyone would correct you were you to use them incorrectly. But, as technology progresses ever faster, it’s good practice to know some distinction between these terms for your personal and professional gain.

Artificial Intelligence, while a hot topic, is not yet widespread – but it might be someday. For now, when you want to inquire about AI for your business (or personal use), you probably mean Machine Learning instead. By the way, did you know we can help you with that? Find out more here.

We’re seeing and doing all sorts of interesting work in the Image domain. Recent blog posts, white papers, and roundtables capture some of this work, from image segmentation and classification to video highlights. But one Image-domain area of broad interest that we’ve only scratched the surface of so far is Video-based Anomaly Detection. It’s a challenging data science problem, in part due to the velocity of data streams and missing data, but it has wide-ranging solution applicability.

In-store monitoring of customer movements and behavior.

Motion sensing, the antecedent to Video-based Anomaly Detection, isn’t new and there are a multitude of commercial solutions in that area. Anomaly Detection is something different and it opens the door to new, more advanced applications and more robust deployments. Part of the distinction between the two stems from “sensing” what’s usual behavior and what’s different.

Anomaly Detection

Walkers in the park look “normal”. The bicyclist is the anomaly. 

Anomaly detection requires the ability to understand a motion “baseline” and to trigger notifications based on deviations from that baseline. Having this ability offers the opportunity to deploy AI-monitored cameras in many more real-world situations across a wide range of security use cases, smart city monitoring, and more, wherein movements and behaviors can be tracked and measured with higher accuracy and at a much larger scale than ever before.
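
A minimal sketch of that "baseline plus deviation" idea, using synthetic frames and a rolling z-score on frame-to-frame motion (the window size and threshold are arbitrary choices for illustration):

# Baseline-and-deviation sketch on synthetic "video": flag frames whose motion score
# deviates sharply from a rolling baseline.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.random((200, 64, 64))   # stand-in for 200 grayscale frames
frames[120] += 3.0                   # inject an anomalous frame

# Mean absolute frame-to-frame difference as a crude motion score.
motion = np.array([np.abs(frames[i] - frames[i - 1]).mean() for i in range(1, len(frames))])

window = 30                          # rolling baseline length
for t in range(window, len(motion)):
    baseline = motion[t - window:t]
    z = (motion[t] - baseline.mean()) / (baseline.std() + 1e-8)
    if abs(z) > 4.0:                 # arbitrary threshold
        print(f"Possible anomaly around frame {t + 1} (z-score {z:.1f})")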

With 500 million video cameras in the world tracking these movements, a new approach is required to deal with this mountain of data. For this reason, Deep Learning and advances in edge computing are enabling a paradigm shift from video recording and human watchers toward AI monitoring. Many systems will have humans “in the loop,” with people being alerted to anomalies. But others won’t. For example, in the near future, smart cities will automatically respond to heavy traffic conditions with adjustments to the timing of stoplights, and they’ll do so routinely without human intervention.

Human in the Loop

Human in the loop.

As on many AI fronts, this is an exciting time and the opportunities are numerous. Stay tuned for more from BigR.io, and let’s talk about your ideas on Video-based Anomaly Detection or AI more broadly.

A few months back, Treasury Secretary Steve Mnuchin said that AI wasn’t on his radar as a concern for taking over the American labor force and went on to say that such a concern might be warranted in “50 to 100 more years.” If you’re reading this, odds are you also think this is a naive, ill-informed view.

An array of experts, including Mnuchin’s former employer, Goldman Sachs, disagree with this viewpoint. As PwC states, 38% of US jobs will be gone by 2030. On the surface, that’s terrifying, and not terribly far into the future. It’s also a reasonable, thoughtful view, and a future reality for which we should prepare.

Naysayers maintain that the same was said of the industrial and technological revolutions and pessimistic views of the future labor market were proved wrong. This is true. Those predicting doom in those times were dead wrong. In both cases, technological advances drove massive economic growth and created huge numbers of new jobs.

Is this time different?

It is. Markedly so.

The industrial revolution delegated our labor to machines. Technology has tackled the mundane and repetitive, connected our world, and, more, has substantially enhanced individual productivity. These innovations replaced our muscle and boosted the output of our minds. They didn’t perform human-level functions. The coming wave of AI will.

Truckers and taxi and delivery drivers are the obvious low-hanging fruit, ripe for AI replacement. But the job losses will be much wider, cutting deeply into retail and customer service, and impacting professional services like accounting, legal, and much more. AI won’t just take jobs. Its impacts on all industries will create new opportunities for software engineers and data scientists. The rate of job creation, however, will lag far behind that of job erosion.

But it’s not all bad! AI is a massive economic catalyst. The economy will grow and goods will be affordable. We’re going to have to adjust to a fundamental disconnect between labor and economic output. This won’t be easy. The equitable distribution of the fruits of this paradigm shift will dominate the social and political conversation of the next 5-15 years. And if I’m right more than wrong in this post, basic income will happen (if only after much kicking and screaming by many). We’ll be able to afford it. Not just that — most will enjoy a better standard of living than today while also working less.

I might be wrong. The experts might be wrong. You might think I’m crazy (let’s discuss in the comments). But independent of specific outcomes, I hope we can agree that we’re on the precipice of another technological revolution and these are exciting times!

When I was in graduate school, I designed a construction site of the future. It was in collaboration with Texas Instruments in the late 90s. The big innovation, at the time, was RFID (radio-frequency identification). Not that RFID was new. In fact, it has been around since World War II where it was used to identify allied planes. After the war, it made its way into industry through anti-theft applications. In the 80s, a group of scientists from Los Alamos National Laboratory formed a company using RFID for toll payment systems (still in use today). A separate group of scientists there also created a system for tracking medication management in livestock. From here it made its way into multiple other applications and began to proliferate.

RFID got a boost in 1999 when two MIT professors, David Brock and Sanjay Sarma, reversed the trend of adding more memory and more functionality to the tags and stripped them down to a low-cost, very simple microchip. The data gleaned from the chip was stored in a database and was accessible via the web. This was right at the time that the wireless web emerged (good old CDPD) as well, which really bolstered widespread adoption. This also precipitated funding from large companies, like Procter & Gamble and Gillette (this was before P&G acquired Gillette), to institute the Auto-ID Center at MIT, which furthered the creation of standards and cemented RFID as an invaluable weapon for companies, especially those with complex supply chains.

OK, as you can tell, RFID has a special place in my heart. I even patented the idea of marrying RFID with images, but that is another story. Anyway, up to this point you’ve probably decided this is a post about RFID, but it’s not. It’s a post about RFID to IoT (Internet of Things). The term Internet of Things (IoT) was first coined by British entrepreneur Kevin Ashton in 1999 while working at Auto-ID Labs, specifically referring to a global network of objects connected by RFID. But RFID is just one type of sensor and there are numerous sensors out there. I like this definition from Wikipedia:

In the broadest definition, a sensor is an electronic component, module, or subsystem whose purpose is to detect events or changes in its environment and send the information to other electronics, frequently a computer processor. A sensor is always used with other electronics, whether as simple as a light or as complex as a computer.

Sensors have been around for quite some time in various forms. The first thermostat came to market in 1883, and many consider this the first modern, manmade sensor. Infrared sensors have been around since the late 1940s, even though they’ve really only recently entered the popular nomenclature. Motion detectors have been in use for a number of years as well. Originally invented by Heinrich Hertz in the late 1800s, they were advanced in World War II in the form of radar technology. There are numerous other sensors: biotech, chemical, natural (e.g. heat and pressure), sonar, infrared, microwave, and silicon sensors to name a few.

According to Gartner, there are currently 8 Billion IoT Units worldwide and there will be 20 Billion by 2020. Suffice to say there are numerous sources of data to track “things” within an organization and throughout supply chains. There are also numerous complexities to managing all of these sensors, the data they generate, and the actionable intelligence that is extracted and needs to be acted on. Some major obstacles are networks with time delays, switching topologies, density of units in a bounded region, and metadata management (especially across trading partners and customers). These are all challenges we at BigR.io have helped customers work through and resolve. A great example is our Predictive Maintenance offering.

Let’s get back to RFID to IoT. There is a tight coupling because the IP address of the unit needs to be supplemented with other information about the thing (for example, condition, context, location, security, etc.). RFID and other sensors working in unison can provide this supplemental information. This marriage enables advanced analytics, including the ability to make predictions. Large sensor networks must be properly architected to enable effective sensor fusion. Machine Learning helps take IoT to the next level of sophistication for predictions and automated fixes, and can help figure out when and where every “thing” fits in the ecosystem in which it plays. A proper IoT agent should monitor the health of each system individually and in relation to the other parts. Consensus filters help with convergence analysis, noise-propagation reduction, and the ability to track fast signals.
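
As a hedged illustration of the consensus idea, the sketch below runs a simple average-consensus update over a four-node ring of temperature sensors: each node repeatedly nudges its estimate toward its neighbors' values, so noisy local readings converge to the network-wide average. The topology, readings, and step size are all made up for the example.

# Simple average-consensus filter over a toy 4-node ring of sensors.
import numpy as np

# Adjacency matrix of a 4-node ring (1 = neighboring sensors exchange values).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

x = np.array([21.0, 23.5, 19.8, 22.1])   # noisy local temperature readings
eps = 0.2                                # step size, kept below 1 / (max node degree)

for _ in range(50):
    # Each node moves toward the average of its neighbors' current estimates.
    x = x + eps * (A @ x - A.sum(axis=1) * x)

print(x)  # all four estimates approach the initial mean (~21.6)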

There are other factors that play into why IoT is so hot right now: the whole Big Data phenomenon has lent itself to the growth, endless compute power has served as a foundation by which advanced applications using IoT can run, and the Machine Learning libraries have been democratized by companies like Google, Facebook, and Microsoft. In general, Machine Learning thrives when mounds of data are available. However, storing all data is cost prohibitive and there is so much data being generated that most companies opt to only store bits of critical data. Some companies only store the data to freeze it from failures. You may not want to store all data, but you don’t want to lose “metadata,” or the key information that the data is trying to tell you, whether from the sensor itself or indirectly through neighboring sensors. I had a stint where we supported Federal and Defense-related sensor fusion initiatives and I picked up a handy classification of data:

  • Data
  • Information
  • Knowledge
  • Intelligence

The flow moves the metadata being generated down the line into information → knowledge → intelligence that can be acted upon.

There also exists the ABCs of Data Context:

[A]pplication Context: Describes how raw bits are interpreted for use.

[B]ehavioral Context: Information about how data was created and used by real people or systems.

[C]hange Over Time: The version history of the other two forms of data context.

Data context plays a major role in harnessing the power of an IoT network. As we progress to smarter networks, more sophisticated sensors, and artificial intelligence that manages our “things,” the architecture of your infrastructure (enterprise data hub), the cultivation and management of your data flows, and the analytics automation that rides on top of everything become critical for day-to-day operations. The good news is that if this is all done properly, you will reap the rewards of thing harmony (coined here first folks).

Please visit our Deep Learning Neural Networks for IoT white paper for a more technical slant.