Tag Archive for: AI

Urinary tract infections, commonly known as UTIs, usually do not pose serious health risks when they are detected and treated early. However, when allowed to advance undetected, a late diagnosis or misdiagnosis of a UTI can result in a number of serious adverse outcomes.

A group of researchers from the University of Edinburgh and Heriot-Watt University is developing artificial intelligence and “socially assistive robots” to detect UTIs earlier and ensure better patient outcomes.

UTIs affect 150 million people worldwide annually, making them one of the most common types of infection. When diagnosed early, a UTI can be treated with antibiotics. If left untreated, UTIs can lead to sepsis, kidney damage, and even loss of life.

Diagnosis, however, can be difficult: lab analysis, a process that can take up to 48 hours, provides the only definitive result. Early signs of a UTI can also be challenging to recognize because symptoms vary according to age and existing health conditions. There is no single sign of infection but rather a collection of symptoms, which may include pain, fever, an increased need to urinate, changes in sleep patterns, and tremors.

To address these concerns, the researchers are working with two industry partners from the care sector who are helping the scientists to develop machine learning methods and interactions with socially assistive robots to support earlier detection of potential infections and raise an alert for investigation by a clinician.

The project will gather continual data about the daily activities of individuals in their homes via sensors that could help spot changes in behavior or activity levels and trigger an interaction with a socially assistive robot. Known as “FEATHER,” the AI platform will combine and analyze these data points to flag potential infection signs before an individual or caretaker is even aware that there is a problem. Behavioral changes that could indicate UTI include changes in walking pace, increased frequency of urination, changes in cognitive function, or a change in sleep patterns, all of which could be noticed and documented by interaction with the assistive robot.
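As a rough illustration of the kind of behavioral monitoring described above, consider a minimal anomaly-detection sketch. This is not the FEATHER platform's actual method; the function name, window size, and threshold are illustrative assumptions, and a z-score against a rolling baseline is just one simple way a system might flag a sudden behavioral change (such as a spike in nightly bathroom visits) for clinician review.

```python
import statistics

def flag_anomalies(daily_values, window=14, z_threshold=2.5):
    """Flag days whose activity metric deviates sharply from a
    rolling baseline of the previous `window` days."""
    flagged = []
    for i in range(window, len(daily_values)):
        baseline = daily_values[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev == 0:
            continue  # no variation in baseline; nothing to compare against
        z = (daily_values[i] - mean) / stdev
        if abs(z) >= z_threshold:
            flagged.append(i)  # day index worth an alert for investigation
    return flagged

# 14 days of stable nightly bathroom visits, then a sudden jump on day 14
visits = [2, 3, 2, 2, 3, 2, 2, 2, 3, 2, 2, 3, 2, 2, 8]
print(flag_anomalies(visits))  # day 14 is flagged
```

In a real deployment, a flag like this would trigger the robot interaction and clinician alert the article describes, rather than any automated diagnosis.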

The AI and implementation aspects of the project will be led by Professor Kia Nazarpour, Dr. Nigel Goddard, and Dr. Lynda Webb from the University of Edinburgh. The Human-Robot Interaction aspects will be led by Professor Lynne Baillie, assisted by Dr. Mauro Dragone, from Heriot-Watt University.

Professor Kia Nazarpour, project lead and Professor of Digital Health at the School of Informatics, University of Edinburgh, said, “This unique data platform will help individuals, caretakers, and clinicians to recognize the signs of potential urinary tract infections far earlier, helping to prompt the investigations and medical tests needed. Earlier detection makes timely treatment possible, improving outcomes for patients, lowering the number of people presenting at hospital, and reducing costs to the NHS.”

How BigRio Helps Bring Advanced AI Solutions to Healthcare

As the FEATHER project demonstrates, improving disease detection, medical imaging, and diagnostics is an area where AI and machine learning are making some of their biggest impacts.

BigRio prides itself on being a facilitator and incubator for such advances in leveraging AI to improve diagnostics. In fact, it was my father’s own battle with and eventual death from lung disease that set me on my path to finding ways to use AI to provide earlier detection of serious medical conditions for improved patient outcomes.

Eventually, among our other success stories, we did collaborate with a researcher who is in the process of developing a cognitive digital twin of the human lung. Right now, that technology is being used specifically in the realm of testing inhalers for asthma patients, but like the FEATHER UTI detection tool, it has broader implications for better diagnostics and treatments for COPD and other lung diseases.

We like to think of ourselves as a “Shark Tank for AI.”

If you are familiar with the TV series, then you know that, basically, what they do is hyper-accelerate the most important part of the incubation process – visibility. You can’t get better visibility than getting in front of celebrity investors and a TV audience of millions of viewers. Many entrepreneurs who have appeared on that program – even those who did not get picked up by the sharks – succeeded because others who were interested in their concepts saw them on the show.

At BigRio, we may not have a TV audience, but we can do the same. We have the contacts and the expertise not only to weed out the companies that are not ready, as the sharks on the TV show do, but also to mentor those that are ready and get them noticed by the right people in the biomedical community.

Rohit Mahajan is a Managing Partner with BigRio. He has a particular expertise in the development and design of innovative solutions for clients in Healthcare, Financial Services, Retail, Automotive, Manufacturing, and other industry segments.

BigRio is a technology consulting firm empowering data-driven innovation through advanced AI. We specialize in cutting-edge Big Data, Machine Learning, and Custom Software strategy, analysis, architecture, and implementation solutions. If you would like to benefit from our expertise in these areas or have further questions on the content of this article, please do not hesitate to contact us.

A cognitive digital twin is an AI-driven representation of a real-world system. The objects being twinned can be mechanical, such as vehicles, ships, and airplanes, or biological, such as organs and biological processes. It is the latter that is radically altering pharmaceutical research and may very well change the nature of clinical drug trials forever.

Traditional drug discovery is a long and complex process that can take years and many millions of dollars. Bringing a new drug through all phases of clinical trials and to market starts with recruiting the right candidates and then proceeds through many steps and phases of testing the drug against placebos in those candidates.

Finding those patients is one of the most time-consuming aspects of the process. But that is all changing thanks to AI and, specifically, cognitive digital twin (CDT) technologies.

Cognitive digital twins behave virtually the same way, statistically, as their physical counterparts, which makes them ideal subjects for AI’s powerful ability to assimilate massive amounts of data and make remarkably accurate predictions.

Digital twins have been used quite effectively to monitor health and provide preventive maintenance for highly complex systems, ranging from high-performance sports cars to military aircraft.

Now, they are changing the very landscape of drug discovery by modeling perhaps the most complex systems of all: organs and even complete human beings. For example, digital twins of patients are now being used to find ideal candidates in that all-important recruitment phase of a drug trial. The twin is created using AI algorithms and machine learning to build a “virtual patient” by leveraging data from previous clinical trials and from individual patient records. The model predicts how the patient’s health would progress during the course of the trial.

This kind of CDT technology is also being used to create “virtual patients” who stand in for the control group – the ones getting a placebo – in the typical double-blind drug trial protocol. The digital twin predicts how an individual patient would react if given a placebo, essentially creating a simulated control group for that patient. Think of it as splitting yourself into two exact copies, one given the actual drug and the other given the placebo as a control. This makes for an even more accurate control group than splitting trial participants into two groups, as in typical trials, because the control group is now exactly the same as the group getting the drug. The digital twin virtually eliminates any variance between the drug group and the placebo group that could stem from genetic, physical, or lifestyle differences between the two groups.
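To make the simulated-control-group idea concrete, here is a deliberately simplified sketch. Real digital-twin platforms use far richer models trained on full patient histories; this hypothetical example merely predicts a patient's counterfactual placebo outcome by averaging the outcomes of the most similar patients from historical placebo arms (a nearest-neighbors assumption, not the method of any particular vendor, and the numbers are invented).

```python
# Hypothetical historical placebo-arm records: (baseline_score, observed_change)
historical = [
    (42.0, -1.5), (55.0, -0.5), (47.0, -1.0),
    (60.0, 0.5), (50.0, -0.8), (38.0, -2.0),
]

def predict_placebo_change(baseline, records, k=3):
    """Predict how a patient's score would change on placebo by
    averaging the k historical patients closest at baseline."""
    nearest = sorted(records, key=lambda r: abs(r[0] - baseline))[:k]
    return sum(change for _, change in nearest) / k

# A treated patient with baseline 48.0 gets a predicted placebo trajectory,
# giving each participant their own "virtual" control arm.
print(predict_placebo_change(48.0, historical))
```

The design point is that the comparison is per-patient: instead of hoping two randomized groups are statistically similar, each treated patient is compared against a model of themselves on placebo.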

Furthermore, replacing or augmenting control groups with digital twins could help patient volunteers as well as researchers. Most people who join a trial do so hoping to get a new drug that might help them when already-approved drugs have failed. But there’s a 50/50 chance they’ll be put into the control group and won’t get the experimental treatment. Replacing control groups with digital twins could mean more people have access to experimental drugs.

In Silico Research

And finally, another area where CDT technology is making a tremendous difference in drug discovery is in the emerging area of “in silico” research, where digital twinning is used to create so-called “organs on a chip.” Digital twins of the human heart, lungs, and other organs are already being used to hyper-accelerate drug discovery.

One of the promises of CDT is to make complete in silico drug trials, from start to finish, a reality. Early successes occurring now are paving the way to a time in the not-so-distant future when neither humans, nor animals, nor even a single living cell will be required for drug discovery, and yet the impact of any given therapeutic or treatment option on a targeted organ, system, or even an individual cell can be accurately charted.

Citadel and AI for Drug Discovery

AI and machine learning are having a tremendous impact on healthcare in America, from streamlining hospital operations to improving diagnostics and enabling more intuitive telemedicine applications. However, AI’s greatest impact will likely be in the way digital twins and other AI solutions are revolutionizing pharmaceutical research.

To that end, Citadel Discovery was launched in 2021 with the purpose of providing a kind of “open access” to the data and technology that will drive the future of pharma research, streamlining it and lowering the costs of drug discovery and biological research.

The costs of drug discovery continue to rise, with current estimates exceeding $2 billion per drug. Moreover, bringing a drug successfully through all clinical trial phases takes, on average, 10-12 years of research and development. Artificial intelligence and machine learning in drug discovery hold the key to reducing these costs and timelines.

Rohit Mahajan is the President and Co-Founder of Citadel Discovery. He has a particular expertise in the development and design of innovative solutions for clients in Healthcare, Financial Services, Retail, Automotive, Manufacturing, and other industry segments.

Citadel Discovery is dedicated to leveraging AI and ML to democratize access to the data and technology that will drive the future of biological exploration, drug discovery, and health technologies. If you would like to benefit from our expertise in these areas or have further questions on the content of this article, please do not hesitate to contact us.

A digital twin is a digital representation of a physical object or system. Cognitive digital twin (CDT) technology uses AI to create highly sophisticated “twins” or models to accurately mimic a real-world object. That object could be a car, an airplane, ship, or helicopter. Increasingly in the healthcare field, CDT has been used to “twin” organs and biological systems for research, diagnostics, and health maintenance. Digital twins have even been built to represent and understand regions and cities.

Now, in perhaps one of the most complex and ambitious uses of CDT to date, the National Oceanic and Atmospheric Administration (NOAA) has announced its plans to create a digital twin of the planet Earth to track global warming and other environmental issues!

The agency has partnered with NVIDIA and Lockheed Martin to construct the Earth Observation Digital Twin, an inaugural prototype of Earth modeled on real-time geophysical data sourced from satellites and ground stations.

According to NOAA, the replica Earth, or EODT, will be designed as a two-dimensional computer program. Some potential climate impacts the EODT can display include global glacier melting, drought impacts, wildfire prediction, and other climate change events.

“We’re providing a one-stop shop for researchers, and for next-generation systems, not only for current, but for recent past environmental data,” Lockheed Martin Space Senior Research Scientist Lynn Montgomery said. “Our collaboration with NVIDIA will provide NOAA a timely, global visualization of their massive datasets.”

Emerging technologies like artificial intelligence play a key role in the EODT’s data processing and modeling. Matt Ross, a senior manager at Lockheed Martin, said that the sheer volume and diversity of the NOAA data feeding the EODT would make it challenging to draw accurate insights from the application without the use of AI.

“This data happens to come in different formats, because the data are so diverse, because it’s measuring so much different stuff,” Ross said. “It arrives in different formats that, absent technology, it could make it very, very difficult to gain the insights that NOAA needs to make decisions.”

Leveraging the power of AI and machine learning algorithms will help NOAA researchers assimilate and identify the incoming data, as well as detect any anomalies. Ross added that the combined power of AI- and ML-based data processing is key to Lockheed and NVIDIA’s “digital twin” programming technology in that it can accurately model past data as well as future realities, all in an interactive, real-time interface.
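A toy sketch of the two tasks just described, normalizing heterogeneous feeds into one schema and flagging outliers, might look like the following. The field names, record formats, and climatology numbers are invented for illustration; NOAA's actual pipelines are vastly more sophisticated.

```python
def normalize(record):
    """Map records arriving in different source formats onto one
    common schema: (source, variable, value_in_celsius)."""
    if "temp_f" in record:                  # e.g. a hypothetical ground-station feed
        return (record["station"], "sst", (record["temp_f"] - 32) * 5 / 9)
    if "kelvin" in record:                  # e.g. a hypothetical satellite feed
        return (record["sat"], "sst", record["kelvin"] - 273.15)
    raise ValueError("unknown format")

def anomalies(values, mean, stdev, z=3.0):
    """Flag normalized values far outside an expected climatology."""
    return [v for v in values if abs((v - mean) / stdev) >= z]

# Two feeds in different formats end up comparable after normalization,
# and an implausible reading stands out against the expected range.
readings = [normalize({"station": "A", "temp_f": 77.0})[2],
            normalize({"sat": "G16", "kelvin": 300.15})[2]]
print(anomalies(readings + [45.0], mean=26.0, stdev=2.0))
```

The design point Ross makes survives even at this scale: once everything shares one schema, the same anomaly logic can run over data that arrived in entirely different formats.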

While both NVIDIA and Lockheed intend for the final deliverable to be a two-dimensional user experience, additional capabilities may be added in the future.

“The fact that we can pull all this data into a single sort of format, in a single viewpoint, allows you to have real-time or near real time, access to it, and the interdependencies of that data to make real-time decisions,” said Dion Harris, the lead product manager of accelerated computing at NVIDIA.

Other Industries Benefiting from Digital Twin Technologies

In addition to paving the way for a digital twin of the planet itself to monitor climate change, AI and digital twinning are revolutionizing many other industries, chief among them transportation. Just as CDT can monitor the health of the Earth, cognitive digital twin technologies are proving invaluable for predictive maintenance of high-value military vehicles, airplanes, ships, and even passenger cars. Digital twin solutions like those developed by CarTwin extend the lifespan of cars and other vehicles by monitoring the vehicle’s “health” through its “digital twin.”

Basically, CarTwin can provide diagnostic and predictive models for all vehicle systems for which data is available (either directly or indirectly) onboard the vehicle.

Virtually any part of the vehicle that has sensors, or for which sensors can be developed, can be “twinned.” These data sets are then enhanced and augmented with design and manufacturing data that is already available from the OEM.

Primarily designed for use in vehicle fleets, CarTwin, in combination with powerful AI models, predicts breakdowns, monitors and improves performance, and measures and records real-time greenhouse gas emissions, reducing expensive maintenance costs and avoiding the lost revenue associated with fleet downtime.
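As one hedged illustration of what "predicting breakdowns" can mean in practice, a digital twin might extrapolate a slowly degrading health metric (say, a battery state-of-health percentage) to estimate when it will cross a service threshold. The sketch below is a generic least-squares trend fit with made-up numbers, not CarTwin's actual model.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def days_until_threshold(days, health, threshold=70.0):
    """Extrapolate a declining health metric to the service threshold.
    Returns None if no degradation trend is detected."""
    slope, intercept = linear_fit(days, health)
    if slope >= 0:
        return None
    return (threshold - intercept) / slope

# Battery health sampled over 90 days declines steadily; the fit
# projects when maintenance will be due.
print(days_until_threshold([0, 30, 60, 90], [100, 97, 94, 91]))
```

Scheduling service before the projected crossing date, rather than after a failure, is the essence of the predictive-maintenance value proposition described above.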

Rohit Mahajan is a Managing Partner at BigRio and the President and Co-Founder of Citadel Discovery. He has a particular expertise in the development and design of innovative AI and machine learning solutions for clients in Healthcare, Financial Services, Retail, Automotive, Manufacturing, and other industry segments.

CarTwin has leveraged AI and Digital Twin technologies to create a digital, cloud-based clone of a physical vehicle designed to detect, prevent, predict, and optimize through AI and real-time analytics. If you would like to benefit from our expertise in these areas or if you have further questions on the content of this article, please do not hesitate to contact us.

One of the country’s leading comprehensive cancer centers has just announced that it is tapping an artificial intelligence-powered drug discovery platform to aid its development of novel cancer therapeutics.

The center is working with AI developer Exscientia to aid its discovery of new cancer drugs. According to an MD Anderson press release, the collaboration will start with “jointly identified oncology targets and then employ Exscientia’s AI platform to design small-molecule drugs.” The resulting candidates will be examined by MD Anderson’s Therapeutics Discovery division and its Institute for Applied Cancer Science, and the most promising prospects will potentially advance into clinical proof-of-concept studies at the Houston cancer center.

MD Anderson’s drug discovery institute, known as IACS, and the cancer center’s other teams have to date helped graduate at least five small-molecule and antibody-based therapies into early-stage clinical testing, including through collaborations with Bristol Myers Squibb, Ionis, Astellas and more.

The financial terms of the joint venture were not disclosed; however, in their announcement, Exscientia and MD Anderson said they will “jointly contribute to and support each program” that is targeted for development.

Exscientia has been a leader in the AI-driven design of small-molecule drugs and antibody therapies. In addition to partnering with facilities such as MD Anderson and well-known pharmaceutical companies, earlier this year Exscientia secured the rights to develop a drug of its own. After wrapping up an AI collaboration with Bayer to develop targets in cancer and cardiovascular disease, the two companies announced that Exscientia would retain the option to develop one of the two targets.

Citadel and AI for Drug Discovery

Similar to the partnership between Exscientia and MD Anderson, Citadel Discovery is sharing knowledge and expertise to better enable drug discovery by providing access to data, models, and results discounted for academics and by developing a sharing platform and an expanded list of drug targets.

Citadel was launched in 2021 with the purpose of providing a kind of “open access” to the data and technology that will drive the future of pharma research, streamlining it and lowering the costs of drug discovery and biological research.

The costs of drug discovery continue to rise, with current estimates exceeding $2 billion per drug. Moreover, bringing a drug successfully through all clinical trial phases takes, on average, 10-12 years of research and development. Artificial intelligence and machine learning in drug discovery hold the key to reducing these costs and timelines.

Rohit Mahajan is a Managing Partner at BigRio and the President and Co-Founder of Citadel Discovery. He has a particular expertise in the development and design of innovative AI and machine learning solutions for clients in Healthcare, Financial Services, Retail, Automotive, Manufacturing, and other industry segments.


As AI continues to advance in this digital age, in what seems an ever-increasing conflict between man and machine, the one bastion where “man” held out hope of dominance was the arts. However, even that may be wishful thinking, as recently there has been an increasing amount of “art” in artificial intelligence!

An interesting, and perhaps somewhat disturbing, case in point is the introduction of “text-to-image” AI tools such as Midjourney and Stable Diffusion.

These programs have become wildly popular for their remarkable ability to take language prompts from human users and translate them into original images. Another such platform, DALL-E, is now producing 2 million images a day, including some uncannily realistic creations as well as surreal abstracts inspired by humans’ stated feelings.

What does this mean for the future of AI, AI startup opportunities, and, more importantly, to human artists and the one field that they had hoped would hold the line between man and machine?

When you use a text-to-image AI tool like Midjourney, you type in a phrase that describes what you want — for example, “A father feeling grief in the style of Van Gogh.”

In less than a minute, Midjourney produces four original images it thinks may match the prompt. You can then pick the image that you like best, create new variations based on that image, and refine them from there.

The private sector is already starting to realize the program’s potential, said David Holz, the founder of Midjourney, speaking to Marketplace.

“Business owners, game designers, people in the movie industry are using it,” he said.

About 30% of Midjourney’s users are professionals who use it primarily to brainstorm for commercial projects, Holz said, adding that tech like Midjourney will change how artists work.

But smart employers won’t use it to replace them. At least, that is the hope among the creative community.

“Some people will see this as an opportunity to cut costs and have the same quality,” Holz said. “They will fail.”

Artists who use these kinds of programs do not feel they are “cheating” any more than an architect or engineer who uses CAD software does. They are still channeling their creativity; they are just using an advanced tool to do so.

While purists remain, raising alarms on social media about AI replacing human creativity, the artists who have embraced the technology say they are still creating “art.” It is simply art that may belong in a very different category, just as computer graphic art differs from oil painting; both are undeniably art, created by artists.

What do you think?

Rohit Mahajan is a Managing Partner at BigRio and the President and Co-Founder of Citadel Discovery. He has a particular expertise in the development and design of innovative AI and machine learning solutions for clients in Healthcare, Financial Services, Retail, Automotive, Manufacturing, and other industry segments.


The automotive sector has been one of the industries hardest hit by the pandemic and its aftermath. Rampant inflation, a chip shortage, and other global economic factors have kept new-car supply low and prices high. All of which means that automotive consumers are choosing to keep their older cars longer.

AI is helping car owners do exactly that by providing solutions that can help drivers and repair shops diagnose current problems on their cars and anticipate future repair and service needs – which can all extend the lifespan of their vehicles.

However, whereas most of the AI-driven auto diagnostic solutions becoming available are “apps” designed for individual consumers or repair shops, CarTwin is a machine learning/digital twin technology designed for fleet operators and auto manufacturers, bringing AI-driven predictive maintenance to a much grander scale.

Having proved itself in the field with a well-known German manufacturer of high-performance vehicles, CarTwin now serves the shifting needs of the auto industry with an AI-driven solution that enables a diverse set of automotive use cases. CarTwin creates unique innovations and opportunities by connecting the physical and digital worlds, providing real-time operational awareness of vehicle, component, and manufacturing performance.

“The automotive industry has witnessed an incredible transformation over the last decade. CarTwin represents the next chapter in its digital disruption. We’re leveraging artificial intelligence, industry expertise, and easy-to-use tools to provide the most complete Digital Twin technology blueprint available today,” says Corey Thompson, CEO of CarTwin.

Basically, CarTwin can provide diagnostic and predictive models for all vehicle systems for which data is available (either directly or indirectly) onboard the vehicle.

“As an example, for the project with our OEM, we have already built models for the suspension and battery systems and are continuing to add additional systems as we move along. Our POC project will add the ignition system, fuel system, and turbocharging system,” says Thompson.

Virtually any part of the vehicle that has sensors, or for which sensors can be developed, can be “twinned.” These data sets will be augmented with design and manufacturing data that is already available from the OEM.

CarTwin obtains its data from the “CAN Bus,” which is basically the communication network on a vehicle and enables real-time data acquisition. CarTwin utilizes the data from the CAN Bus, as well as historical inspection, repair, and parts-replacement data from service centers, to build its models. This means it can be used in any vehicle newer than 1996, when the CAN Bus began to appear in production vehicles.
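For a flavor of what reading vehicle data involves, here is a small sketch that decodes two standard OBD-II mode-01 responses from the data bytes of a CAN frame: engine RPM (PID 0x0C) and coolant temperature (PID 0x05), using the decoding formulas defined in the SAE J1979 standard. This is illustrative only and is not CarTwin code.

```python
def decode_obd2(payload):
    """Decode a few standard OBD-II mode-01 responses carried over
    the CAN bus. `payload` is the data bytes of the response frame."""
    mode, pid = payload[0], payload[1]
    if mode != 0x41:                      # 0x41 marks a response to mode 01
        raise ValueError("not a mode-01 response")
    if pid == 0x0C:                       # engine RPM = (256*A + B) / 4
        return ("rpm", (256 * payload[2] + payload[3]) / 4)
    if pid == 0x05:                       # coolant temperature (deg C) = A - 40
        return ("coolant_c", payload[2] - 40)
    raise ValueError(f"PID {pid:#04x} not handled")

# A frame reporting 1726 RPM, and one reporting 83 deg C coolant temp
print(decode_obd2([0x41, 0x0C, 0x1A, 0xF8]))
print(decode_obd2([0x41, 0x05, 0x7B]))
```

A production pipeline would stream thousands of such decoded signals per vehicle into the twin's models; the point here is only that each raw CAN payload maps to a physical quantity via a published formula.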

The platform records and utilizes the data streaming through fleet vehicles. In combination with powerful AI models, CarTwin predicts breakdowns, monitors and improves performance, and measures and records real-time greenhouse gas emissions, reducing expensive maintenance costs and avoiding lost revenue associated with fleet downtime. Insights from CarTwin help reduce the carbon footprint and benefit corporate ESG (Environmental, Social, and Governance) objectives.

“Most importantly,” says Thompson, “our solutions require little to no infrastructure improvements for our customers to experience significant competitive advantages. We have devices for purchase that simply plug into the vehicle’s OBD (onboard diagnostics) port.”

“Additionally, CarTwin provides carbon footprint intensity and reporting to meet corporate ESG objectives,” Thompson adds. “Our CarbonLess tool can identify how and where you can save fuel, and fleet operators can use CarbonLess to monitor and identify anomalous CO2 and NOx emissions.”

Rohit Mahajan is a Managing Partner at BigRio and the President and Co-Founder of Citadel Discovery. He has a particular expertise in the development and design of innovative AI and machine learning solutions for clients in Healthcare, Financial Services, Retail, Automotive, Manufacturing, and other industry segments.

CarTwin has leveraged AI and Digital Twin technologies to create a digital, cloud-based clone of a physical vehicle designed to detect, prevent, predict, and optimize through AI and real-time analytics. If you would like to benefit from our expertise in these areas or if you have further questions on the content of this article, please do not hesitate to contact us.

Artificial Intelligence has been promising to revolutionize healthcare and several other industries for a long time — and by all evidence, the revolution, while still in its infancy, has arrived. If you are not sure, just ask Siri!

Seriously, AI and machine learning are already having major impacts across many industries, not the least of which is healthcare, where AI is already radically shifting the way healthcare is delivered, streamlining operations, improving diagnostics, and improving outcomes.

However, not surprisingly, the growing ubiquity of AI is raising some concerns about privacy and other issues in the way that users interact with AI algorithms. So much so that the Biden administration has recently launched a blueprint for an “AI Bill of Rights” to help ensure the ethical use of AI. Not surprisingly, it is modeled after the sort of patient “bills of rights” people have come to expect as they interact with doctors, hospitals, and other healthcare professionals.

The blueprint outlines five basic principles:
1. Safe and effective systems
2. Algorithmic discrimination protections
3. Data privacy
4. Notice and explanation
5. Human alternatives, consideration, and fallback

These basic principles are meant to provide a framework or “guidance” for the US government, tech companies, researchers, and other stakeholders, but for now, it is important to point out that the blueprint represents nonbinding recommendations and does not constitute regulatory policy.

Such regulations will undoubtedly be taken up by Congress in the not-too-distant future. But for now, the guidelines have been designed to apply to AI and automated tools across industries, including healthcare, and are meant to begin the dialogue around a much-needed larger conversation on the ethical use of AI.

The core concept behind the five guiding principles is for Americans to feel safer as they increasingly interact with AI, particularly in healthcare settings. With such guidelines in place, they can feel confident that they are being shielded from harmful or ineffective systems, that they will not face bias or inequities caused by AI, and that they will be protected from abusive data practices via built-in safeguards and transparency over how their data is used.

The fifth principle is an important one. It ensures that Americans receive a complete understanding of how the AI works and is being used, and offers the option to “opt out” and interact with a human interface instead, where practical and appropriate.

“Considered together, the five principles and associated practices of the Blueprint for an AI Bill of Rights form an overlapping set of backstops against potential harms,” the document concludes. “This purposefully overlapping framework, when taken as a whole, forms a blueprint to help protect the public from harm. The measures taken to realize the vision set forward in this framework should be proportionate with the extent and nature of the harm, or risk of harm, to people’s rights, opportunities, and access.”

Rohit Mahajan is a Managing Partner with BigRio. He has a particular expertise in the development and design of innovative solutions for clients in Healthcare, Financial Services, Retail, Automotive, Manufacturing, and other industry segments.

BigRio is a technology consulting firm empowering data-driven innovation through advanced AI. We specialize in cutting-edge Big Data, Machine Learning, and Custom Software strategy, analysis, architecture, and implementation solutions. If you would like to benefit from our expertise in these areas or have further questions on the content of this article, please do not hesitate to contact us.

October is the month for all things spooky, so it’s no wonder that the 2022 Nobel Prize in Physics has been awarded to three scientists whose groundbreaking work proved the “spooky action at a distance” relationship in quantum mechanics.

While it is an odd phenomenon, there is nothing “supernatural” about this behavior of particles. The theory, as proven by the work of the scientists receiving the award, refers to the way that particles once “bound” together at the quantum level will still behave as if they were bound, even when they are separated over great distances.

John F. Clauser, Alain Aspect, and Anton Zeilinger won the 10 million Swedish krona ($915,000) prize for “experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science,” the Royal Swedish Academy of Sciences announced on Tuesday Oct. 4.

Albert Einstein, who was aware of the phenomenon, dubbed it “spooky action at a distance.”

This concept of “quantum entanglement” paves the way for such theoretical applications as teleportation and forms the very real basis for quantum computing. It is this “spooky” behavior of entangled particles that makes quantum computers orders of magnitude more powerful than even the most powerful supercomputers in use today.

A quantum computer used by Google is said to be 100 million times faster than any of today’s conventional systems at certain tasks.

What is a Quantum Computer?
The main difference between quantum computers and conventional computers is that quantum machines do not store and process information in the bits and bytes that we are familiar with, but rather in something else entirely, known as quantum bits, or “qubits.”

All conventional computing comes down to streams of electrical or optical pulses representing 1s or 0s. Everything from your tweets and e-mails to your iTunes songs and YouTube videos is essentially a long string of these binary digits.

Qubits, on the other hand, are typically made up of subatomic particles such as electrons or photons: the very same photons that were involved in the trio’s Nobel Prize-winning experiments.

Qubits leverage “quantum entanglement,” something that Einstein himself called “spooky action at a distance.”

A simple way of understanding “entanglement” is as an interdependence based on a long and intimate relationship between two particles, like a child who goes away to college across the country but still depends on the support of his or her parents.

In quantum computing, entanglement is what accounts for the nearly incomprehensible processing power and memory of quantum computers. In a conventional computer, bits and processing power are in a 1:1 relationship – if you double the bits, you get double the processing power, but thanks to entanglement, adding extra qubits to a quantum machine produces an exponential increase in its calculation ability.
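This exponential scaling is easy to see with a toy simulation (a plain-Python sketch, not a real quantum SDK): a classical machine must track one amplitude per basis state, so each extra qubit doubles the bookkeeping, while an entangled pair like the photons in the prize-winning experiments shows perfectly correlated outcomes.

```python
import numpy as np

def statevector_size(n_qubits: int) -> int:
    """A register of n qubits is described by 2**n complex amplitudes."""
    return 2 ** n_qubits

# Each added bit adds one unit of capacity; each added qubit DOUBLES
# the number of amplitudes needed to describe the register. At 300
# qubits, that is more amplitudes than atoms in the observable universe.

# A Bell pair: two qubits entangled so that measuring one fixes the other.
bell = np.zeros(4)
bell[0b00] = 1 / np.sqrt(2)   # amplitude of |00>
bell[0b11] = 1 / np.sqrt(2)   # amplitude of |11>
probabilities = bell ** 2     # 50% |00>, 50% |11>, never |01> or |10>
```

Measuring one qubit of the Bell pair instantly tells you the other, no matter how far apart they are, which is exactly the “spooky” correlation the laureates demonstrated.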

Quantum computing is still very much an emerging technology with large scale and practical applications still a way off. However, the technology is steadily graduating from the lab and heading for the marketplace. In 2019, Google announced that it had achieved “quantum supremacy,” IBM has committed to doubling the power of its quantum computers every year, and numerous other companies and academic institutions are investing billions toward making quantum computing a commercial reality.

Quantum computing will take artificial intelligence and machine learning to the next level. The marriage between the two is an area to pay very close attention to for startups as well as for where Big Tech will be going over the next five to ten years.

Zeilinger, 77, professor emeritus at the University of Vienna, said during a press conference about the award, “It is quite clear that in the near future we will have quantum communication all over the world.”

Kudos to the Royal Swedish Academy for recognizing the groundbreaking work of the scientists who have opened the door into another world and the unbridled potential of artificial intelligence and information technology.


Artificial Intelligence has been promising to revolutionize healthcare for quite some time; however, one look at any modern hospital or healthcare facility, and it is easy to see that the revolution is already here.

At almost every patient touchpoint, AI is already having an enormous impact, changing the way healthcare is delivered, streamlining operations, improving diagnostics, and improving outcomes.

Although the deployment of AI in the healthcare sector is still in its infancy, it is becoming a much more common sight. According to the technology research and consulting firm Gartner, healthcare IT spending for 2021 was a hefty $140 billion worldwide, with enterprises listing “AI and robotic process automation (RPA)” as their lead spending priorities.

Here, in no particular order of importance, are seven of the top areas where healthcare AI solutions are being developed and currently deployed.

1. Operations and Administration
A hospital’s operation and administration expenses can be a major drain on the healthcare system. AI is already providing tools and solutions that are designed to improve and streamline administration. Such AI algorithms are proving to be invaluable for insurers, payers, and providers alike. Specifically, there are several AI programs and AI healthcare startups that are dedicated to finding and eliminating fraud. It has been estimated that healthcare fraud costs insurers anywhere between $70 billion and $234 billion each year, harming both patients and taxpayers.

2. Medical Research
Probably one of the most promising areas where AI is making a major difference in healthcare is medical research. AI tools and software solutions are having an astounding impact on every aspect of medical research, from improved screening of candidates for clinical trials, to identifying target molecules in drug discovery, to the development of “organs on a chip.” Combined with the power of ever-improving Natural Language Processing (NLP), AI is changing the very nature of medical research for the better.

3. Predictive Outcomes and Resource Allocation
AI is being used in hospital settings to better predict patient outcomes and more efficiently allocate resources. This proved extraordinarily helpful during the peak of the pandemic, when facilities were able to use AI algorithms to predict, upon admission to the ER, which patients would most benefit from ventilators, which were in very short supply. Similarly, a Stanford University pilot project is using AI algorithms to determine which patients are at high risk of requiring ICU care within an 18- to 24-hour period.

4. Diagnostics
AI applications in diagnostics, particularly in the field of medical imaging, are extraordinary. AI can “see” details in MRIs and other medical images far finer than the human eye can and, when tied into the enormous volume of medical image databases, can make far more accurate diagnoses of conditions such as breast cancer, eye disease, heart and lung disease, and much more. AI can look at vast numbers of medical images and identify patterns in seconds that would take human technicians hours or days to find. AI can also detect minor variations that humans simply could not find, no matter how much time they had. This not only improves patient outcomes but also saves money. For example, studies have found that earlier diagnosis and treatment of most cancers can cut treatment costs by more than 50%.

5. Training
AI is giving medical students and doctors “hands-on” training via virtual surgeries and other procedures that provide real-time feedback on success and failure. Such AI-based training programs allow students to learn techniques in safe environments and receive immediate critique on their performance before they get anywhere near a patient. One study found that med students taught with AI learned skills 2.6 times faster and performed 36% better than those who were not.

6. Telemedicine
Telemedicine has revolutionized patient care, particularly since the pandemic, and now AI is taking remote medicine to a whole new level. Patients can connect to AI-driven diagnostic tools through their smartphones, providing remote images and monitoring of changes in detectable skin cancers, eye conditions, dental conditions, and more. AI programs are also being used to remotely monitor heart patients, diabetes patients, and others with chronic conditions, and to help ensure they take their medications as prescribed.

7. Direct treatment
In addition to enabling better clinical outcomes through improved diagnostics and resource allocation, AI is already making a huge difference in the direct delivery of treatments. One exciting and extremely profound example of this is robotic/AI-driven surgical procedures. Minimally invasive and non-invasive AI-guided surgical procedures are already becoming quite common. Soon, all but some of the most major surgeries, such as open heart surgeries, can and will be done as minimally invasive procedures, and even the most complex “open procedures” will be made safer, more accurate, and more efficient thanks to surgical AI and digital twins of major organs such as the lungs and the heart.


Sometimes I get to thinking that Alexa isn’t really my friend. I mean sure, she’s always polite enough (well, usually, but it’s normal for friends to fight, right?). But she sure seems chummy with that pickle-head down the hall too. I just don’t see how she can connect with us both — we’re totally different!

So that’s the state of the art of conversational AI: a common shared agent that represents an organization. A spokesman. I guess she’s doing her job, but she’s not really representing me or M. Pickle, and she can’t connect with either of us as well as she might if she didn’t have to cater to both of us at the same time. I’m exaggerating a little bit – there are some personalization techniques (*cough* crude hacks *cough*) in place to help provide a custom experience:

  • There is a marketplace of skills. Recently, I can even ask her to install one for me.
  • I have a user profile. She knows my name and zip code.
  • Through her marketplace, she can access my account and run my purchase through a recommendation engine (the better to sell you with, my dear!)
  • I changed her name to “Echo” because who has time for a third syllable? (If only I were hamming this up for the post; sadly, a true story)
  • And if I may digress to my other good friend Siri, she speaks British to me now because duh.

It’s a start but, if we’re honest, none of these change the agent’s personality or capabilities to fit with all of my quirks, moods, and ever-changing context and situation. Ok, then. What’s on my wishlist?

  • I want my own agent with its own understanding of me, able to communicate and serve as an extension of myself.
  • I want it to learn everything about how I speak. That I occasionally slip into a Western accent and say “ruf” instead of “roof”. That I throw around a lot of software dev jargon; Python is neither a trip to the zoo nor dinner (well, once, and it wasn’t bad. A little chewy.) That Pickle Head means my colleague S… nevermind. You get the idea.
  • I want my agent to extract necessary information from me in a way that fits my mood and situation. Am I running late for a life-changing meeting on a busy street uphill in a snowstorm? Maybe I’m just goofing around at home on a Saturday.
  • I want my agent to learn from me. It doesn’t have to know how to do everything on this list out of the box – that would be pretty creepy – but as it gets to know me it should be able to pick up on my cues, not to mention direct instructions.

Great, sign me up! So how do I get one? The key is to embrace training (as opposed to coding, crafting, and other manual activities). As long as there is a human in the loop, it is simply impossible to scale an agent platform to this level of personalization. There would be a separate and ongoing development project for every single end user… great job security for developers, but it would have to sell an awful lot of stuff.

To embrace training, we need to dissect what goes into training. Let’s over-simplify the “brain” of a conversational AI for a moment: we have NLU (natural language understanding), DM (dialogue management), and NLG (natural language generation). Want an automatically-produced agent? You have to automate all three of these components.

  • NLU – As of this writing, this is the most advanced component of the three. Today’s products often do incorporate at least some training automation, and that’s been a primary enabler that leads to the assistants that we have now. Improvements will need to include individualized NLU models that continually learn from each user, and the addition of (custom, rapid) language models that can expand upon the normal and ubiquitous day-to-day vocabulary to include trade-specific, hobby-specific, or even made-up terms. Yes, I want Alexa to speak my daughter’s imaginary language with her.
  • DM – Sorry developers, if we make plugin skills a la Mobile Apps 2.0 then we aren’t going to get anywhere. Dialogues are just too complex, and rules and logic are just too brittle. This cannot be a programming exercise. Agents must learn to establish goals and reason about using conversation to achieve those goals in an automated fashion.
  • NLG – Sorry marketing folks, there isn’t brilliant copy for you to write. The agent needs the flexibility to communicate to the user in the most effective way, and it can’t do that if it’s shackled by canned phrases that “reflect the brand”.
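To make the division of labor concrete, here is a toy sketch of the three components wired together (all function names and templates here are hypothetical stubs, not any real product’s API). In a trainable agent, each stage would be a learned, per-user model rather than the hard-coded rules shown:

```python
# Toy NLU -> DM -> NLG pipeline. Each stage is stubbed with trivial
# logic purely to show the data flow between the three components.

def nlu(utterance: str) -> dict:
    """Map raw text to an intent plus slots (a per-user model would adapt here)."""
    if "weather" in utterance.lower():
        return {"intent": "get_weather", "slots": {}}
    return {"intent": "unknown", "slots": {}}

def dm(state: dict, parsed: dict) -> str:
    """Pick the next action toward the agent's goal, given dialogue state."""
    if parsed["intent"] == "get_weather":
        return "report_weather"
    return "ask_clarification"

def nlg(action: str, user_profile: dict) -> str:
    """Render the chosen action as text, ideally adapted to the user's style."""
    templates = {
        "report_weather": "Looks sunny today, {name}.",
        "ask_clarification": "Sorry {name}, can you rephrase that?",
    }
    return templates[action].format(name=user_profile["name"])

profile = {"name": "M. Pickle"}
parsed = nlu("What's the weather?")
reply = nlg(dm({}, parsed), profile)
print(reply)  # Looks sunny today, M. Pickle.
```

The point of the sketch is that each `def` above is a seam: replace the hand-written rules inside any one of them with a model trained on a single user’s data, and you are most of the way to the personalized agent on the wishlist.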

In my experience, most current offerings are focusing on the NLU component – and that’s awesome! But to realize the potential of MicroAgents (yeah, that’s right. MicroAgents. You heard it here first) we need to automate the entire agent, which is easier said than done. That’s not to say it won’t happen soon, though – in fact, it might happen sooner than you think.

Echo, I’m done writing. Post this sucker.

Doh!