Authored By: Jeff Harris, Vice President, Global Corporate and Portfolio Marketing, Keysight Technologies
At the core of every Artificial Intelligence (AI) algorithm are three basic ingredients: (1) the ability to measure, (2) knowing how much of what you measure needs to be processed, and, of course, (3) the ability to process more than one input at a time.
The depth to which a system can measure can be thought of as its potential. Determining which aspects of those measurements must be sent to the processor can be thought of as delivering that potential. Finally, knowing how to combine the salient parts of those measurements in the correct proportions, known as "sensor fusion," is the key to unlocking an algorithm's IQ, or reasoning potential. Augment that sensor fusion with a feedback loop and the algorithm gains the ability to check and course-correct its logic, a necessary ingredient in machine learning.
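To make the idea of combining measurements "in the correct proportions" concrete, here is a minimal sketch (not from the article; the sensors and noise figures are hypothetical) that fuses two noisy readings of the same temperature by weighting each one by the inverse of its variance, so the more precise sensor contributes more to the result.

```python
# Minimal sensor-fusion sketch: combine two noisy readings of the same
# quantity using inverse-variance weighting (hypothetical sensors/values).

def fuse(readings):
    """readings: list of (value, variance) pairs from independent sensors."""
    weights = [1.0 / var for _, var in readings]
    fused = sum(w * v for w, (v, _) in zip(weights, readings)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return fused, fused_variance

# Example: a precise sensor (low variance) and a coarse one (high variance).
temp_sensor_a = (21.7, 0.01)   # degrees C, variance 0.01
temp_sensor_b = (22.4, 0.25)   # degrees C, variance 0.25

value, variance = fuse([temp_sensor_a, temp_sensor_b])
print(f"fused estimate: {value:.2f} C (variance {variance:.4f})")
# The precise sensor dominates the result, which is what combining inputs
# "in the correct proportions" means here.
```

Closing the feedback loop the article mentions would involve comparing fused estimates against later observations and adjusting the weights over time, which is beyond this sketch.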
These three attributes are the key to understanding the depth of an AI's unique power. And, like many things, the more you cultivate and calibrate these foundational elements, the better the AI algorithm will perform in the long term. Now that we understand the three areas to explore, let's dive into the first component, measurement depth, and why it is critical to building a robust, high-performing AI algorithm.
Measurement Depth
Metrology is the science of measurement, and measurement depth plays a crucial role in building a robust algorithm. The Gagemaker's Rule, or 10:1 rule, states that a measuring device must be ten times more precise than the measurement it is meant to verify. Measurement depth is so critical because it determines the possible level of precision and sets the algorithm's maximum potential. Therefore, the more precision you have in any given measurement, the greater the AI algorithm's potential.
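As a back-of-the-envelope illustration of the 10:1 rule (my numbers, not from the article): verifying a quantity to within 10 mV calls for an instrument that resolves roughly 1 mV or better. A minimal sketch:

```python
# Hedged sketch of the 10:1 (Gagemaker's) rule: the measuring device should
# resolve at least 10x finer than the tolerance you need to verify.

def required_resolution(tolerance, ratio=10.0):
    """Return the instrument resolution needed to verify a given tolerance."""
    return tolerance / ratio

def meets_rule(instrument_resolution, tolerance, ratio=10.0):
    """True if the instrument is at least `ratio` times finer than the tolerance."""
    return instrument_resolution <= required_resolution(tolerance, ratio)

# Example: verifying a voltage to +/-10 mV calls for ~1 mV resolution.
print(required_resolution(0.010))   # 0.001 (volts)
print(meets_rule(0.0001, 0.010))    # True  -- a 0.1 mV instrument is fine
print(meets_rule(0.005, 0.010))     # False -- a 5 mV instrument is too coarse
```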
Metrology focuses on a deep understanding of a particular measurement. That measurement can be as simple and distinct as voltage, ground, or temperature; as multi-modal as the functioning of aircraft control surfaces; or as complex as maximizing throughput on a manufacturing assembly line. Whether you are measuring a single parameter or several, the depth of each measurement determines the level of programmability that is possible. For instance, measuring a 3-volt system to 1/10th of a volt is not as insightful as measuring it to 1/1000th of a volt. Depending on the system that voltage is powering, the extra precision may be critical for battery life, or it may be a distraction. Maximizing the potential of any algorithm requires matching measurement depth to the system's end-to-end needs. This is true no matter what is being measured, even data systems, which may not be as immediately intuitive, so let's look at one of those examples.
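The 3-volt example can be made concrete with a short sketch (the discharge values below are assumed for illustration): at 1/10th-of-a-volt resolution a slowly draining battery looks flat, while at 1/1000th of a volt the trend an algorithm could act on becomes visible.

```python
# Sketch: the same voltage samples quantized at two measurement depths.
# At 0.1 V resolution the discharge trend disappears; at 1 mV it is visible.

def quantize(value, resolution):
    """Round a reading to the nearest multiple of the instrument resolution."""
    steps = round(value / resolution)
    return round(steps * resolution, 6)   # trim float noise for display

true_voltages = [2.998, 2.994, 2.991, 2.987]   # hypothetical discharge curve

coarse = [quantize(v, 0.1) for v in true_voltages]     # 1/10th of a volt
fine = [quantize(v, 0.001) for v in true_voltages]     # 1/1000th of a volt

print(coarse)  # [3.0, 3.0, 3.0, 3.0]  -- every sample looks identical
print(fine)    # [2.998, 2.994, 2.991, 2.987]  -- the downward trend is measurable
```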
How to Optimize Measurement
Enterprise IT stacks are now a complex web of interconnected data systems, each exchanging information aimed at tuning an organization's operations. These technology stacks include an array of software such as CRM, ERP, databases, and order fulfillment systems, each with its own data formats and custom application programming interfaces (APIs). According to Salesforce, the average company has over 900 applications in its tech stack, many of them cloud-based and all of them receiving software updates that can have ripple effects. Identifying and isolating problems, much less optimizing performance across multiple intersecting software applications, is akin to finding a needle in a collection of interconnected haystacks.
“If you cannot measure it, you cannot improve it.” – Lord Kelvin
Each software application in a tech stack has a different sponsor in the enterprise (finance, human resources, sales, marketing, supply chain), and that sponsoring organization's considerations are top of mind for IT. Every enterprise has custom workflows and integrations across numerous applications and backend systems, and user journeys span various paths and are rarely linear. Therefore, even if two enterprises used identical applications in their tech stacks, mapping all the exchange points and validating the end-to-end operation would still be unique to each. If there were ever an application in need of AI, this would be it. The measurements, in this case, could be the intersystem data input points, the intrasystem data exchange points, and the data display points.
Understanding how an AI algorithm would operate in a system like this starts with understanding how it measures data at three key points:
- Measuring how users interface with the application, regardless of the operating system, which in some cases involves employing robotic process automation (RPA) when button pushes are required
- Measuring the data exchanges between systems and the command APIs that link them in a complex technology stack to ensure they are occurring correctly (a minimal sketch of one such exchange check follows this list)
- Measuring on-screen information across platforms (desktop and mobile), such as images, text, and logos, as a human would, to see how they render
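As a toy illustration of the exchange-point measurement in the second bullet (the systems, field names, and schema below are hypothetical, not a specific vendor API), a measurement hook at one intersystem exchange might check that a record produced by one application arrives at the next with its fields, types, and values intact:

```python
# Toy sketch of measuring one data-exchange point between two systems in a
# tech stack (hypothetical field names; not a specific vendor API).

EXPECTED_SCHEMA = {
    "order_id": str,
    "amount": float,
    "currency": str,
}

def measure_exchange(sent_record, received_record):
    """Return a list of discrepancies observed at this exchange point."""
    issues = []
    for field, expected_type in EXPECTED_SCHEMA.items():
        if field not in received_record:
            issues.append(f"missing field: {field}")
        elif not isinstance(received_record[field], expected_type):
            issues.append(f"wrong type for {field}: {type(received_record[field]).__name__}")
        elif received_record[field] != sent_record.get(field):
            issues.append(f"value changed in transit: {field}")
    return issues

# Example: the CRM sends an order; the fulfillment system receives it.
sent = {"order_id": "A-1001", "amount": 99.5, "currency": "USD"}
received = {"order_id": "A-1001", "amount": "99.5", "currency": "USD"}  # amount arrived as text

print(measure_exchange(sent, received))  # ['wrong type for amount: str']
```

Aggregating checks like this across every exchange point is one way to think about the "measurement depth" of an AI operating on a tech stack rather than on a physical signal.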
Evaluating measurement efficacy starts with the AI's ability to measure regardless of operating system, software version, device, or interface mechanism. The more conditions under which the AI cannot measure, the less impactful it will be in operation.
Conclusion
Whenever you assess the potential of anything, start with the foundation. At the foundation of every AI system is its ability to measure. The more it can measure, the greater its potential impact. Look at what it is capable of measuring and, more importantly, where it is not. Limited sensing means limited AI algorithm potential. Lord Kelvin's old adage still stands true today: "if you cannot measure it, you cannot improve it." To understand the true power of any AI, start by analyzing its measurement breadth and depth.