If the BBM BlackBerry ringtone can still summon a sense of nostalgia, then the “technology company” era may be within living memory. Buzzwords come and go, and the newest is a three-way battle between AI, algorithms and machine learning. In many cases the three words are used near-interchangeably, like flinging every coin into a fruit machine during an elevator pitch in the hope of an investment jackpot.
As examined in our previous post about AI hallucinations (which can be read here), many aspects of AI are still works in progress. Using the technology can feel authoritative, but caution is still needed as to its usage, and the weight given to its outputs.
Artificial intelligence (AI) has been depicted in fiction, in philosophy and, since large language models (LLMs) such as ChatGPT became freely accessible to the general public, in many fevered articles. Though multiple comparisons have been made to Skynet of the Terminator franchise, there has been serious confusion about what is called AI, and what makes it different from what came before it. For ChatGPT to run, algorithms first had to do the walking, and the calculations.
To really begin at the beginning, the word algorithm derives from the name of al-Khwârizmî, the ninth-century scholar best known for his textbooks on algebra and arithmetic, to whom the word algebra also traces its roots. His name, when rendered into Latin, became Algorithmi, which has gone on to trouble many a spell-check.
As with any new technology, and one whose uses are as disparate as the algorithm’s in computing, the definition will undergo a certain amount of flux. Then there is the term’s use in common parlance, which sometimes attributes to an algorithm a sense of agenda, or even intention, that needs to be managed or evaded: for example, stuffing a website with SEO terms to gain more traffic from Google’s search engine, or choosing the right set of hashtags so that an influencer gains more ad revenue on a brand deal.
According to the Cambridge Dictionary, an algorithm is:
“a set of mathematical instructions or rules that, especially if given to a computer, will help to calculate an answer to a problem”
The Merriam-Webster definition is a little more descriptive and says:
“a procedure for solving a mathematical problem (as of finding the greatest common divisor) in a finite number of steps that frequently involves repetition of an operation.
broadly : a step-by-step procedure for solving a problem or accomplishing some end”
In a contest of definitions, it is telling that Practical Law maintains two definitions of algorithm: one for the USA and one for the UK.
UK: A set of computational rules to be followed in calculations and in other problem solving. Computers use algorithms to list the detailed instructions for carrying out an operation.
USA: A defined sequence of instructions to be followed by a computer typically to solve a problem or calculation.
To think more critically about what these definitions describe, and how such a function operates, is to move down the path of mathematics and computer science. In simplified terms, an algorithm is a specific (finite) set of instructions, which can be different kinds of computations or decisions, applied to sets of data or actions (these can be numbers, text or images) to create a result or outcome. If the algorithm is well made, this result or outcome will be what was intended in the first place; otherwise it may result in an error.
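To make that concrete, consider the example Merriam-Webster itself cites: finding the greatest common divisor. Euclid’s algorithm does so in a finite number of steps by repeating a single operation. Below is a minimal sketch in Python; the function name and the printed example are ours, purely for illustration.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeat one operation (taking the remainder)
    until nothing is left to divide. Finite instructions, repeated
    operation, determined outcome."""
    while b != 0:
        a, b = b, a % b  # replace (a, b) with (b, a mod b)
    return a

print(gcd(48, 18))  # 6: the greatest common divisor of 48 and 18
```

Each pass through the loop is one “step” in the dictionary’s sense: the instructions are finite, the repetition is explicit, and the outcome is determined entirely by the input.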
Algorithms are mainly identified within data processing, where the function and the method of processing are set in a defined language. This can be natural language, flowcharts, computer programming languages, formal logic or mathematical equations. But it need not always be as simple as 1 + 1 = 2. In more advanced “code” the instructions can be conditional, such as the classical formal-logic “if”–“then” statement, and layers of conditional instructions can be combined to process information towards multiple outcomes (a sketch follows below). This is where algorithms can be used effectively in automated systems, where a sequence of decisions is made on the input information and the output is, in effect, more data. The simpler and more scalable the algorithm, the less computing memory it needs to be effective.
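As a sketch of such layered “if”–“then” decisions, the toy routine below turns two pieces of input data into an output label, which is itself just more data. The scenario, thresholds and categories are invented for illustration only.

```python
def route_message(word_count: int, mentions_refund: bool) -> str:
    """A toy automated decision: layered if/then conditions map
    input data to an outcome. All thresholds are invented."""
    if mentions_refund:
        if word_count > 200:
            return "escalate to a human agent"
        return "send the refund form"
    if word_count > 200:
        return "queue for detailed review"
    return "send the standard reply"

print(route_message(word_count=50, mentions_refund=True))    # send the refund form
print(route_message(word_count=300, mentions_refund=False))  # queue for detailed review
```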
How an algorithm’s decisions or calculations are applied can result in very different kinds of output, and these outputs are categorised by the kind of task the algorithm is intended to perform.
A short list of examples by Tech Target demonstrates the many applications algorithms can have:
- Encryption algorithm. This computing algorithm transforms data according to specified actions to protect it. A symmetric key algorithm, such as the Data Encryption Standard, for example, uses the same key to encrypt and decrypt data. If the algorithm is sufficiently sophisticated, no one lacking the key can decrypt the data.
- Greedy algorithm. This algorithm solves optimization problems by finding the locally optimal solution, hoping it is the optimal solution at the global level. However, it does not guarantee an optimal solution overall.
- Recursive algorithm. This algorithm calls itself repeatedly until it solves a problem. Recursive algorithms call themselves with a smaller value every time a recursive function is invoked.
- Dynamic programming algorithm. This algorithm solves problems by dividing them into subproblems. The results are then stored to be applied to future corresponding problems (the sketch after this list contrasts this approach with a plain recursive one).
- Brute-force algorithm. This algorithm iterates all possible solutions to a problem blindly, searching for one or more solutions to a function.
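To illustrate the recursive and dynamic programming entries above, the sketch below computes Fibonacci numbers twice: once by plain recursion, and once with the result of each subproblem stored and reused, the hallmark of dynamic programming. Fibonacci is our choice of example, not Tech Target’s.

```python
from functools import lru_cache

def fib_recursive(n: int) -> int:
    """Recursive: calls itself with smaller values until a base case."""
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

@lru_cache(maxsize=None)
def fib_dynamic(n: int) -> int:
    """Dynamic programming: the same recursion, but each subproblem's
    result is stored (memoised) and reused rather than recomputed."""
    if n < 2:
        return n
    return fib_dynamic(n - 1) + fib_dynamic(n - 2)

print(fib_recursive(20))  # 6765, recomputing the same subproblems many times
print(fib_dynamic(20))    # 6765, computing each subproblem exactly once
```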
According to IBM, machine learning is a subset of algorithmic application: a complex arrangement of algorithms that aims to use data to imitate how humans learn. It is also a branch of AI, and it is here, where algorithms applied in new ways meet the amounts of data and computing power now available, that outcomes are being generated beyond anything al-Khwârizmî may have dreamed.
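The step from a fixed rule to a learned one can be shown in miniature. In the sketch below, nothing tells the program that the answer is “multiply by two”; it nudges a single parameter until the data itself reveals the pattern. The data points, learning rate and iteration count are invented purely for illustration, and real machine-learning systems adjust millions or billions of such parameters.

```python
# A deliberately tiny sketch of "learning from data": fit y = w * x
# to example points by repeatedly nudging w to reduce the error.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x

w = 0.0  # the parameter the algorithm "learns"
for _ in range(1000):
    # average gradient of the squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad  # step against the gradient

print(round(w, 2))  # approximately 2.04: the pattern found in the data
```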
In another post we will examine the many definitions and regulatory approaches being taken with AI as its implications begin to be felt internationally.
Please contact Joanna Coombs-Huang if you have any questions or concerns regarding data and AI technologies.
The material in this article is only for general review of the topics covered and does not constitute legal advice. No legal or business decision should be based on its content.
This article is written in the English language. Preiskel & Co LLP is not responsible for any translation of all or part of its content into any language.