NATURE-INSPIRED RECOMMENDATION ENGINES



Thought Leadership | Author: Patricio Blasco, Head of Strategy, Digitas APAC


Humans have demonstrated throughout history a quest for inspiration to improve their quality of life. Given that nature had the advantage of millions of years of evolution, it’s logical that we got to benefit from its knowledge and creations. The flight of birds, whales swimming miles and miles underwater, or dolphins using a sonar system for navigation are just some examples of nature-inspired human innovations. More recently, artificial neural networks have taken inspiration from how the biological brain works. 

However, when we analyse the most commonly used recommendation engines and the algorithms that enable them, these appear to be fully human inventions. Mapping out similarities between two people reflected on datasets, and a very simple ‘people like you, like that’ logic is how they tend to operate. And even though we managed to develop powerful data-driven algorithms for recommendation engines, the current approach doesn’t seem to encompass the complexity of the human decision-making process. Fortunately, nature again might be able to show us the right way to do this.

HOW DO RECOMMENDATION ENGINES WORK?

Today, every major online service uses recommendations driven by an underlying algorithm to suggest relevant content, products, or services. Amazon, Spotify, Netflix, and YouTube, to name a few, rely on these systems to deliver on their promise of a customer-centric experience. Combining data with the power of the recommendation engine is the foundation of their personalised experiences. 

When we think about personalisation, we visualise these companies recommending content or products that are unique to each user. However, according to research from Salesforce, while 66% of customers expect companies to understand their unique needs and expectations, only 34% feel that companies generally treat them as individuals rather than as numbers.

The success of these models rests directly on the quality and quantity of data, which powerful algorithms can then shape into a real-time strategy. As a result, each customer should receive recommendations that align with their individual interests and needs. Customers, for their part, usually follow these recommendations because they get to skip the decision-making process altogether: a process that is energy-consuming, stressful, and prone to decision fatigue. 

This behaviour is related to a psychological state known as the ‘comfort zone’. In this state, stress levels are low because we are in a familiar environment. Automating repetitive tasks to limit our decisions and embracing familiarity are ways to feel encouraged and relieved. Algorithm-based recommendation engines work under the same logic: based on data inputs and analysis, they offer ‘extended’ yet ‘familiar’ recommendations to customers, presumably helping to solve the decision dilemma. 

Fundamentally, these models should help customers discover new experiences, content, or products based on their past behaviour, predicting their future interests or needs. To achieve this, they analyse other people’s behaviours and data patterns. The more customers use the model, the better it becomes; the better it gets, the more people use it. When done right, recommendation engines should enable better choices and alternatives.
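The ‘people like you, like that’ logic described above can be sketched as a minimal user-based collaborative filter. The ratings matrix, scores, and function names below are invented for illustration; no real platform works from anything this small.

```python
from math import sqrt

# Hypothetical user-item rating matrix (rows: users, columns: items);
# 0 means the user has not yet interacted with that item.
ratings = [
    [5, 4, 0, 1],
    [4, 5, 0, 2],
    [1, 0, 5, 4],
]

def cosine_sim(a, b):
    """Similarity between two users' rating vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

def recommend(user, top_n=1):
    """Score each unseen item by the similarity-weighted ratings of other users."""
    sims = [cosine_sim(ratings[user], other) for other in ratings]
    scores = {}
    for item in range(len(ratings[0])):
        if ratings[user][item] == 0:  # only recommend unseen items
            scores[item] = sum(sims[u] * ratings[u][item]
                               for u in range(len(ratings)) if u != user)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

Notice that the model never asks why a user liked something; it only needs other users whose rating vectors point in a similar direction.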

THE REAL IMPACT OF RECOMMENDATION ENGINES

The problem is the impact they might have on our individual decision-making process, given how easily influenced we are as humans. Using data and analytics to form recommendations may transform how people perceive, experience, and make choices. This can affect how open we are to exploring new brands, services, or even content. In summary, customers might delegate their decision-making process, trusting that the results will align with their needs and interests.

Research from Netflix showed that 75% of viewer activity came from its algorithmic recommendations. Even before we were ruled by content platforms, consumers had shown a tendency to gravitate towards previously ‘validated’ decisions. 

A 2006 research study built an artificial music market comparing two groups of participants who had to predict the success of unknown bands and songs. One group made their decisions independently, while the other could also see how many times each song had already been downloaded. The results showed greater inequality of choices in the second group: popular songs became more popular, while unpopular songs were left largely unheard. 
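The rich-get-richer dynamic the study observed can be reproduced in a toy simulation. The parameters below are invented for illustration, not the study’s actual protocol: when each listener’s choice is weighted by prior downloads, the top song ends up with a far larger share of the market than when choices are independent.

```python
import random

def simulate_market(n_listeners, n_songs, social_influence, rng):
    """Toy version of an artificial music market."""
    downloads = [0] * n_songs
    for _ in range(n_listeners):
        if social_influence:
            # choice weighted by prior downloads (+1 so every song can start)
            weights = [d + 1 for d in downloads]
        else:
            # independent condition: every song equally likely
            weights = [1] * n_songs
        choice = rng.choices(range(n_songs), weights=weights)[0]
        downloads[choice] += 1
    return downloads

def mean_top_share(social_influence, trials=50):
    """Average market share captured by the most-downloaded song."""
    rng = random.Random(42)
    return sum(max(simulate_market(500, 10, social_influence, rng)) / 500
               for _ in range(trials)) / trials
```

Even with all songs identical in quality, the social-influence condition concentrates downloads: early random luck gets amplified into a winner, which is exactly the inequality the experiment measured.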

Even the YouTube algorithm works under the same logic. YouTube’s recommendation system sees a user essentially as the history of all the videos they have watched. This means that people with similar viewing histories are likely to end up watching similar content. 

Using big data and algorithms is undoubtedly valid, but how biased is this approach when it comes to understanding the reasons or desires behind anyone’s decisions? And are these models enhancing our ability to decide within a relevantly extended set of options, or just dictating choices based on the alternatives that fit our profile?

THE REAL-LIFE SCENARIO: WHAT IS HAPPENING? 

According to recent research, customers are more willing to trade their data with brands if this translates into better personalisation. However, it does not always work out that way in practice. Sometimes brands cannot collect enough data to drive good personalisation strategies and consequently end up formulating predictive rules based on limited information. In principle, personalisation is impossible if brands don’t have the means to understand their customers through data.

As a result, when you combine the intrinsic human impulse to avoid the stress of decision-making with brands’ limited access to behavioural data, there is a risk of ending up with a closed system. Instead of expanding our choices and driving discovery and exploration, we might just be limiting ourselves: a self-reinforcing design that prevents new brands, products, or content from being seen or known if they diverge from the recommended paths.

THE OPPORTUNITY FOR A NEW TYPE OF ALGORITHM

Maybe it’s time we consider new types of algorithmic rules that encourage people to step out of their comfort zone, beyond an initial set of familiar recommendations and onto a path of discovery. A disruption algorithm that is more aligned with the complexity of our decisions. A model that rewards both the predictive accuracy of whether an individual user will like something and the exploratory behaviour that expands the spectrum of recommendations for an entire group. It would incorporate a positive value for the discovery process into the algorithm’s calculations, helping customers to explore beyond the familiar recommendations that feel so easy to choose. 
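One minimal way to encode such a positive value for discovery is to add a novelty bonus to the predicted-affinity score when re-ranking candidates. The field names and weight below are assumptions for the sake of the sketch, not an existing system’s API.

```python
def rerank_with_discovery(candidates, familiar_categories, discovery_weight=0.3):
    """Re-rank items by predicted affinity plus a bonus for unfamiliar categories."""
    def score(item):
        # novelty is 1 when the item sits outside the user's comfort zone
        novelty = 0.0 if item["category"] in familiar_categories else 1.0
        return item["affinity"] + discovery_weight * novelty
    return sorted(candidates, key=score, reverse=True)

# Hypothetical candidates: 'a' fits the user's history, 'b' would be a discovery.
items = [
    {"id": "a", "affinity": 0.9, "category": "pop"},
    {"id": "b", "affinity": 0.7, "category": "jazz"},
]
ranked = rerank_with_discovery(items, familiar_categories={"pop"})
```

With the discovery weight at 0.3, the unfamiliar jazz item outranks the safe pop pick; set the weight to 0 and the model collapses back into the pure ‘familiar first’ ordering.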

As usual, there is already a more sophisticated version of this that we can find in nature. Ants tend to be considered among the most intelligent insects, but a single one on its own is pretty dull. Only when operating together do they work with remarkable efficiency. When scouting ants leave their nest in search of food, they leave behind a pheromone trail. After finding a new food source, they can follow the trail back to the nest. When other ants later come across this pheromone trail, they may abandon their random search for food and follow it directly to the source. 

The value of leaving a trail is as important as the exploration itself. A new type of recommendation engine might be the answer to keeping humans from becoming passive followers of someone else’s path. Just like ants, the solution may be to value exploratory behaviour first, and only then push others to follow.
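The ant analogy can be made concrete with a toy pheromone model (the rewards and evaporation rate below are invented for illustration): scouts deposit trail strength proportional to the value they found, old trails evaporate so stale recommendations fade, and followers take the strongest trail.

```python
def deposit_trails(rewards, rounds=5, evaporation=0.5):
    """Each round, scouts revisit every item: existing trails evaporate,
    then fresh pheromone equal to the reward found there is deposited."""
    pheromone = [0.0] * len(rewards)
    for _ in range(rounds):
        pheromone = [evaporation * p + r for p, r in zip(pheromone, rewards)]
    return pheromone

def follow(pheromone):
    """A follower takes the strongest trail left by the explorers."""
    return max(range(len(pheromone)), key=pheromone.__getitem__)

# Hypothetical item values discovered by scouts; the middle item is best.
trails = deposit_trails([0.2, 0.9, 0.4])
```

Evaporation is the key design choice: it keeps the system from freezing on yesterday’s winner, so continued exploration stays valuable rather than being drowned out by the existing trail.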


** Salganik, M. J., Dodds, P. S., & Watts, D. J., ‘Experimental Study of Inequality and Unpredictability in an Artificial Cultural Market’, Science, February 2006



