Accelerating the speed of data insights

Creating Unique Suggestions for Every User


Personalization is the holy grail of engagement. And it’s no wonder: Harvard Business Review reports that personalized user experience can deliver five to eight times the return on investment of marketing dollars and improve sales by 10% or more.

While personalized content builds deeper relationships and a better understanding of users, the mass of data required to create effective recommendations is daunting. Enter artificial intelligence (AI) engines with advanced data center infrastructures and high-performance memory and storage solutions.

These recommendation engines now dominate the online experience, and the biggest example is Amazon. According to a McKinsey report, over 35% of the retail giant’s sales come from recommendations. And these engines power more than shopping: Streaming sites display movies or shows users are likely to be interested in, job searches display opportunities users are qualified for, and news and social feeds are populated with relevant content.

For streaming, three out of four Netflix® users choose movies suggested by its recommendation engine, and 80% of Netflix’s overall stream time is driven by these suggestions. Services like Hulu™ have added “like” and “dislike” functions to give users more control over the recommendations they see.

Behind the scenes, data centers are creating this highly personalized internet. The algorithms are so sophisticated that recommendations have become the user experience. And like so many other advanced technologies, recommendation engines would not exist without memory and storage solutions like those that Micron produces.

What is a recommendation engine?

Simply put, recommendation engines are systems that suggest information based on the rating or preference a user would likely give an item.

It’s all about the data. For recommendation engines, the more data, the more accurate the results. When a suggestion is given, it’s been filtered in one of the following ways:

  • Generic: The simplest filter identifies items that are similar to what a user searched for or what is most popular.
  • Content: This filter examines user history, identifies keywords describing the choices, and makes suggestions of similar content.
  • Collaborative: Based on history, a user is assigned to a group. The items liked by other members of the group are presented.
  • Ensemble: This approach uses a combination of multiple filters.

These filters are listed in order of increasing complexity. The ensemble approach is the most accurate, requires the most data, and is the most difficult to execute.
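To make the collaborative approach concrete, here is a minimal sketch in Python. The article describes no specific implementation, so the toy ratings, user names and choice of cosine similarity below are illustrative assumptions. The idea is to weight other users’ ratings by how similar their tastes are, then suggest items the user has not yet rated:

```python
import math

# Toy user-item ratings (5 = loved it, 0 = not yet rated); purely illustrative.
ratings = {
    "ana":  {"film_a": 5, "film_b": 3, "film_c": 0},
    "ben":  {"film_a": 4, "film_b": 0, "film_c": 4},
    "cara": {"film_a": 1, "film_b": 1, "film_c": 5},
}

def cosine(u, v):
    """Cosine similarity between two users' rating vectors."""
    keys = u.keys() & v.keys()
    dot = sum(u[k] * v[k] for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user, k=1):
    """Suggest unrated items, weighting each neighbor's rating by similarity."""
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], theirs)
        for item, r in theirs.items():
            if ratings[user].get(item, 0) == 0 and r > 0:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

A production engine does the same arithmetic across millions of users and items, with learned representations rather than raw ratings, which is where the data and memory demands come from.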

In the case of a streaming media, to make accurate recommendations, the engine requires data on a film’s genre, the synopsis, the actors and directors, the user’s movie-watching history — and all this same data on a huge pool of people with similar watching habits. It then layers on reviews, social comments and even language from the screenplay. It’s a lot of data, and a massive amount of memory and storage is required to handle these workloads.

How do memory and storage technologies like Micron’s fuel recommendation engines?

1. Data collection
Every later phase depends on raw data, so the engine continuously gathers user interactions (searches, views, ratings and purchases) as they happen.

2. Filtering and preprocessing
The machine learning system holds the histories and actions of millions of customers, and it is constantly updating. This data is often captured in an unstructured form. Before the data can be useful, it must be filtered, distilled to the key information and organized in an efficient way.

Imagine that finding the data point you need in unstructured data is like searching for Waldo in the popular “Where’s Waldo?” children’s books, only the crowds of people in silly circumstances are moving. Poor Waldo may never be found. Now imagine that all the people surrounding Waldo are standing still and organized into a grid pattern. Finding Waldo would be easier (though arguably less fun). Filtering and preprocessing data is essentially organizing the chaos of a moving crowd into orderly lines and grids.

Organizing data is a problem best solved by CPUs and supported by server DRAM, such as DDR5, which temporarily holds the data being preprocessed and feeds it rapidly to the processor. Fast NVMe™ SSDs store the data once it is processed and structured; it is then used for AI training.
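As a rough illustration of this phase, the short Python sketch below turns a few free-form event lines into an orderly per-user grid. The log formats and field names are invented for this example; real pipelines handle far messier inputs at far larger scale:

```python
import re
from collections import defaultdict

# Raw, unstructured event lines as they might arrive from many services
# (formats vary line to line; these examples are invented).
raw_events = [
    "2024-05-01 user=42 watched 'Dog Days' genre=comedy",
    "user=42 rated 'Dog Days' 4/5 on 2024-05-02",
    "2024-05-02 user=7 watched 'Dog Days' genre=comedy",
]

def preprocess(lines):
    """Filter and structure: keep only lines carrying a user id and a title,
    then organize them into a per-user grid of (title, action) records."""
    by_user = defaultdict(list)
    for line in lines:
        user = re.search(r"user=(\d+)", line)
        title = re.search(r"'([^']+)'", line)
        if not (user and title):
            continue  # filter out records missing the key fields
        action = "rated" if "rated" in line else "watched"
        by_user[int(user.group(1))].append((title.group(1), action))
    return dict(by_user)
```

The output is the “grid of standing-still people”: every record now sits in a predictable place, ready to feed the training phase.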

3. Training
Here AI teaches the engine to recognize content. For example, a system might analyze billions of images until it learns how to recognize a dog. This requires passing pieces of data hundreds — or thousands — of times through the training system. And the model is retrained with updated databases on a regular basis, as new data flows in and users interact. This process requires extremely powerful, flexible data centers to run complex training algorithms. Forms of high-bandwidth memory, such as Micron’s family of Ultra-Bandwidth Solutions, feed data over and over again at super-high speeds to the graphics processing unit (GPU) or CPU, which makes the logical connections to create the AI algorithm. The demand for more memory in the training process continues to grow as the amount of data grows and the AI algorithms get more complex. But it’s not just more memory that’s required. It’s new memory that will bring about smarter and faster AI — new memory that moves 2 bits of information down each wire rather than 1, for example, or memory that is stacked in 3D and moved so close to the processing unit that it’s in the same chip package. Micron is trailblazing these new memory innovations.
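The repeated passes described above can be sketched with a toy matrix-factorization training loop in Python. Real training runs on GPU clusters with specialized frameworks, so everything below (the dimensions, learning rate and epoch count) is a scaled-down illustration rather than how any production system is built:

```python
import random

def train_factors(ratings, n_users, n_items, dim=2, lr=0.05, epochs=300, seed=0):
    """Toy training loop: learn a small embedding for every user and item so
    their dot product approximates observed ratings. Each epoch is one more
    pass of the same data through the model, which is why training hardware
    must be fed the data over and over at high speed."""
    rng = random.Random(seed)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(dim)] for _ in range(n_users)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(dim)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(P[u][d] * Q[i][d] for d in range(dim))
            err = r - pred
            for d in range(dim):  # gradient step on both embeddings
                pu, qi = P[u][d], Q[i][d]
                P[u][d] += lr * err * qi
                Q[i][d] += lr * err * pu
    return P, Q
```

After enough passes, the learned embeddings reproduce the observed ratings, and they become the model that the inference phase queries.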

4. Recommendation
Next is inference, when a trained system is asked whether a movie has a dog in it. Once it recognizes a dog, it can make a recommendation. This may be done millions of times a minute by different users and can happen in the data center or close to the end users, sometimes right on their phones or laptops. High-performance memory ensures that recommendations are made quickly enough to be meaningful to the user and profitable for the provider.
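Inference itself can be as cheap as a dot product between a user’s trained embedding and each candidate item’s embedding, which is part of why it can run millions of times a minute. This hypothetical Python sketch assumes embeddings like those a training phase might produce; the item names are invented:

```python
def infer(user_vec, item_vecs, exclude=()):
    """Score every candidate item for one user with a dot product against
    trained embeddings and return the best-scoring new item. This is the
    latency-critical step, so the embeddings live in fast memory."""
    best, best_score = None, float("-inf")
    for item, vec in item_vecs.items():
        if item in exclude:
            continue  # skip items the user has already seen
        score = sum(u * v for u, v in zip(user_vec, vec))
        if score > best_score:
            best, best_score = item, score
    return best
```

Because each lookup is so small, the bottleneck is how fast memory can serve embeddings to the processor, not the arithmetic itself.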

5. Optimization
User interactions with recommendations are fed back into the data collection phase to continually optimize future recommendations, enabling the engines to learn and become more accurate.

Memory and storage play a role in each phase of the recommendation engine process by reducing the time it takes to retrieve and move data, by keeping the processing units satiated with the data they need and by storing the vast and growing ocean of data created each day. Without products like those that Micron manufactures, creating recommendation engines would be impossible.

What is the future of recommendation engines?

Recommendation engines have changed the user experience — and business model — of online services. It makes sense, then, that sites are looking for new ways to employ recommendations on their platforms.

For instance, Ben Allison, Amazon machine learning scientist, notes that past user events are not of equal importance. Understanding that customer behavior is incredibly complex, Amazon now tasks neural networks to discern the importance of a past behavior (based on context and time) and give it an “attention score.” These attention scores become a key part of a more sophisticated recommendation algorithm.
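Amazon’s actual attention model is a proprietary neural network, so the following Python sketch only captures the intuition: it scores past events higher the more recent they are, using a simple exponential decay as a stand-in for a learned attention score. The half-life and event format are invented for illustration:

```python
import math

def attention_scores(events, now, half_life_days=30.0):
    """Illustrative stand-in for learned attention: weight each past event
    by how recently it happened, so not all history counts equally.
    `events` is a list of (item, day) pairs; `now` is the current day."""
    scores = {}
    for item, day in events:
        age = now - day
        # Weight halves every `half_life_days`; a real system would learn
        # this weighting from context, not just elapsed time.
        w = math.exp(-math.log(2) * age / half_life_days)
        scores[item] = scores.get(item, 0.0) + w
    return scores
```

With this weighting, an item browsed yesterday can outrank one bought twice months ago, mirroring the idea that context and timing determine how much a past behavior should matter.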

In addition, Amazon has learned that perfectly “predictable” forecasts are not ideal. By adding some “randomness,” it has been able to replicate the serendipitous discovery that shoppers enjoy. So today, Amazon’s recommendations are driven more by AI “decision-making” than by plain prediction.
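How the randomness is injected is not public; a standard stand-in is epsilon-greedy mixing, sketched below in Python, where the ranked list occasionally yields to a random catalog item. The names and the epsilon value are illustrative:

```python
import random

def recommend_with_serendipity(ranked, catalog, epsilon=0.2, rng=None):
    """With probability `epsilon`, surface a random catalog item the ranking
    would not have chosen, so users occasionally stumble onto something new;
    otherwise return the top-ranked item as usual."""
    rng = rng or random.Random()
    if rng.random() < epsilon:
        extras = [c for c in catalog if c not in ranked]
        if extras:
            return rng.choice(extras)
    return ranked[0]
```

Tuning epsilon trades short-term click-through for discovery, which is one way “decision-making” differs from pure prediction.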

Some sites are having human editors interact in real time with recommendation engines to make algorithms even more accurate. At Hulu, for instance, “a team of content experts will work more closely together, creating additional curated collections that are more personalized for viewers.”

And Netflix is using recommendation algorithms to define its catalog of movies and TV shows by learning characteristics that make content successful: “We use it to optimize the production of original movies and TV shows in Netflix’s rapidly growing studio. It also powers our advertising spend, channel mix, and advertising creative so that we can find new members who will enjoy Netflix.”

For current and future recommendation functionality, maximum data volume and maximum speed are critical. The data storage, AI training and inference of recommendation engines require both high-performance and low-power memory and storage.

Micron’s broad portfolio of solutions spans the requirements of recommendation engines — from high-bandwidth memory and accelerators for intensive training, to standard memory for inference, to high-capacity storage for a variety of data. Chances are good that if a shopper is recommended the perfect Christmas gift, or a viewer the perfect show to watch, Micron memory and storage were involved in that recommendation along the way.

Download the Infographic

Learn more about Micron products that make recommendation engines possible.

Micron is a registered trademark of Micron Technology, Inc. All other trademarks and registered trademarks referenced in this article are the property of their respective owners and are included for reference only. Inclusion of other trademarks, registered trademarks, or brands does not constitute an endorsement or promotion by Micron or signify a business relationship, though one may exist.
