

GenAI Functionality Continues to Impress with Additional Features and Assistants

Spending a week at AWS re:Invent was a chance to catch up with the latest and greatest, not only on AWS but also in the world of generative AI. To think, ChatGPT was announced the same week as re:Invent 2022, and oh, how times have changed. It was honestly difficult to find a session or booth that wasn’t talking about generative AI in one way or another.

Here are the three things I took away from AWS re:Invent.

1. Impressive Generative AI Functionality

AWS has incorporated some impressive generative AI functionality into its tools and services that should enhance the developer experience. You have likely seen the AI assistant ‘Q’, which can live in your console and help you with a myriad of topics. I think users new to AWS will find it particularly helpful as they ramp up on the platform’s nuances. Another interesting feature you may not have heard about in the news touches on something I had personally been wondering about for a while: how LLMs would coexist with more traditional chatbot technologies such as AWS Lex. I had the opportunity to attend a session on exactly that, and I was very impressed by how the two were combined. You can read about it in more detail here, but, in summary,

  1. you can now use generative AI to help build out your traditional chatbot workflows, for example by answering the question ‘what are all the ways a user might phrase this intent?’, and
  2. you can now use LLMs combined with a RAG approach to answer FAQ-style questions from your own documents.

There are a lot of really cool features included here, and I was impressed with the elegance of how they were implemented, so I definitely recommend checking it out if you are a Lex user. Kudos to the AWS team.
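The FAQ feature in item 2 boils down to a simple pattern: retrieve the passages most relevant to the question, then ground the model’s answer in them. Here is a minimal, illustrative sketch of that flow; the documents, the keyword-overlap scoring (a stand-in for a real vector search), and the prompt template are my own placeholders, not the actual Lex or Bedrock implementation.

```python
# Minimal sketch of the RAG pattern: retrieve relevant passages,
# then include them in the prompt sent to the LLM.
# Documents, scoring, and prompt template are illustrative only.

FAQ_DOCS = [
    "Refunds are processed within 5 business days of a return.",
    "Our support line is open Monday through Friday, 9am to 5pm.",
    "Premium accounts include unlimited storage and priority support.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the question
    (a real system would use embeddings and a vector store)."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    """Assemble the grounded prompt the LLM would receive."""
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How long do refunds take?", FAQ_DOCS)
```

Because the model is told to answer only from the retrieved context, questions outside the documents can be declined rather than hallucinated, which is the whole appeal of the approach.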

2. Amazon Bedrock Helps You Build with GenAI

It is becoming very easy to build something with a generative AI model, and AWS has clearly spent a ton of effort making it as easy as possible. Services like Amazon Bedrock make building with generative models from a multitude of model providers about as easy as calling any other API, and because it bills on token consumption, it is relatively accessible for smaller use cases. There were certainly plenty of sessions describing or showing attendees how they could build with Bedrock. The other main theme running through the generative AI sessions was RAG (Retrieval Augmented Generation), which grounds a model’s answers in documents retrieved at query time. I believe, and AWS seems to agree, that it is a great way to reduce hallucinations in large language models.
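To give a feel for how thin that integration layer can be, here is a hedged sketch of calling a model through boto3’s `bedrock-runtime` client. The model ID and the Anthropic “messages” request shape are assumptions on my part; check the Bedrock model catalog for the IDs and formats available in your account and region.

```python
import json

# Assumed model ID; availability varies by account and region.
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

def build_request(prompt: str, max_tokens: int = 256) -> str:
    """Serialize a chat-style request body in Bedrock's
    Anthropic messages format."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

def ask_bedrock(prompt: str) -> str:
    """Send the prompt to Bedrock and return the model's text.
    Requires AWS credentials with Bedrock model access; billed
    per input/output token."""
    import boto3  # imported here so the sketch runs without AWS set up
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(modelId=MODEL_ID, body=build_request(prompt))
    return json.loads(response["body"].read())["content"][0]["text"]
```

The appeal is that swapping providers is mostly a matter of changing the model ID and request body, while auth, hosting, and metering stay with AWS.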

3. We’re Still in the 1st Inning of the Ball Game

I recall in a few different sessions a facilitator admitting that there really weren't any best practices at this point, or that "no one really knows the answer" to various questions. I found that intellectual humility refreshing as well as extremely exciting. I had the opportunity to attend a few sessions on topics that I don’t hear many people talking about, but that are certainly important to have a handle on if you are going to leverage LLMs: measuring and safeguarding against toxicity in LLM responses, LLMOps, and optimizing LLMs for speed and efficiency. There is still so much to be figured out in generative AI, and I personally have no reason to believe the rate of innovation will slow down.

I can’t help but feel a little excited about the future, and I wish I could have attended more sessions. All in all, it was a great re:Invent, and I cannot wait to see what another year of innovation brings.