
AWS Bedrock makes building generative AI applications far easier for developers. With the emergence of powerful foundation models (FMs), developers can leverage capabilities like text generation, code completion, and question answering. Bedrock is a fully managed service that makes high-performing FMs from leading AI startups (AWS Bedrock partners) and Amazon available through a single, unified API. Think of it as a one-stop shop for accessing powerful AI models to build generative AI applications, all while maintaining security, privacy, and responsible AI practices. Managing and customizing these models on your own, however, can be complex and resource-intensive.

We covered what AWS Bedrock is, foundation models, and pricing in the previous AWS Bedrock blog. Now we will look at how to use AWS Bedrock and walk through an example.

AWS Bedrock Offerings:

Before jumping into the use cases and technical stuff, here's a basic breakdown of the AWS Bedrock offerings for you to get the hang of Amazon Bedrock.

  • Access to top foundation models: Choose from a variety of FMs from companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon, each with its own strengths and specializations.
  • Easy experimentation and evaluation: Try out different FMs for your specific use case without having to worry about managing infrastructure or code.
  • Private customization: Fine-tune FMs with your own data to make them more relevant and accurate for your specific needs. You can do this through techniques like fine-tuning and Retrieval Augmented Generation (RAG).
  • Agent building: Create AI agents that can execute tasks using your enterprise systems and data sources. This opens up possibilities for automating workflows, generating creative content, and more.
  • Serverless experience: Focus on building your applications without having to manage any underlying infrastructure. Amazon Bedrock takes care of everything for you.
  • Secure and easy integration: Deploy FMs and AI agents into your applications using familiar AWS tools and services.
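As a quick illustration of the first point, the available FMs can be enumerated programmatically through the Bedrock control-plane API. Below is a minimal sketch using boto3's `bedrock` client and its `list_foundation_models` call; the `filter_by_provider` helper and the region are our own illustrative choices, not part of the Bedrock API.

```python
def filter_by_provider(model_summaries, provider):
    """Return the model IDs whose providerName matches the given provider."""
    return [m['modelId'] for m in model_summaries
            if m.get('providerName') == provider]

if __name__ == '__main__':
    # boto3 is imported here so the helper above is usable without AWS credentials.
    import boto3

    # Note: 'bedrock' is the control-plane client (listing, customization);
    # 'bedrock-runtime' is the one used later for model invocation.
    bedrock = boto3.client(service_name='bedrock', region_name='us-east-1')
    summaries = bedrock.list_foundation_models()['modelSummaries']
    for model_id in filter_by_provider(summaries, 'Meta'):
        print(model_id)
```

Running this against your account prints only the Meta models, which is a handy way to confirm the exact model ID string before invoking one.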

Use cases of AWS Bedrock:

Let's cover the tasks you can hand to AWS Bedrock models when developing your generative AI application:

  • Text Generation: Create new pieces of original content, such as blog posts, social media posts, and web page copy.
  • Virtual assistants: Build assistants that understand user requests, automatically break down tasks, engage in dialogue to collect information, and take actions to fulfill the request.
  • Text & Image Search: Search and synthesize relevant information to answer questions and provide recommendations from a large corpus of text and image data.
  • Text summarization: Get concise summaries of long documents such as articles, reports, research papers, technical documentation, and even books to quickly and effectively extract important information.
  • Image Generation: Quickly create realistic and visually appealing images for ad campaigns, websites, presentations, and more.

'Text Search' Implementation using Meta's Llama 2 in AWS Bedrock

Here, we have selected the meta.llama2-13b-chat-v1 model ID for developing the text search generative AI use case.

The Prompt:

The prompt is "What is the difference between a llama and an alpaca?" It is sent to the Llama 2 Chat 13B model, which returns a response. The following code sends the prompt using AWS Bedrock's API.

import boto3
import json

llamaModelId = 'meta.llama2-13b-chat-v1'
prompt = "What is the difference between a llama and an alpaca?"

# Llama 2 inference parameters: response length cap, nucleus sampling, temperature.
llamaPayload = json.dumps({
    'prompt': prompt,
    'max_gen_len': 512,
    'top_p': 0.9,
    'temperature': 0.2
})

# The 'bedrock-runtime' client handles model invocation.
bedrock_runtime = boto3.client(
    service_name='bedrock-runtime',
    region_name='us-east-1'
)
response = bedrock_runtime.invoke_model(
    body=llamaPayload,
    modelId=llamaModelId,
    accept='application/json',
    contentType='application/json'
)
# The response body is a stream; read and decode it, then parse the JSON.
body = response.get('body').read().decode('utf-8')
response_body = json.loads(body)
print(response_body['generation'].strip())

Response:

Llamas and alpacas are both members of the camelid family, 
but they are different species with distinct physical and behavioural characteristics... 
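For longer generations you may not want to wait for the whole answer before showing anything to the user. The same `bedrock-runtime` client also offers `invoke_model_with_response_stream`, which delivers the text in chunks. The sketch below is our own illustration of that call; the `collect_stream` helper is not a Bedrock API, just a convenience for joining the streamed pieces.

```python
import json

def collect_stream(events):
    """Concatenate the 'generation' text carried in each streamed chunk."""
    pieces = []
    for event in events:
        chunk = event.get('chunk')
        if chunk:
            payload = json.loads(chunk['bytes'].decode('utf-8'))
            pieces.append(payload.get('generation', ''))
    return ''.join(pieces)

if __name__ == '__main__':
    import boto3

    bedrock_runtime = boto3.client(service_name='bedrock-runtime',
                                   region_name='us-east-1')
    response = bedrock_runtime.invoke_model_with_response_stream(
        body=json.dumps({
            'prompt': 'What is the difference between a llama and an alpaca?',
            'max_gen_len': 512,
            'top_p': 0.9,
            'temperature': 0.2
        }),
        modelId='meta.llama2-13b-chat-v1',
        accept='application/json',
        contentType='application/json'
    )
    # The text arrives incrementally instead of as one body.
    print(collect_stream(response['body']))
```

In a real application you would typically print or render each chunk as it arrives rather than joining them at the end.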

You can fine-tune the Llama model by providing a prompt dataset in a JSONL file, as in the example below. Make sure there is nothing after or under the last line; a trailing blank line or stray content will cause parse errors when you try to fine-tune the custom model.

{"prompt": "<prompt text>", "completion": "<expected generated text>"}
{"prompt": "<prompt text>", "completion": "<expected generated text>"}
{"prompt": "<prompt text>", "completion": "<expected generated text>"}
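To avoid the trailing-line problem mentioned above, the dataset can be written and sanity-checked programmatically. A minimal sketch; the `write_jsonl` and `validate_jsonl` helper names are our own, not part of any AWS SDK.

```python
import json

def write_jsonl(path, records):
    """Write one JSON object per line, with no trailing blank line."""
    with open(path, 'w', encoding='utf-8') as f:
        f.write('\n'.join(json.dumps(r) for r in records))

def validate_jsonl(path):
    """Return True if every line parses as JSON and has the expected keys."""
    with open(path, encoding='utf-8') as f:
        for line in f:
            record = json.loads(line)  # raises ValueError on a malformed line
            if not {'prompt', 'completion'} <= record.keys():
                return False
    return True
```

Running `validate_jsonl` before uploading the dataset catches malformed or incomplete lines early, instead of surfacing them as a failed fine-tuning job.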

End Note

This AWS Bedrock blog series has provided a fundamental understanding of the generative AI service, how it works, and its pricing. With this example, we have seen how to use AWS Bedrock and its models to develop a generative AI application; Meta's Llama 2 Chat 13B model was used to generate the answer to our prompt.

We, at Seaflux, are AI & Machine Learning enthusiasts, helping enterprises worldwide. Have a query, or want to discuss AI projects where AWS Bedrock can be leveraged? Schedule a meeting with us here, and we'll be happy to talk to you.

Jay Mehta - Director of Engineering
