Amazon Launches Bedrock: Generative AI Tool Now Available in France

    During its Paris summit this week, the head of AWS France announced that the Amazon Bedrock generative AI service is now available to French customers, along with additional models offered through the platform.

    At the beginning of 2023, Amazon unveiled its own generative artificial intelligence offering for its cloud customers: Amazon Bedrock. Until now, this tool has only been available in certain regions of its cloud service. The company took advantage of its Amazon Web Services Summit in Paris this week to officially launch it in the Europe (Paris) region. This means that French customers can use Amazon Bedrock while ensuring that the data processed by the tool remains in Europe, and benefit from lower latency than before. Amazon states that several hundred customers are already using its solution in France, including Air Liquide, EDF, Engie, and Accor.

    Amazon Bedrock is a tool that allows users to leverage various AI models available on the market, adapt them to specific scenarios with private data, and ultimately build applications on top of them. The service is accessible via an API, and the company assures that customer data is not used to train the models. That data can, however, be used within secure, customer-specific instances focused on a given context, improving the quality and relevance of responses.
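    To give an idea of what API access looks like in practice, here is a minimal sketch of an inference call from the new Europe (Paris) region (eu-west-3) using boto3, AWS's Python SDK. The model ID and request body shown are illustrative assumptions; the exact identifiers and request schemas depend on the models enabled in your account.

```python
# Minimal sketch: invoking a model hosted on Amazon Bedrock from eu-west-3 (Paris).
import json
import boto3

# Bedrock exposes two clients: "bedrock" for management calls and
# "bedrock-runtime" for inference.
runtime = boto3.client("bedrock-runtime", region_name="eu-west-3")

# Model ID and prompt format are assumptions for illustration
# (here, a Mistral instruct-style request).
response = runtime.invoke_model(
    modelId="mistral.mistral-7b-instruct-v0:2",
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "prompt": "<s>[INST] Summarise the benefits of regional hosting. [/INST]",
        "max_tokens": 200,
    }),
)

# The response body is a stream; read and decode the JSON payload.
print(json.loads(response["body"].read()))
```

    Because the call targets the Paris region, the request and the data it carries are processed in Europe, which is the point Amazon is emphasising with this launch.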

    The company offers its "in-house" models with Bedrock, such as Amazon Titan, as well as those from major market players like:

    • Meta's Llama 2
    • Anthropic's various Claude models
    • Stability AI's open-source image generation model, Stable Diffusion
    • The LLMs offered by the French company Mistral AI. Mistral has announced the availability of its new flagship model, Mistral Large, on Bedrock, a model previously available only on Azure. The company has already offered two of its models, Mistral 7B and Mixtral 8x7B, on the Amazon platform since February (a short listing sketch follows this list).
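    The catalogue of models actually exposed in a given region can be checked programmatically. The sketch below is an assumption-laden example using boto3's "bedrock" management client and its list_foundation_models call to group the models available in eu-west-3 by provider; field names follow the documented response shape, but availability will vary by region and account.

```python
# Minimal sketch: listing the foundation models exposed in the Paris region
# and grouping them by provider (Amazon, Meta, Anthropic, Stability AI, Mistral AI, ...).
import boto3

bedrock = boto3.client("bedrock", region_name="eu-west-3")

# list_foundation_models returns metadata for each model the region exposes.
models = bedrock.list_foundation_models()["modelSummaries"]

providers = {}
for model in models:
    providers.setdefault(model["providerName"], []).append(model["modelId"])

for provider, ids in sorted(providers.items()):
    print(f"{provider}: {', '.join(ids)}")
```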

    In addition to Bedrock, Amazon's AI efforts in the cloud are also reflected in its AWS Trainium chips, dedicated to AI model training, and AWS Inferentia chips, which accelerate inference for generative AI applications.