Microsoft Azure expands availability of OpenAI GPT-4 and GPT-35-Turbo models to more regions

Devesh Beri


Microsoft has expanded the availability of OpenAI’s GPT-4 and GPT-35-Turbo models on Azure. The Azure OpenAI Service now covers additional regions, including Australia East, Canada East, East US 2, Japan East, and UK South.

The recently launched ND H100 v5 virtual machine series, featuring NVIDIA H100 Tensor Core GPUs and NVIDIA Quantum-2 InfiniBand networking, caters to complex AI workloads. It appears to be Microsoft’s response to the increasing demands of AI applications.

The ND H100 v5 VMs include the following features:

  • Eight NVIDIA H100 Tensor Core GPUs per VM for faster AI performance.
  • 4th Gen Intel Xeon Scalable processors.
  • NVIDIA Quantum-2 ConnectX-7 InfiniBand networking at 400 Gb/s per GPU.
  • PCIe Gen5 with 64 GB/s of bandwidth per GPU for host-to-GPU transfers.
  • DDR5 memory for large datasets and high data-transfer speeds.
  • Up to 6x speedup in matrix operations using 8-bit FP8 floating point.
  • Up to 2x speedup on large language models.
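As a quick sanity check on the networking figures above, the per-VM interconnect bandwidth follows directly from the spec list: eight GPUs at 400 Gb/s each works out to 3.2 Tb/s of cross-node InfiniBand bandwidth per VM.

```python
# Aggregate InfiniBand bandwidth per ND H100 v5 VM,
# derived from the published per-GPU figures above.
GPUS_PER_VM = 8
IB_GBPS_PER_GPU = 400  # NVIDIA Quantum-2 ConnectX-7, per GPU

total_gbps = GPUS_PER_VM * IB_GBPS_PER_GPU
print(f"{total_gbps} Gb/s = {total_gbps / 1000} Tb/s per VM")  # 3200 Gb/s = 3.2 Tb/s
```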

Azure OpenAI Service has nearly tripled its customer base since the last update. The service currently serves over 11,000 customers, adding an average of 100 new customers daily this quarter.

Businesses can use GPT-4 for tasks like content generation, document intelligence, and customer service, potentially opening doors to innovation and transformation across various sectors. 
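For teams that want to try these models from one of the newly supported regions, requests go to a deployment-specific endpoint on the Azure OpenAI REST API. The sketch below only constructs the request URL and JSON body; the resource name, deployment name, and API version are placeholders, and an actual call additionally requires a provisioned Azure OpenAI resource and an `api-key` header.

```python
import json

# Placeholder values -- substitute your own Azure OpenAI resource name,
# deployment name, and a current API version. The region (e.g. Japan East)
# is chosen when the resource is created in the Azure portal.
ENDPOINT = "https://my-resource.openai.azure.com"
DEPLOYMENT = "gpt-4"
API_VERSION = "2023-05-15"

def build_chat_request(user_message: str) -> tuple[str, str]:
    """Return the (url, json_body) for an Azure OpenAI chat-completion call."""
    url = (f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}"
           f"/chat/completions?api-version={API_VERSION}")
    body = json.dumps({
        "messages": [
            {"role": "system", "content": "You are a customer-service assistant."},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": 256,
    })
    return url, body

url, body = build_chat_request("Summarize my last three support tickets.")
print(url)
```

Note that, unlike the public OpenAI API, Azure routes requests by *deployment name* rather than by model name, so the same code can target GPT-4 or GPT-35-Turbo simply by pointing at a different deployment.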

via MSPowerUser