
Amazon Shoots for the Moon With Its AI Strategy

Welcome to this week's edition of Overclocked!

This week, we're diving into Amazon's ambitious plans with its upcoming Nova AI model and exploring OpenAI's $50 million NextGenAI initiative poised to transform science and research. Stay tuned for these stories and more in our Weekly Scoop. ⬇️​

In today’s newsletter ↓
😤 Amazon's AI giant awakens​
↪️ OpenAI's $50M NextGenAI initiative​
😮 LA Times AI tool advocates for KKK
⏱️ Run your first local model in less than 15 minutes

💫 Amazon Shoots for the Moon With Its AI Strategy

🏃 How will it compete with industry titans?

Amazon is making significant strides in the AI landscape with its forthcoming Nova reasoning model, Alexa+, and substantial investments in AI startups like Anthropic. These developments position Amazon as a formidable contender in the AI arena.​

🤖 The jump into hybrid reasoning models

Set to launch by June, the Nova reasoning model is designed to merge quick responses with deeper, step-by-step deliberation, tackling harder problems through techniques like chain-of-thought reasoning.

Amazon's priority is to make it smarter, faster, and more cost-efficient than the major players already occupying this space. This initiative is part of its broader strategy to invest in its AI models while offering diverse choices through its Bedrock AI platform.

💹 The investment in Anthropic creates a strong strategic alliance

Amazon's investment in Anthropic has grown significantly, with the current value of its stake estimated at $14 billion as of December, up from $8 billion earlier. This represents a 75% gain, or $6 billion on paper, since Amazon began investing in Anthropic in 2023. 


Anthropic, known for its Claude AI models, has seen its valuation rise significantly, currently raising funds at a $60 billion valuation. This strategic partnership not only boosts Amazon's AI portfolio but also strengthens its position in the competitive AI market. 

↪️ OpenAI's $50M NextGenAI Initiative

Empowering the next generation of AI researchers

OpenAI has launched 'NextGenAI,' a consortium of 15 leading research institutions dedicated to leveraging AI to accelerate scientific breakthroughs and transform education. With a commitment of $50 million in research grants, compute funding, and API access, OpenAI aims to support students, educators, and researchers in advancing the frontiers of knowledge.

🎓 Catalyzing collaborative research

The 'NextGenAI' initiative seeks to foster collaboration among top research institutions, enabling a collective effort to tackle complex scientific challenges. By providing resources and support, OpenAI aims to catalyze progress at a pace unattainable by individual entities, preparing the next generation to shape the future of AI.​

🔬 Transforming education and research

Through this initiative, OpenAI envisions a transformation in how research and education are conducted, utilizing AI to unlock new possibilities and drive innovation. The focus is on creating a synergistic environment where AI serves as a tool to amplify human ingenuity and discovery.

The Weekly Scoop 🍦

⏱️ Run Your Very First Local AI Model In Less Than 15 Minutes

Here's a step-by-step guide to downloading Ollama and running an AI model on your laptop. You can follow the instructions below or watch this video:

1. Download and Install Ollama:

  • Go to the Ollama website.

  • Download the installer for your operating system (Windows, macOS, or Linux).

  • Double-click the installer and follow the on-screen instructions to complete the installation.

2. Verify Ollama Installation:

  • Open a command prompt (Windows) or terminal (macOS/Linux).

  • Type ollama and press Enter. A successful installation prints Ollama's usage text; if you get a "command not found" error, troubleshoot the installation.

3. Choose and Download an AI Model:

  • Check the Ollama library (ollama.com/library) for a list of available models. Note that model sizes vary greatly (some are over 200GB!), requiring significant storage space and RAM.

  • Select a model compatible with your system's resources.

  • Important Note: You’ll need enough disk space and RAM to accommodate the chosen model. Larger models require considerably more resources.

4. Run the Chosen Model:

  • In your command prompt or terminal, type ollama run <model_identifier>, replacing <model_identifier> with the name of the model you selected (e.g., ollama run llama2).

  • Ollama will download and install the model if it's not already present. This will take some time, depending on the model's size and your internet speed.

  • Once downloaded, you can interact with the model through the command line interface that appears.

5. (Optional) Manage Multiple Models:

  • You can download and run multiple models. To list currently installed models, type ollama list.

  • To switch between models, use ollama run <model_name>.

  • To remove a model, use ollama rm <model_name>.

The video also shows how to use Ollama's HTTP API for programmatic access from languages like Python, but that's a separate advanced step. Remember to check system requirements before proceeding!
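As a taste of that programmatic access, here is a minimal Python sketch that calls Ollama's local HTTP endpoint (it listens on http://localhost:11434 by default). This assumes the Ollama server is running and the model has already been pulled; the model name below is just an example.

```python
import json
import urllib.request

# Default address of the local Ollama server (started automatically by the
# desktop app, or manually with `ollama serve`).
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks for one complete JSON response instead of a
    stream of partial chunks.
    """
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")


def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running model and return its reply."""
    request = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]
```

With the server running and a model pulled, `generate("llama2", "Say hello")` returns the model's reply as a plain string.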

That’s it for this week’s Overclocked. Do you think Amazon is poised to become the next leader in AI? And what did you think of installing your first local model? Let us know your thoughts!

Zoe from Overclocked