Google
Kirkland, Washington, United States
(on-site)
Posted
15 hours ago
Job Type
Full-Time
Software Engineer, TPU Inference, AI/ML
The insights provided are generated by AI and may contain inaccuracies. Please independently verify any critical information before relying on it.
Description
Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 2 years of experience with coding in Python or 1 year of experience with an advanced degree.
- 2 years of experience with inference.
- 2 years of experience with large language models.
- 2 years of experience with machine learning algorithms.
Preferred qualifications:
- Master's degree or PhD in Computer Science, or a related technical field.
- 2 years of experience with Kubernetes.
- 2 years of experience in GPU programming.
- 2 years of experience with compilers.
- 2 years of experience in cloud.
About the job
Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google's needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full-stack as we continue to push technology forward.
As a Software Engineer on the TPU Inference at Scale team in Core ML, you will work on everything from Large Language Model (LLM) and non-LLM model bring-up to performance tuning and optimization on Google Cloud TPUs.
The ML, Systems, & Cloud AI (MSCA) organization at Google designs, implements, and manages the hardware, software, machine learning, and systems infrastructure for all Google services (Search, YouTube, etc.) and Google Cloud. Our end users are Googlers, Cloud customers and the billions of people who use Google services around the world.
We prioritize security, efficiency, and reliability across everything we do - from developing our latest TPUs to running a global network, while driving towards shaping the future of hyperscale computing. Our global impact spans software and hardware, including Google Cloud's Vertex AI, the leading AI platform for bringing Gemini models to enterprise customers.
The US base salary range for this full-time position is $147,000-$211,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
- Research and implement LLM, recommendation, and diffusion model architectures, ensuring their efficient and accurate execution across generations of TPUs.
- Guide significant performance improvements by leveraging TPU-specific hardware features, such as sparsecore, and conducting detailed analyses to quantify performance differentials between optimized and baseline implementations on GPUs.
- Collaborate closely with key customers to deeply understand their existing recommendation model deployments and facilitate their seamless transition and optimization for execution on TPUs.
- Implement models in JAX/PyTorch, verifying model correctness and ensuring performance across heterogeneous hardware. We work on OSS projects such as vLLM, MaxDiffusion, and MaxText.
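As a flavor of the model bring-up and correctness work described above, here is a minimal, hypothetical sketch (not from the posting): a tiny model is expressed in JAX, compiled with `jax.jit` (the XLA path that also targets TPUs), and its outputs are verified against a plain NumPy reference before any performance tuning.

```python
# Hypothetical sketch of model bring-up with a correctness check.
# The model, shapes, and tolerances here are illustrative assumptions.
import numpy as np
import jax
import jax.numpy as jnp

def numpy_mlp(x, w1, b1, w2, b2):
    # Reference implementation in plain NumPy (the "ground truth").
    h = np.maximum(x @ w1 + b1, 0.0)  # ReLU hidden layer
    return h @ w2 + b2

@jax.jit
def jax_mlp(x, w1, b1, w2, b2):
    # Same model in JAX; jit compiles it via XLA, the compiler
    # stack that also targets Cloud TPUs.
    h = jnp.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

rng = np.random.default_rng(0)
x  = rng.standard_normal((4, 8)).astype(np.float32)
w1 = rng.standard_normal((8, 16)).astype(np.float32)
b1 = np.zeros(16, dtype=np.float32)
w2 = rng.standard_normal((16, 2)).astype(np.float32)
b2 = np.zeros(2, dtype=np.float32)

ref = numpy_mlp(x, w1, b1, w2, b2)
out = np.asarray(jax_mlp(x, w1, b1, w2, b2))
# Bring-up gate: compiled output must match the reference numerically.
assert np.allclose(ref, out, atol=1e-4), "correctness check failed"
```

In practice the same pattern scales up: a reference implementation pins down expected numerics, and the compiled version is validated against it on each target hardware before optimization work begins.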
Requisition #: 125604915144729286
Job ID: 83097890