Google
Sunnyvale, California, United States (on-site)
Posted 4 days ago
Job Type: Full-Time
Machine Learning Hardware Architect, Google Cloud
Description
Minimum qualifications:
- Bachelor's degree in Electrical Engineering, Computer Engineering, Computer Science, a related field, or equivalent practical experience.
- 5 years of experience in computer architecture, chip architecture, IP architecture, co-design, performance analysis, or hardware design.
- Experience in developing software systems in C or Python.
Preferred qualifications:
- Master's degree or PhD in Electrical Engineering, Computer Engineering or Computer Science, with an emphasis on Computer Architecture, or a related field.
- 8 years of experience in computer architecture, chip architecture, IP architecture, co-design, performance analysis, or hardware design.
- Experience in processor or accelerator design, and in mapping ML models to hardware architectures.
- Experience with deep learning frameworks including TensorFlow and PyTorch.
- Knowledge of the Machine Learning market, technological and business trends, software ecosystem, and emerging applications.
- Knowledge of the hardware/software stack for deep learning accelerators.
About the job
In this role, you'll work to shape the future of AI/ML hardware acceleration. You will have an opportunity to drive cutting-edge TPU (Tensor Processing Unit) technology that powers Google's most demanding AI/ML applications. You'll be part of a team that pushes boundaries, developing custom silicon solutions that power the future of Google's TPU. You'll contribute to the innovation behind products loved by millions worldwide, and leverage your design and verification expertise to verify complex digital designs, with a specific focus on TPU architecture and its integration within AI/ML-driven systems.
In this role, you will be at the forefront of advancing ML accelerator performance and efficiency, employing a comprehensive approach that spans compiler interactions, system modeling, power architecture, and host system integration. You will prototype new hardware features, such as instruction extensions and memory layouts, by leveraging existing compiler and runtime stacks, and develop transaction-level models for early performance estimation and workload simulation. A critical part of your work will be to optimize the accelerator design for maximum performance under strict power and thermal constraints; this includes evaluating novel power technologies and collaborating on thermal design. Furthermore, you will streamline host-accelerator interactions, minimize data transfer overheads, ensure seamless software integration across different operational modes like training and inference, and devise strategies to enhance overall ML hardware utilization. To achieve these goals, you will collaborate closely with specialized teams, including XLA (Accelerated Linear Algebra) compiler, Platforms performance, package, and system design to transition innovations to production and maintain a unified approach to modeling and system optimization.
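As a purely illustrative aside (not part of the posting), the sketch below gives a flavor of the early performance estimation this paragraph describes, using a simple roofline-style bound as a stand-in for the more detailed transaction-level models the role calls for. The accelerator parameters, matrix shapes, and the helper `matmul_estimate` are hypothetical assumptions, not specifications of any TPU.

```python
# Illustrative roofline-style estimate for a matrix multiply on a
# hypothetical accelerator. All hardware numbers below are assumptions.

def matmul_estimate(m, n, k, peak_flops, mem_bw_bytes, bytes_per_elem=2):
    """Rough lower-bound runtime (seconds) for an (m x k) @ (k x n) matmul."""
    flops = 2 * m * n * k                                     # multiply-accumulate count
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)    # ideal: each operand touched once
    compute_time = flops / peak_flops                         # time if compute-bound
    memory_time = bytes_moved / mem_bw_bytes                  # time if bandwidth-bound
    return max(compute_time, memory_time)                     # bound by the slower resource

# Hypothetical accelerator: 100 TFLOP/s peak compute, 1 TB/s memory bandwidth.
t = matmul_estimate(m=4096, n=4096, k=4096,
                    peak_flops=100e12, mem_bw_bytes=1e12)
print(f"Estimated runtime: {t * 1e6:.1f} us")
```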
The AI and Infrastructure team is redefining what's possible. We empower Google customers with breakthrough capabilities and insights by delivering AI and Infrastructure at unparalleled scale, efficiency, reliability and velocity. Our customers include Googlers, Google Cloud customers, and billions of Google users worldwide.
We're the driving force behind Google's groundbreaking innovations, empowering the development of our cutting-edge AI models, delivering unparalleled computing power to global services, and providing the essential platforms that enable developers to build the future. From software to hardware, our teams are shaping the future of world-leading hyperscale computing, with key teams working on the development of our TPUs, Vertex AI for Google Cloud, Google Global Networking, Data Center operations, systems research, and much more.
The US base salary range for this full-time position is $156,000-$229,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
- Create differentiated architectural innovations for Google's semiconductor Tensor Processing Unit (TPU) roadmap.
- Evaluate the power, performance, and cost of prospective architecture and subsystems.
- Collaborate with partners in Hardware Design, Software, Compiler, ML Model and Research teams for hardware/software co-design.
- Work on Machine Learning (ML) workload characterization and benchmarking.
- Develop architecture for differentiating features on next generation TPUs.
Requisition #: 95840692730766022
Job ID: 81569740
Explore Location
- Median Salary (net, per month): $8,538
- Median Apartment Rent in City Center (1-3 bedroom): $3,043 - $5,403 (median $4,223)
- Safety Index: 76/100
- Utilities, Basic (electricity, heating, cooling, water, garbage for a 915 sq ft apartment): $100 - $500 (median $255)
- High-Speed Internet: $50 - $100 (median $64)
- Transportation:
  - Gasoline (1 gallon): $4.58
  - Taxi Ride (1 mile): $3.49
Data is collected and updated regularly using reputable sources, including corporate websites and governmental reporting institutions.
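As a quick, illustrative way to combine the figures above (not part of the listing), the snippet below subtracts the median rent, utilities, and internet estimates from the reported net monthly salary; it ignores transportation and any other expenses.

```python
# Rough monthly budget from the median figures listed above (illustrative only).
net_monthly_salary = 8538   # USD, median net salary per month
median_rent = 4223          # city-center 1-3 bedroom apartment
utilities = 255             # basic utilities, 915 sq ft apartment
internet = 64               # high-speed internet

remaining = net_monthly_salary - median_rent - utilities - internet
print(f"Remaining after housing and utilities: ${remaining:,}")  # $3,996
```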
