Intel
Santa Clara, California, United States
(on-site)
Posted
1 day ago
Job Type
Full-Time
Principal Engineer - Distributed AI Systems Architecture (Heterogeneous Compute)
Description
Job Description:
We are seeking a Principal Engineer to define and architect the next generation of distributed AI systems across heterogeneous compute platforms, including CPUs, GPUs, IPUs/FNICs, and emerging dataflow accelerators.
This role focuses on one of the hardest problems in modern computing: how to dynamically execute and optimize large-scale AI computation graphs across diverse hardware while managing state, locality, and performance at system scale.
You will operate at the intersection of systems architecture, high-performance computing, and AI infrastructure, defining the execution model, runtime abstractions, and placement strategies that turn a rack of heterogeneous devices into a coherent, programmable system.
Key Responsibilities
1. Dynamic Execution of Distributed Computation Graphs
• Define a runtime model for executing AI workloads as distributed computation graphs across heterogeneous resources
• Design abstractions for graph representation, dependencies, and execution semantics
• Enable dynamic scheduling and execution across CPUs, GPUs, IPUs/FNICs, and specialized accelerators
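As a rough illustration of the runtime model the bullets above describe, the sketch below walks a computation graph in dependency order and dispatches each ready node to a device class. All node kinds, device names, and the placement table are hypothetical assumptions for illustration, not an actual Intel runtime API.

```python
from collections import deque

# Hypothetical sketch, not an actual runtime API: a minimal
# dependency-driven executor that walks a computation graph and dispatches
# each ready node to a device class chosen by operator kind.

class Node:
    def __init__(self, name, kind, deps=()):
        self.name, self.kind, self.deps = name, kind, list(deps)

def execute(nodes):
    """Execute nodes in dependency order, routing each by operator kind."""
    placement = {"gemm": "GPU", "attention": "accelerator",
                 "collective": "IPU/FNIC", "control": "CPU"}
    done, order = set(), []
    ready = deque(n for n in nodes if not n.deps)
    while ready:
        node = ready.popleft()
        order.append((node.name, placement.get(node.kind, "CPU")))
        done.add(node.name)
        # Nodes become schedulable once all of their dependencies complete.
        for n in nodes:
            if n.name not in done and n not in ready and \
                    all(d in done for d in n.deps):
                ready.append(n)
    return order

graph = [Node("embed", "control"),
         Node("qkv", "gemm", deps=["embed"]),
         Node("attn", "attention", deps=["qkv"]),
         Node("allreduce", "collective", deps=["attn"])]
print(execute(graph))
```

A production runtime would replace the linear ready-scan with per-node successor lists and asynchronous device queues; the sketch only shows the graph-representation and dynamic-dispatch ideas.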
2. Stateful Scheduling and Memory-Centric Architecture
• Architect systems where state (e.g., KV cache) is a first-class concern in scheduling and execution
• Define models for data locality, memory hierarchy, and state ownership in distributed inference solutions
• Optimize for minimal data movement and efficient access to distributed state
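The state-as-a-first-class-concern idea above can be sketched as a scheduler that records KV-cache ownership and routes follow-up requests back to the owning worker, so hot state is never moved. The `Scheduler` class, session IDs, and worker names below are illustrative assumptions, not an actual interface.

```python
# Hypothetical sketch of state-aware scheduling: route each request to the
# worker that already owns its KV cache, falling back to the least-loaded
# worker for new sessions.

class Scheduler:
    def __init__(self, workers):
        self.load = {w: 0 for w in workers}
        self.kv_owner = {}          # session_id -> worker holding its KV cache

    def place(self, session_id):
        # Prefer locality: reuse the worker that already holds this
        # session's state instead of moving the KV cache.
        if session_id in self.kv_owner:
            w = self.kv_owner[session_id]
        else:
            w = min(self.load, key=self.load.get)   # least-loaded fallback
            self.kv_owner[session_id] = w           # record state ownership
        self.load[w] += 1
        return w

sched = Scheduler(["gpu0", "gpu1"])
first = sched.place("sess-A")       # new session: least-loaded worker
again = sched.place("sess-A")       # follow-up: sticks to the KV-cache owner
assert first == again
```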
3. Graph Introspection and Automated Partitioning
• Develop mechanisms to analyze AI computation graphs and classify stages by:
o compute intensity
o memory bandwidth requirements
o communication cost
o latency sensitivity
• Drive automated or semi-automated partitioning of workloads across heterogeneous compute
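One way to picture the introspection-then-partition pipeline above is to score each graph stage by a rough arithmetic-intensity estimate (FLOPs per byte moved) and map each class to a device pool. The thresholds, stage records, and pool names below are illustrative assumptions, not a real partitioner.

```python
# Hypothetical sketch of graph introspection: classify each stage by a
# rough arithmetic-intensity estimate and map classes to device pools.

def classify(stage):
    """Bucket a stage by FLOPs per byte moved (thresholds illustrative)."""
    intensity = stage["flops"] / max(stage["bytes"], 1)
    if intensity > 50:
        return "compute-bound"       # e.g. dense GEMM
    if intensity > 5:
        return "balanced"
    return "bandwidth-bound"         # e.g. KV-cache-heavy attention

def partition(stages):
    """Assign each stage to the device pool suited to its class."""
    pool = {"compute-bound": "dataflow-accel",
            "balanced": "gpu",
            "bandwidth-bound": "hbm-gpu"}
    return {s["name"]: pool[classify(s)] for s in stages}

stages = [{"name": "mlp",  "flops": 4e12, "bytes": 2e10},   # intensity 200
          {"name": "attn", "flops": 1e11, "bytes": 5e10}]   # intensity 2
print(partition(stages))
```

A fuller version would also score communication cost and latency sensitivity, as the bullets list, and solve placement jointly rather than per stage.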
4. Integration of Specialized Accelerators
• Architect frameworks that treat specialized accelerators (e.g., dataflow engines) as first-class execution targets
• Define execution boundaries, data exchange models, and integration strategies across device classes
• Enable interoperability across diverse compute paradigms without sacrificing performance
5. MoE-Aware Execution and Adaptive Placement
• Design runtime strategies for Mixture-of-Experts (MoE) models, including:
o expert placement
o routing locality
o load balancing vs data movement trade-offs
• Enhance existing frameworks for MoE and optimize the communication path with IPUs/FNICs and the compute path with Intel accelerators
• Enable adaptive execution based on real-time system signals (latency, utilization, skew)
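The load-balancing-vs-data-movement trade-off above can be sketched as a router that picks, per token, the expert replica minimizing a weighted sum of queue depth and transfer cost. The cost model, weights, and device names are illustrative assumptions, not an actual MoE framework.

```python
# Hypothetical sketch of MoE routing: each expert has replicas on several
# devices; a token goes to the replica minimizing a weighted sum of
# current load and cross-device data movement.

def route(token_device, expert_replicas, load, alpha=1.0, beta=2.0):
    """Pick the replica device minimizing alpha*load + beta*movement."""
    def cost(dev):
        movement = 0.0 if dev == token_device else 1.0  # cross-device hop
        return alpha * load[dev] + beta * movement
    return min(expert_replicas, key=cost)

load = {"gpu0": 5, "gpu1": 0}
# Local replica wins despite higher load (movement penalty dominates)...
assert route("gpu0", ["gpu0", "gpu1"], load, alpha=0.1) == "gpu0"
# ...but under heavy skew, shedding to the remote replica is cheaper.
load["gpu0"] = 50
assert route("gpu0", ["gpu0", "gpu1"], load, alpha=0.1) == "gpu1"
```

Tuning `alpha` against `beta` is exactly the load-balancing vs data-movement trade-off named in the bullets; real-time signals (latency, utilization, skew) would set them adaptively.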
6. Adaptive Runtime and Feedback-Driven Optimization
• Define observability and telemetry models for distributed AI execution
• Build feedback loops that continuously optimize placement, scheduling, and resource utilization
• Drive system-level performance across latency, throughput, and efficiency metrics
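A minimal sketch of such a feedback loop, assuming p99 latency as the telemetry signal and a fixed rebalance step (both assumptions; a real runtime would use richer signals and policy):

```python
# Hypothetical sketch of feedback-driven optimization: sample per-device
# telemetry and shift a share of routing weight away from the slowest
# device whenever latency skew crosses a threshold.

def rebalance(weights, p99_latency_ms, skew_threshold=1.5, step=0.05):
    """Shift routing weight from the slowest device to the fastest."""
    fastest = min(p99_latency_ms, key=p99_latency_ms.get)
    slowest = max(p99_latency_ms, key=p99_latency_ms.get)
    if p99_latency_ms[slowest] / p99_latency_ms[fastest] > skew_threshold:
        delta = min(step, weights[slowest])
        weights[slowest] -= delta
        weights[fastest] += delta
    return weights

w = {"gpu0": 0.5, "gpu1": 0.5}
w = rebalance(w, {"gpu0": 40.0, "gpu1": 90.0})   # skew 2.25 > 1.5: shift
assert abs(w["gpu0"] - 0.55) < 1e-9 and abs(w["gpu1"] - 0.45) < 1e-9
```

Run in a loop against live telemetry, this is the continuous placement/scheduling optimization the bullets describe, with the threshold damping oscillation.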
Qualifications:
Minimum Qualifications:
• Bachelor's degree in Computer Science, Software Engineering, or a related specialized field, or equivalent experience per business needs.
• 12-plus years of experience with a Bachelor's degree
• Proven expertise in defining and implementing software architectures for AI frameworks, protocols, and algorithms.
• Deep experience in systems architecture, high-performance computing, or distributed systems
• Strong background in parallel or data-parallel computation models
• Experience with heterogeneous compute environments (CPU, GPU, DSP, or accelerators)
• Proven ability to design end-to-end systems from abstraction through implementation
• Strong understanding of performance trade-offs across compute, memory, and interconnect
Preferred Qualifications:
• 8-plus years of experience with a Master's degree, or 6-plus years of experience with a PhD.
• Experience with AI/ML systems, inference infrastructure, or large-scale model serving
• Familiarity with stream processing, dataflow models, or graph execution systems
• Knowledge of modern AI frameworks or runtimes
• Experience building developer-facing SDKs or programming models
• Background in performance optimization and benchmarking
Requirements listed would be obtained through a combination of industry-relevant job experience, internship experience, and/or schoolwork/classes/research.
In this role, you will:
• Operate as a technical leader and architect, not just an implementer
• Drive cross-team alignment across hardware, software, and infrastructure
• Influence long-term system design and platform direction
• Mentor engineers and shape architectural thinking across the organization
Job Type:
Experienced Hire
Shift:
Shift 1 (United States of America)
Primary Location:
US, California, Santa Clara
Additional Locations:
US, Oregon, Hillsboro, US, Texas, Austin
Business group:
At the Data Center Group (DCG), we're committed to delivering exceptional products and delighting our customers. We offer both broad-market Xeon-based solutions and custom x86-based products, ensuring tailored innovation for diverse needs across general-purpose compute, web services, HPC, and AI-accelerated systems. Our charter encompasses defining business strategy and roadmaps, product management, developing ecosystems and business opportunities, delivering strong financial performance, and reinvigorating x86 leadership. Join us as we transform the data center segment through workload driven leadership products and close collaboration with our partners.
Posting Statement:
All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.
Position of Trust
This role is a Position of Trust. Should you accept this position, you must consent to and pass an extended Background Investigation, which includes (subject to country law), extended education, SEC sanctions, and additional criminal and civil checks. For internals, this investigation may or may not be completed prior to starting the position. For additional questions, please contact your Recruiter.
Benefits
We offer a total compensation package that ranks among the best in the industry. It consists of competitive pay, stock bonuses, and benefit programs which include health, retirement, and vacation. Find out more about the benefits of working at Intel.
Annual Salary Range for jobs which could be performed in the US: $255,850.00 - $361,200.00 USD
The range displayed on this job posting reflects the minimum and maximum target compensation for the position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific compensation range for your preferred location during the hiring process.
Work Model for this Role
This role will require an on-site presence. Job posting details (such as work model, location, or time type) are subject to change.
ADDITIONAL INFORMATION: Intel is committed to Responsible Business Alliance (RBA) compliance and ethical hiring practices. We do not charge any fees during our hiring process. Candidates should never be required to pay recruitment fees, medical examination fees, or any other charges as a condition of employment. If you are asked to pay any fees during our hiring process, please report this immediately to your recruiter.
Requisition #: JR0283339
Job ID: 83338136

Intel
United States
Managing your career and your personal life can be challenging. Intel is committed to making it easier. We want to help our employees make the most of both worlds. Whether you are a parent or have education goals, eldercare responsibilities, or just some of life's details to attend to, we have a variety of programs in place around the world to help. To address the diverse needs of our employees, we offer a range of options that varies across businesses, geographies, sites, and job types.
Cost of Living (Santa Clara, California)
• Median Net Salary per month: $7,990
• Cost of Living Index: 80/100
• Median Apartment Rent in City Center (1-3 bedroom): $2,950 - $4,420 (median $3,685)
• Safety Index: 64/100
• Basic Utilities (electricity, heating, cooling, water, garbage for a 915 sq ft apartment): $155 - $281 (median $218)
• High-Speed Internet: $50 - $110 (median $75)
• Gasoline (1 gallon): $4.73
• Taxi Ride (1 mile): $4.83
Data is collected and updated regularly using reputable sources, including corporate websites and governmental reporting institutions.
