General Information

Req #
WD00077661
Career area:
Sales Support
Country/Region:
Malaysia
State:
Wilayah Persekutuan Kuala Lumpur
City:
Kuala Lumpur
Date:
Monday, March 10, 2025
Working time:
Full-time
Additional Locations
* Malaysia

Why Work at Lenovo

We are Lenovo. We do what we say. We own what we do. We WOW our customers. 

Lenovo is a US$57 billion revenue global technology powerhouse, ranked #248 in the Fortune Global 500, and serving millions of customers every day in 180 markets. Focused on a bold vision to deliver Smarter Technology for All, Lenovo has built on its success as the world’s largest PC company with a full-stack portfolio of AI-enabled, AI-ready, and AI-optimized devices (PCs, workstations, smartphones, tablets), infrastructure (server, storage, edge, high performance computing and software defined infrastructure), software, solutions, and services. Lenovo’s continued investment in world-changing innovation is building a more equitable, trustworthy, and smarter future for everyone, everywhere. Lenovo is listed on the Hong Kong stock exchange under Lenovo Group Limited (HKSE: 992) (ADR: LNVGY). 

This transformation, together with Lenovo’s world-changing innovation, is building a more inclusive, trustworthy, and smarter future for everyone, everywhere. To find out more, visit www.lenovo.com, and read about the latest news via our StoryHub.

Description and Requirements

Key Responsibilities:

AI Production Deployment:

  • Take AI proof-of-concept (PoC) solutions from testing to final deployment in production.
  • Configure, install, and validate AI systems using key platforms, including:
    • VMware ESXi and vSphere for server virtualization,
    • Linux (Ubuntu/RHEL) and Windows Server for operating system integration,
    • Docker and Kubernetes for containerization and orchestration of AI workloads.
  • Conduct comprehensive performance benchmarking and AI inferencing tests to validate system performance in production (see the illustrative sketch following this list).
  • Optimize deployed AI models for accuracy, performance, and scalability to ensure they meet production-level requirements and customer expectations.
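
As a rough illustration of the benchmarking work referenced above, the sketch below times repeated inference calls and reports mean latency and throughput. It assumes PyTorch is installed; the model and input batch are placeholders, not a specific production workload.

```python
# Minimal inference-benchmark sketch (assumes PyTorch; the model and input batch
# below are placeholders, not a specific production workload).
import time

import torch

def benchmark(model, batch, warmup=10, iters=50):
    """Return mean latency (ms) and throughput (samples/s) for one model/batch pair."""
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):           # warm-up so kernels and caches are primed
            model(batch)
        if batch.is_cuda:
            torch.cuda.synchronize()      # wait for queued GPU work before timing
        start = time.perf_counter()
        for _ in range(iters):
            model(batch)
        if batch.is_cuda:
            torch.cuda.synchronize()
        elapsed = time.perf_counter() - start
    mean_ms = elapsed / iters * 1000
    throughput = batch.shape[0] * iters / elapsed
    return mean_ms, throughput

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    # Placeholder model and batch; a real run would use the deployed model and representative inputs.
    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 10)
    ).to(device)
    batch = torch.randn(32, 1024, device=device)
    mean_ms, throughput = benchmark(model, batch)
    print(f"device={device}  mean latency={mean_ms:.2f} ms  throughput={throughput:.1f} samples/s")
```

In practice the placeholder model and batch would be swapped for the customer’s deployed model and representative input data before results are reported.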
 

Technical Expertise:

  • Serve as the primary technical lead for AI PoC deployments in enterprise environments, focusing on AI solutions powered by Nvidia GPUs.
  • Work hands-on with Nvidia AI Enterprise and GPU-accelerated workloads, ensuring efficient deployment and model performance using frameworks such as PyTorch and TensorFlow.
  • Lead technical optimizations aimed at resource efficiency, ensuring that models are deployed effectively within the customer’s infrastructure.
  • Ensure the readiness of customer environments to handle, maintain, and scale AI solutions post-deployment (see the readiness-check sketch following this list).
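
As a rough illustration of verifying that a customer environment is GPU-ready, the sketch below reports each visible CUDA device and runs a small sanity workload. It assumes PyTorch on the target host; the sanity workload and expectations are illustrative, not a formal standard.

```python
# Minimal GPU readiness-check sketch (assumes PyTorch on the target host; the
# sanity workload and expectations here are illustrative, not a formal standard).
import torch

def report_gpu_readiness():
    """Print basic facts about each visible CUDA device and run a tiny sanity workload."""
    if not torch.cuda.is_available():
        print("No CUDA device visible - check driver, toolkit, and container runtime.")
        return
    for idx in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(idx)
        total_gib = props.total_memory / 1024**3
        print(f"GPU {idx}: {props.name}, {total_gib:.1f} GiB, "
              f"compute capability {props.major}.{props.minor}")
    # Tiny matmul on the default device to confirm kernels actually launch.
    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x
    torch.cuda.synchronize()
    print("Sanity matmul completed:", tuple(y.shape))

if __name__ == "__main__":
    report_gpu_readiness()
```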

Project Management:

  • Assume complete ownership of AI project deployments, overseeing all phases from planning to final deployment, ensuring that timelines and deliverables are met.
  • Collaborate with stakeholders, including cross-functional teams (e.g., Lenovo AI BDMS, solution architects), customers, and internal resources to coordinate deployments and deliver results on schedule.
  • Implement risk management strategies and develop contingency plans to mitigate potential issues such as hardware failures, network bottlenecks, and software incompatibilities.
  • Maintain ongoing, transparent communication with all relevant stakeholders, providing updates on project status and addressing any issues or changes in scope.

Knowledge Transfer and Documentation:

  • Develop and deliver detailed documentation for each deployment, covering installation procedures, system configurations, and validation reports, ensuring operational teams have clear guidance on managing the deployed systems.
  • Conduct post-deployment knowledge transfer sessions to educate client teams on managing AI infrastructure, troubleshooting common issues, and optimizing AI models.
  • Provide comprehensive training sessions on the operation, management, and scaling of AI systems, ensuring that customers are fully prepared for ongoing operations post-handoff.

Qualifications:

Educational Background:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience in AI infrastructure deployment.

Experience:

  • 5+ years of experience deploying AI/ML models using Nvidia GPUs in enterprise production environments.
  • Demonstrated success in leading and managing complex AI infrastructure projects, including PoC transitions to production at scale.

Technical Expertise:

  • Extensive experience with Nvidia AI Enterprise, GPU-accelerated workloads, and AI/ML frameworks such as PyTorch and TensorFlow.
  • Proficient in deploying AI solutions across enterprise platforms, including VMware ESXi, Docker, Kubernetes, and Linux (Ubuntu/RHEL) and Windows Server environments.
  • MLOps proficiency with hands-on experience using tools such as Kubeflow, MLflow, or AWS SageMaker for managing the AI model lifecycle in production (see the tracking sketch following this list).
  • Strong understanding of virtualization and containerization technologies to ensure robust and scalable deployments.
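
As a rough illustration of the MLOps tracking referenced above, the sketch below records a deployment-validation run with MLflow. It assumes the mlflow package and its default local ./mlruns store; the experiment name, parameters, and metric values are placeholders.

```python
# Minimal MLflow tracking sketch (assumes the mlflow package and its default local
# ./mlruns store; the experiment name and values are placeholders).
import mlflow

def log_validation_run(model_name: str, accuracy: float, latency_ms: float) -> None:
    """Record one deployment-validation run so results stay auditable across environments."""
    mlflow.set_experiment("ai-deployment-validation")   # placeholder experiment name
    with mlflow.start_run(run_name=model_name):
        mlflow.log_param("model_name", model_name)
        mlflow.log_metric("accuracy", accuracy)
        mlflow.log_metric("latency_ms", latency_ms)

if __name__ == "__main__":
    # Placeholder numbers; in practice these would come from the validation step.
    log_validation_run("demo-classifier", accuracy=0.94, latency_ms=12.5)
```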

Additional Locations
* Malaysia