Basic Information

Job ID:
100015689
Job Area:
Data Management and Analytics
Country/Region:
China
State/Province:
Guangdong
City:
Shenzhen
Date:
Friday, February 21, 2025
Additional Locations
* China

Why Lenovo

Our culture, which we call "We Are Lenovo", is built on: we do what we say, we own what we do, and we wow our customers.

Lenovo is a global technology powerhouse with annual revenue of US$56.9 billion, ranked #248 on the Fortune Global 500, serving millions of customers across 180 markets worldwide. To realize its bold vision of "Smarter Technology for All", Lenovo has built on its leadership of the global PC market by expanding into full-stack computing, and now offers a complete portfolio of AI-enabled, AI-ready, and AI-optimized devices (PCs, workstations, smartphones, and tablets), infrastructure (servers, storage, edge computing, high-performance computing, and software-defined infrastructure), software, solutions, and services. This transformation, together with Lenovo's world-changing innovation, is building a more inclusive, trustworthy, and smarter future for people everywhere. Lenovo Group Limited is listed on the Hong Kong Stock Exchange (HKSE: 992) (ADR: LNVGY).

For the latest Lenovo news, visit the official website at https://www.lenovo.com and follow Lenovo's official social media accounts, including the "Lenovo Group" Weibo and WeChat accounts and the "Lenovo Recruitment" WeChat account.

Job Description and Requirements:

Job Responsibilities

1.       Be responsible for PC service parts quality analysis and build prediction and early-warning models: use historical service quality data to dynamically predict service parts quality, detect potential parts quality risks in advance, issue timely warnings, and provide strong support for service parts quality control.

a)       In-depth AI-driven Data Mining of Spare Parts Quality Data: Apply advanced AI technologies, such as machine learning algorithms, to comprehensively mine PC service parts quality data and precisely identify potential quality risk factors from massive quality datasets.

b)      Data Visualization and Report Generation: Develop interactive data visualization interfaces to display key indicators, trend changes, and abnormal situations, and regularly generate high-quality analysis reports.
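The early-warning idea in responsibility 1 can be sketched with a simple statistical control limit: flag a part when its latest failure rate rises well above its own historical baseline. Everything here is illustrative; the part numbers, failure rates, and the mean-plus-k-sigma rule are assumptions for the sketch, not the actual model.

```python
from statistics import mean, stdev

def flag_quality_risks(history, k=3.0):
    """Flag parts whose latest failure rate exceeds a simple
    control limit (mean + k * stdev of earlier periods).

    history: dict mapping part number -> list of periodic
    failure rates, oldest first. All names are hypothetical.
    """
    alerts = {}
    for part, rates in history.items():
        if len(rates) < 4:           # too little history to judge
            continue
        baseline, latest = rates[:-1], rates[-1]
        limit = mean(baseline) + k * stdev(baseline)
        if latest > limit:
            alerts[part] = {"latest": latest, "limit": round(limit, 4)}
    return alerts

# Hypothetical weekly failure rates for three service parts.
history = {
    "LCD-PANEL-01": [0.010, 0.011, 0.009, 0.010, 0.031],  # sudden spike
    "KEYBOARD-02":  [0.020, 0.021, 0.019, 0.020, 0.021],  # stable
    "BATTERY-03":   [0.015, 0.014],                       # short history
}
print(flag_quality_risks(history))
```

In a production setting this baseline rule would be replaced by the learned models the posting describes, but the input/output shape (per-part history in, warning list out) stays the same.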

2.       Be responsible for operating the PC service parts quality big data platform:

a)       Conduct data processing and quality analysis based on Java and big data frameworks (such as Spark and Flink). Write complex Java code, Spark SQL, and Hive SQL statements to clean, transform, and deeply analyze the collected data.

b)      Data Platform Maintenance and Performance Optimization: Be responsible for the daily maintenance and performance optimization of the big data platform. Use Java and related technologies to resolve problems that occur during platform operation. Continuously optimize data processing pipelines and algorithms to improve processing efficiency and analysis accuracy, ensuring the platform runs stably and efficiently to meet growing business needs.
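The clean-transform-analyze step in responsibility 2a can be sketched as a SQL query: deduplicate claims, normalize inconsistent values, and aggregate per part. SQLite stands in here for Hive/Spark SQL, and the table and column names are assumptions rather than the real data model, but the query shape carries over.

```python
import sqlite3

# Illustrative schema only: names are assumptions for the sketch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE service_claims (
        claim_id   TEXT,
        part_no    TEXT,
        defect     TEXT,        -- may be NULL or inconsistently cased
        claim_date TEXT
    );
    INSERT INTO service_claims VALUES
        ('C1', 'LCD-PANEL-01', 'dead pixel', '2025-01-03'),
        ('C1', 'LCD-PANEL-01', 'dead pixel', '2025-01-03'),  -- duplicate
        ('C2', 'LCD-PANEL-01', 'FLICKER',    '2025-01-05'),
        ('C3', 'KEYBOARD-02',  NULL,         '2025-01-06');
""")

# Clean (dedupe, lowercase, default missing defects), then
# aggregate claim counts per part.
rows = conn.execute("""
    WITH cleaned AS (
        SELECT DISTINCT claim_id, part_no,
               LOWER(COALESCE(defect, 'unknown')) AS defect
        FROM service_claims
    )
    SELECT part_no, COUNT(*) AS claims
    FROM cleaned
    GROUP BY part_no
    ORDER BY claims DESC
""").fetchall()
print(rows)   # -> [('LCD-PANEL-01', 2), ('KEYBOARD-02', 1)]
```

On the real platform the same CTE-style query would run as Spark SQL or Hive SQL over distributed storage instead of an in-memory table.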

Job Requirements

A.      Educational Background: A bachelor's degree or above, with a preference for majors related to computer science and data science.

B.       Work Experience: 3-6 years of relevant data analysis experience, with at least 2 years focused on PC parts quality management or quality data analysis for similar manufactured products.

C.      Familiarity with the entire supply chain process for PC parts: production, inspection, storage, and transportation.

D.      Skill Requirements:

a)       Have a thorough command of the Java language, be familiar with common Java development frameworks (such as Spring Boot and Spring Cloud), and possess good object-oriented programming concepts and coding habits.

b)      Be proficient in Python, master machine learning and deep-learning frameworks (such as TensorFlow and PyTorch), and be able to independently complete complex quality data mining and modeling tasks.

c)       Be proficient in big data development technologies, including the Hadoop ecosystem (HDFS, MapReduce, Yarn, Hive, etc.), big data processing frameworks such as Spark and Flink, and message queue technologies such as Kafka.

d)      Be skilled in using SQL for data query, cleaning, and analysis. Be familiar with database management systems (such as MySQL and Oracle) and have the ability to design and manage data warehouses.

e)       Be adept at using data visualization tools to create high-quality, highly interactive quality data reports and dashboards.

f)        Have basic English communication skills, be able to read English technical documents, and participate in international business exchanges.

