Description and Requirements
Job Responsibilities:
- Participate in the design, development, and maintenance of big data platforms to support massive data storage, computation, and analysis needs.
- Assist in building data warehouses and real-time/offline data processing systems, and optimize data pipeline performance.
- Contribute to ETL (Extract, Transform, Load) workflow development, supporting data cleansing, transformation, and modeling based on business requirements.
- Learn and apply big data frameworks (e.g., Hadoop, Spark, Flink) to solve business problems.
- Collaborate with algorithm engineers and product managers to enable data-driven decision-making.
Requirements:
Education: Fresh graduates with a bachelor’s or master’s degree in Computer Science or Software Engineering.
Skills:
- Proficiency in at least one programming language (Python/Scala).
- Familiarity with big data ecosystem tools (Hadoop/Spark/Hive/HBase/Kafka, etc.).
- Strong SQL skills and foundational data analysis capabilities.
- Basic understanding of distributed systems and database principles.
Preferred Qualifications:
- Experience in big data-related internships or projects.
- Knowledge of OLAP engines (ClickHouse/Doris/StarRocks, etc.).
- Familiarity with cloud platforms (AWS/Alibaba Cloud) or containerization technologies (Docker/Kubernetes).
Soft Skills:
- Strong logical thinking, self-learning ability, and passion for technology.
- Excellent communication skills and teamwork mindset.