Why Lenovo
Job Description and Requirements:
Job Description:
This person will be responsible for designing, developing, and deploying the Service Supply Chain System, which is dedicated to interacting with users and seamlessly supporting their daily work and operations, providing ‘Business Dashboards’ with ‘Insightful Analysis’ for managers and leaders.
The ideal candidate will be detail oriented and driven in system development and maintenance. The candidate should have strong communication skills and the ability to prioritize effectively to ensure timelines are met. There is always room to make things better, so this candidate should also have the ability to invent and simplify. Lastly, the candidate should be able to work effectively with cross-functional teams in a fast-paced and ever-changing environment.
The position represents an exciting opportunity to be part of a dynamic, fast-paced environment supporting a global organization, and it offers significant opportunities for rapid growth.
Job Responsibilities:
This role will involve:
1. Maintaining and improving the existing system to meet complex and rapidly changing business scenarios.
2. Creating workflow tools with a frontend user interface and backend database structure in business areas such as planning and procurement.
3. Building a big data platform to meet specific business requirements using Hadoop or Spark.
4. Reviewing business requirement documents and developing automation solutions using Java, SQL, and similar programming languages.
Job Requirements:
1. Master's degree or above in Computer Science, Computer or Software Engineering, or a related field.
2. At least 3 years of experience in Java software development; more is preferred.
3. Good communication skills and strong English language skills.
4. Strong knowledge of and skill with MySQL, SQL Server, and PostgreSQL.
5. Strong knowledge of and skill with major open-source frameworks such as Spring Boot and Spring Cloud.
6. Knowledge of and hands-on experience with the latest data technologies and frameworks such as Hadoop, MapReduce, Hive, Spark, Flink, and Kafka, as well as other big data components; an understanding of their characteristics and usage scenarios, with the ability to select and develop appropriate big data components based on project requirements.
7. Familiarity with project lifecycles and software processes (functional and non-functional requirement analysis, system design/architecture, implementation, configuration/build management, testing/integration, user acceptance testing, roll-out, and maintenance).
8. Familiarity with Python and scripting (Linux shell, Bash).
9. AI-related skills or experience are preferred.