Specialization: IT OR COMPUTER SOFTWARE
Job description:
KEY RESPONSIBILITIES
- Guide and advise during requirements gathering with our business units. Design solutions to fulfill these requirements.
- Design and build new data pipelines for our AWS data lake, selecting the most appropriate loading techniques and technologies (API, streaming, ETL).
- Perform in-depth data quality and data profiling analysis on new data sources and ensure they are fit for purpose.
- Deliver new reports and dashboards to the business. Enrich and develop the metadata layer in SAP BI to support new reporting requirements.
- Provide first-level support for SAP BI users.
- Provide training to our end-users on SAP BI technologies and foster the SAP BI communities through use cases and knowledge sharing.
- Provide first-level support for ETL loading to AWS.
- Keep a technology watch on AWS services and optimize existing data pipelines.
- Establish business cases to initiate new data projects and initiatives. Help identify and evaluate the right partners and supervise them during implementation.
- Review data engineering deliverables from partners to ensure best practices are applied and code quality is acceptable.
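As a minimal illustration of the data quality and profiling work described above, a basic column profile (null rate, distinct count, most common value) could be sketched as follows. This is a hypothetical, standard-library-only example; at data-lake scale this kind of check would typically run in Spark or AWS Glue.

```python
from collections import Counter

def profile_column(values):
    """Basic data-quality profile for one column:
    null rate, distinct count, and most common value.
    Treats None and empty strings as nulls."""
    total = len(values)
    nulls = sum(1 for v in values if v is None or v == "")
    non_null = [v for v in values if v not in (None, "")]
    top = Counter(non_null).most_common(1)
    return {
        "total": total,
        "null_rate": nulls / total if total else 0.0,
        "distinct": len(set(non_null)),
        "most_common": top[0] if top else None,
    }

# Example: profiling a small sample column
sample = ["A", "B", "A", None, "A", ""]
print(profile_column(sample))
```

A real fit-for-purpose assessment would extend this with type conformance, range checks, and referential integrity against the target model.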
QUALIFICATIONS
- Bachelor's degree in one of these disciplines: Information Technology, Data Science, or Actuarial Science. A master's degree in a relevant discipline is an advantage.
- Minimum of 8 years of hands-on experience in data integration and analytics in a corporate or consulting setting.
- Requires strong technical skills in engineering and computer science, coupled with the ability to code, design, develop, and deploy sophisticated applications using advanced unstructured and semi-structured data analysis techniques in high-performance computing environments.
- Exposure to the General Insurance industry is also expected.
- Experience with AWS Glue, Redshift, and Lambda.
- Hands-on experience in Python, Spark, and PySpark coding.
- Ability to perform data integration in both traditional and cloud environments.
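To illustrate the AWS Lambda and Python skills listed above, here is a minimal, hypothetical Lambda handler for an S3-triggered ETL step. The event shape follows the standard S3 notification format; the bucket and key names are placeholders, and a real pipeline would hand the objects off to Glue or Spark rather than just echoing them back.

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler sketch for an S3-triggered ETL step.
    Extracts the bucket and object key from each S3 event record."""
    processed = []
    for record in event.get("Records", []):
        s3 = record.get("s3", {})
        processed.append({
            "bucket": s3.get("bucket", {}).get("name"),
            "key": s3.get("object", {}).get("key"),
        })
    return {"statusCode": 200, "body": json.dumps({"processed": processed})}

# Example invocation with a minimal S3 event payload (placeholder names)
event = {"Records": [{"s3": {"bucket": {"name": "data-lake"},
                             "object": {"key": "raw/file.csv"}}}]}
print(lambda_handler(event, None))
```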
Apply Now