Responsibilities
- Design, develop, and optimize robust data pipelines using PySpark and Python to process large datasets
- Collaborate with cross-functional teams to understand complex business requirements and translate them into scalable technical solutions
- Use Palantir Foundry to build and manage analytics applications that enable strategic and operational insights
- Manage data integration workflows across distributed computing systems, building high-quality ETL/ELT processes
- Develop advanced SQL for data querying, transformation, and warehousing
- Follow Agile methodologies and participate in Scrum ceremonies to keep work aligned with broader project goals
- Document technical solutions and workflows to ensure knowledge sharing and long-term maintainability
- Troubleshoot and resolve data processing and platform issues in a fast-paced, production-grade environment
- Stay up to date with the latest advancements in cloud technologies, Big Data processing, and Machine Learning
- Participate in code reviews and promote software engineering best practices
Requirements
- Bachelor's degree in Computer Science, Data Science, or a related technical field
- 5+ years of experience in Data Integration, with a focus on large-scale distributed computing or analytics systems
- Palantir Foundry expertise: prior hands-on experience is essential for this role
- Strong proficiency in Python and PySpark for building scalable data workflows
- In-depth knowledge of and experience with SQL (preferably Spark SQL) for efficient data querying and warehousing
- Experience designing and implementing ETL/ELT processes for large datasets
- Solid understanding of Scrum and Agile development principles
- Strong analytical and problem-solving skills, with a strategic mindset for tackling complex challenges
- Highly self-driven and capable of managing workload independently while delivering on commitments
- A collaborative mindset and clear, effective communication skills, including experience working in global, multicultural settings
- Eagerness to learn and stay current with emerging technologies and best practices in data engineering and analytics
Nice to have
- Familiarity with the insurance or financial industries, or with finance-related data workflows
- Knowledge of front-end technologies such as HTML, CSS, and JavaScript, and build tools such as Gradle
- Experience with Microsoft Power BI for building data dashboards and reports
- Hands-on experience with Machine Learning or implementing Generative AI models
- Understanding of statistical models and their applications in data pipelines
- Exposure to Azure, AWS, or GCP cloud platforms
We offer/Benefits
We connect like-minded people:
- Delivering innovative solutions to industry leaders, making a global impact
- Enjoyable working environment, whether it is the vibrant office or the comfort of your own home
- Opportunity to work abroad for up to two months per year
- Relocation opportunities within our offices in 55+ countries
- Corporate and social events
We invest in your growth:
- Leadership development, career advising, soft skills and well-being programs
- Certifications, including GCP, Azure and AWS
- Unlimited access to LinkedIn Learning and Udemy
- Free English classes with certified teachers
- Discounts at local language schools, including offline courses for the Uzbek language
We cover it all:
- Monetary bonuses for engaging in the referral program
- Medical & family care package
- Four trust days per year (sick leave without a medical certificate)
- Discounts for fitness clubs, dance schools and sports programs
- Benefits package (sports activities, a variety of stores and services)
Key skills
- SQL
- Azure
- Big Data
- ETL
- GCP (Google Cloud Platform)
- AWS
- Agile
- English: C1 (Advanced)