Data Engineer – ETL & Pipeline Development (5 to 10 Yrs) | AIMLEAP




Job description

Data Engineer – ETL & Pipeline Development

Experience: 5 to 10 Years

Location: Remote (Work from Home) / Bangalore / India

Mode of Engagement: Full-time

No. of Positions: 5

Educational Qualification: B.E / B.Tech / M.Tech / MCA / Computer Science / IT

Industry: IT / Data / AI / LegalTech / Enterprise Solutions

Notice Period: Immediate Joiners Preferred


What We Are Looking For:

  • 5–10 years of hands-on experience in designing, developing, and deploying end-to-end data pipelines and ETL workflows — not just using prebuilt tools, but building them from scratch using Python and SQL (a minimal sketch follows this list).
  • Strong command of Python programming for data transformation, orchestration, and automation (e.g., using Pandas, Airflow, or custom schedulers).
  • Solid experience in writing complex SQL queries, optimizing database performance, and designing schemas for large-scale systems.
  • Proven experience integrating RESTful APIs for data ingestion, transformation, and delivery pipelines.
  • Working knowledge of AWS / GCP / Azure for data storage, processing, and deployment (S3, EC2, Lambda, BigQuery, etc.).
  • Practical exposure to Docker, Kubernetes, and CI/CD pipelines for automating and deploying ETL and data workloads.
  • Familiarity with AI-driven data pipelines or automation workflows is an added advantage.
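
For context on the first bullet above, here is a minimal sketch of a from-scratch ETL step written in Python and SQL. It is illustrative only: the CSV source, SQLite target, table name, and the "id" column are hypothetical placeholders chosen to keep the example self-contained, not details of AIMLEAP's actual stack.

import sqlite3

import pandas as pd


def extract(csv_path: str) -> pd.DataFrame:
    # Extract: read raw records from a flat file (stand-in for any upstream source).
    return pd.read_csv(csv_path)


def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Transform: drop duplicates, normalise column names, and discard rows
    # that are missing the (assumed) primary key column "id".
    df = df.drop_duplicates()
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    return df.dropna(subset=["id"])


def load(df: pd.DataFrame, db_path: str, table: str) -> None:
    # Load: write the cleaned frame into a SQL table (SQLite used only for brevity).
    with sqlite3.connect(db_path) as conn:
        df.to_sql(table, conn, if_exists="replace", index=False)


if __name__ == "__main__":
    frame = transform(extract("raw_records.csv"))   # hypothetical input file
    load(frame, "warehouse.db", "records")          # hypothetical target table

Pandas plus plain SQL is only one way to frame it; the point of the bullet is owning each extract, transform, and load step rather than configuring a prebuilt connector.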


Responsibilities:

  • Design, architect, and build ETL pipelines from the ground up to extract, clean, transform, and load data across multiple systems.
  • Develop and deploy custom Python-based data frameworks to automate workflows and improve data quality.
  • Build and maintain high-performance SQL queries and database structures to support analytics and AI teams.
  • Develop and integrate API-based data ingestion systems (internal and external).
  • Deploy and manage workloads using Docker, Kubernetes, and CI/CD tools, ensuring high availability and version control.
  • Work closely with product, AI, and analytics teams to deliver intelligent, automated data solutions.
  • Implement data validation, monitoring, and alerting mechanisms for production pipelines (see the sketch after this list).
  • Continuously optimize pipeline performance, cost efficiency, and scalability in cloud environments.
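
As a rough illustration of the API ingestion, validation, and alerting responsibilities above, the sketch below pulls records from a REST endpoint and logs a warning when too many records fail validation. The endpoint URL, required fields, and the 5% threshold are assumptions made for the example, not requirements from the posting; a real pipeline would use the team's actual APIs and monitoring stack.

import logging

import requests

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

API_URL = "https://example.com/api/v1/records"  # hypothetical endpoint


def ingest() -> list[dict]:
    # Pull one page of records from a REST endpoint; fail fast on HTTP errors.
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()
    return resp.json()


def validate(records: list[dict]) -> list[dict]:
    # Keep only records carrying the required fields and log the rejection
    # rate so production issues surface early (a real pipeline would alert
    # rather than just log).
    valid = [r for r in records if r.get("id") and r.get("timestamp")]
    rejected = len(records) - len(valid)
    if records and rejected / len(records) > 0.05:
        log.warning("High rejection rate: %d of %d records dropped",
                    rejected, len(records))
    return valid


if __name__ == "__main__":
    clean = validate(ingest())
    log.info("Ingested %d valid records", len(clean))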


Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
  • Proficiency in Python (FastAPI, Flask, or Django), SQL, and REST API design.
  • Strong understanding of ETL principles, data modeling, and pipeline orchestration (Airflow / Prefect / Dagster); a minimal orchestration sketch follows this list.
  • Experience working with AWS (S3, Lambda, EC2, Glue, Athena) or equivalent GCP/Azure components.
  • Hands-on exposure to Docker, Kubernetes, and Git-based CI/CD workflows.
  • Excellent problem-solving, debugging, and analytical skills with an ownership mindset.
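
As a sketch of the orchestration experience listed above, here is a minimal DAG using the Airflow TaskFlow API (Airflow 2.4+ assumed; Prefect or Dagster would serve the same purpose). The DAG id, schedule, and task bodies are placeholders, not a description of the actual pipelines.

from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_etl():
    @task
    def extract() -> list[dict]:
        # Placeholder extract step; a real task would call an API or query a source DB.
        return [{"id": 1, "value": 42}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Placeholder transform step: drop rows with missing values.
        return [r for r in rows if r["value"] is not None]

    @task
    def load(rows: list[dict]) -> None:
        # Placeholder load step; a real task would write to the warehouse.
        print(f"loaded {len(rows)} rows")

    load(transform(extract()))


example_etl()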


Required Skill Profession: Computer Occupations


