Python Developer / Data Engineer (internship)

  • Internship
  • Dehradun
  • Applications have closed


*🚀 Job Title:* Python Developer / Data Engineer (internship)
📍 *Location:* On-Site – IT Park, Dehradun, India
💰 *Stipend:* 18k to 25k

*About TriDevSofts*
TriDevSofts is a trusted IT consulting firm delivering world-class data engineering and analytics solutions to global leaders across the Middle East, Europe, and the US. We’re passionate about helping businesses make smarter, data-driven decisions through cutting-edge technology and intelligent design.

At the heart of our mission is innovation, collaboration, and a commitment to excellence. Join a team of skilled professionals dedicated to building scalable, secure, and high-performing data pipelines that drive real impact.

*Role Overview*

We are seeking a *Python Developer / Data Engineer* who thrives on problem-solving and building efficient data systems. This role is ideal for someone who is passionate about Python, data infrastructure, and creating clean, scalable ETL pipelines to fuel advanced analytics.

*Key Responsibilities*

*🔧 Python Development*
– Develop and maintain robust, efficient Python applications.
– Leverage frameworks like Flask, Django, or FastAPI for API and backend services.
– Debug, optimize, and refactor existing Python codebases.
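To give candidates a feel for the "optimize and refactor" work above, here is a small, self-contained sketch (illustrative only, not from TriDevSofts' codebase): a naive recursive function refactored with memoization from the standard library.

```python
from functools import lru_cache

# Hypothetical example of a refactoring task: the naive version is
# exponential-time; the memoized version computes each value once.

def fib_naive(n: int) -> int:
    """Slow reference implementation (exponential time)."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n: int) -> int:
    """Same logic after refactoring: memoized, effectively linear time."""
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

# The refactor must preserve behavior:
assert fib_naive(10) == fib_fast(10) == 55
```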

*💾 Database & Data Management*
– Write complex SQL and PL/SQL queries for data retrieval and manipulation.
– Work with NoSQL databases like MongoDB or Cassandra for unstructured data.
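As a minimal sketch of the kind of SQL work this role involves (table and column names are illustrative only), using Python's bundled sqlite3 module as a stand-in for a real database:

```python
import sqlite3

# In-memory database standing in for a production warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "EU", 120.0), (2, "US", 80.0), (3, "EU", 200.0)],
)

# Aggregate revenue per region, largest first.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM orders "
    "GROUP BY region ORDER BY total DESC"
).fetchall()
print(rows)  # [('EU', 320.0), ('US', 80.0)]
```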

*☁️ Cloud & ETL Pipelines*
– Design and build cloud-native ETL workflows using AWS, Azure, or GCP.
– Use Spark (PySpark) for processing large-scale datasets.
– Ensure data quality, security, and reliability across data pipelines.
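The extract-transform-load pattern behind these responsibilities can be sketched in a few lines. This is a toy plain-Python illustration with made-up field names; a production pipeline would run the transform step on Spark/PySpark against a cloud data store, as the listing notes.

```python
# Toy ETL pipeline: each stage is a stand-in for a real system.

def extract() -> list[dict]:
    """Stand-in for reading from a source (API, bucket, database)."""
    return [
        {"user": "a", "clicks": "3"},
        {"user": "b", "clicks": None},  # dirty record
        {"user": "c", "clicks": "7"},
    ]

def transform(records: list[dict]) -> list[dict]:
    """Enforce data quality: drop null values, cast types."""
    return [
        {"user": r["user"], "clicks": int(r["clicks"])}
        for r in records
        if r["clicks"] is not None
    ]

def load(records: list[dict]) -> int:
    """Stand-in for writing to a warehouse; returns rows written."""
    return len(records)

rows_written = load(transform(extract()))
assert rows_written == 2  # the dirty record was filtered out
```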

*🔗 APIs & Integrations*
– Develop RESTful and GraphQL APIs to integrate with diverse data systems.
– Collaborate with frontend/backend teams for seamless data delivery.

*🚀 CI/CD & DevOps*
– Work with Git, Jenkins, and CI/CD pipelines for code management and deployments.
– Adopt best practices in version control and automated testing.

*🤝 Team Collaboration*
– Partner with cross-functional teams to understand data needs and build solutions.
– Identify bottlenecks and suggest improvements in existing workflows.

*Required Skills & Experience*
– Strong Python skills with experience in Flask, Django, or FastAPI.
– Advanced SQL & PL/SQL knowledge.
– Experience with NoSQL databases like MongoDB or Cassandra.
– Solid understanding of cloud services (AWS, Azure, or GCP).
– Hands-on experience building ETL pipelines and using Spark (PySpark).
– Working knowledge of REST and GraphQL APIs.
– Familiarity with Git, Jenkins, and basic CI/CD processes.

*Preferred Qualifications*
– Bachelor’s or Master’s degree in Computer Science, IT, or a related field.
– Certifications such as AWS Data Analytics or Azure Data Engineer are a plus.
– Strong problem-solving abilities and a team-first mindset.
– Excellent communication skills.

*Why Work With Us?*
– Collaborate with global clients and solve real-world data challenges.
– Be part of a growth-focused, tech-driven environment.
– Competitive compensation and benefits.
– Opportunities for continuous learning and career progression.

*📩 How to Apply*
Interested students are invited to apply through our job portal: https://jobs.skillcircle.in/
*Best regards,*
*Amisha Khanna,*
*Skillcircle*