Job Title | Location | Description | Last Seen & URL |
---|---|---|---|
Principal, Data Engineering (Remote)
Jazz Pharmaceuticals |
Philadelphia, PA
|
If you are a current Jazz employee, please apply via the Internal Career site. Jazz Pharmaceuticals is a global biopharma company whose purpose is to innovate to transform the lives of patients and their families. We are dedicated to developing life-changing medicines for people with serious diseases, often with limited or no therapeutic options. We have a diverse portfolio of marketed medicines, including leading therapies for sleep disorders and epilepsy, and a growing portfolio of cancer treatments. Our patient-focused and science-driven approach powers pioneering research and development advancements across our robust pipeline of innovative therapeutics in oncology and neuroscience. Jazz is headquartered in Dublin, Ireland, with research and development laboratories, manufacturing facilities, and employees in multiple countries, committed to serving patients worldwide. Please visit www.jazzpharmaceuticals.com for more information. The Principal will be responsible for supporting complex projects, or leading singular projects, related to data engineering requirements and initiatives across Jazz Research and Development. The Principal will support data projects from across the business, including Clinical, Pre-Clinical, Non-Clinical, Chemistry, RWD, and Omics. Essential Functions: Support the design, development, and maintenance of data pipelines for processing Research and Development data from diverse sources (clinical trials, medical devices, pre-clinical, omics, real-world data) utilizing the AWS technology platform. Create and optimize ETL/ELT processes for structured and unstructured data using Python, R, SQL, AWS services, and other tools. Build and maintain data repositories using AWS S3 and FSx technologies. Establish data warehousing solutions using Amazon Redshift. Build and maintain standard data models. Develop data quality frameworks, validation processes, and KPIs to ensure accuracy and consistency of data pipelines.
Implement data versioning and lineage tracking to support data traceability, regulatory compliance, and audit requirements. Create and maintain documentation for data processes, architectures, and workflows. Implement modern software development best practices (e.g., code versioning, DevOps, CI/CD). Support collaboration with R&D researchers, data scientists, and stakeholders to understand data requirements and deliver appropriate solutions in a global working model. Maintain compliance with data privacy regulations such as HIPAA and GDPR. May be required to develop, deliver, or support data literacy training across R&D. Required Knowledge, Skills, and Abilities: Strong knowledge of data engineering tools such as Python, R, and SQL for data processing. Strong proficiency with AWS services, particularly S3, Redshift, FSx, Glue, and Lambda. Strong proficiency with relational databases. Strong background in data modeling and database design. Familiarity with unstructured database technologies (e.g., NoSQL) and other database types (e.g., graph). Familiarity with containerization such as Docker and EKS/Kubernetes. Familiarity with one or more R&D research processes and associated regulatory requirements. Exposure to healthcare data standards (CDISC, HL7 FHIR, SNOMED CT, OMOP, DICOM). Exposure to big data technologies and handling. Knowledge of machine learning operations (MLOps) and model deployment. Strong problem-solving and analytical abilities. Excellent communication skills for collaborating with stakeholders. Experience working in an Agile development environment.
Required/Preferred Education: Bachelor’s Degree in Computer Science, Statistics, Mathematics, Life Sciences, or other relevant scientific fields; Master’s Degree preferred. 3-5 years of experience in data engineering, with at least 1.5 years focusing on healthcare, research, or clinical-related data. Description of Physical Demands: Occasional mobility within an office environment. Routinely sitting for extended periods of time. Constantly operating a computer, printer, telephone, and other similar office machinery. Jazz Pharmaceuticals is an equal opportunity/affirmative action employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any characteristic protected by law. FOR US-BASED CANDIDATES ONLY: Jazz Pharmaceuticals, Inc. is committed to fair and equitable compensation practices, and we strive to provide employees with total compensation packages that are market competitive. For this role, the full and complete base pay range is $132,000.00 - $198,000.00. Individual compensation paid within this range will depend on many factors, including qualifications, skills, relevant experience, job knowledge, and other pertinent factors. The goal is to ensure fair and competitive compensation aligned with the candidate's expertise and contributions within the established pay framework and our Total Compensation philosophy. Internal equity considerations will also influence individual base pay decisions. This range will be reviewed on a regular basis. At Jazz, your base pay is only one part of your total compensation package. The successful candidate may also be eligible for a discretionary annual cash bonus or incentive compensation (depending on the role), in accordance with the terms of the Company's Global Cash Bonus Plan or Incentive Compensation Plan, as well as discretionary equity grants in accordance with Jazz's Long Term Equity Incentive Plan.
The successful candidate will also be eligible to participate in various benefits offerings including but not limited to medical dental and vision insurance 401k retirement savings plan and flexible paid vacation. For more information on our Benefits offerings please click here: https://careers.jazzpharma.com/benefits.html.
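The data versioning and lineage tracking called for above can be illustrated with a minimal sketch. This is a hypothetical illustration, not Jazz's actual tooling: it fingerprints each dataset version with a content hash and records which upstream versions produced it, which is the core idea behind lineage tracking for audit trails.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class DatasetVersion:
    """One immutable version of a dataset, identified by a content hash."""
    name: str
    content: bytes
    parents: list = field(default_factory=list)  # version IDs of upstream datasets

    @property
    def version_id(self) -> str:
        # Content-addressed ID: identical data always yields the same version
        return hashlib.sha256(self.content).hexdigest()[:12]

class LineageGraph:
    """Tracks which dataset versions were derived from which."""
    def __init__(self):
        self._nodes = {}

    def record(self, ds: DatasetVersion) -> str:
        self._nodes[ds.version_id] = ds
        return ds.version_id

    def ancestors(self, version_id: str) -> list:
        """All upstream version IDs, for audit/traceability queries."""
        seen, stack = [], list(self._nodes[version_id].parents)
        while stack:
            vid = stack.pop()
            if vid not in seen:
                seen.append(vid)
                stack.extend(self._nodes[vid].parents)
        return seen

graph = LineageGraph()
raw = DatasetVersion("trial_raw", b"subject,visit\n001,1\n")
raw_id = graph.record(raw)
clean = DatasetVersion("trial_clean", b"subject,visit\n1,1\n", parents=[raw_id])
clean_id = graph.record(clean)
print(graph.ancestors(clean_id) == [raw_id])  # prints True
```

A production system would persist the graph and attach run metadata (pipeline version, timestamp, operator), but the content-addressed ID plus parent links is the part that makes audits reproducible.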
|
2025-06-13 14:02
|
Sr AI Data Engineer (Remote)
Cognisol |
Pittsburgh, PA (Remote)
|
Title: Sr AI Data Engineer. Location: Pittsburgh, PA (Remote). Duration: 6+ months. 1099 rate: $57/hr; W2 rate: $47/hr. Job Description: We are seeking a highly skilled AI Data Engineer to join our team for Project Acuity. The ideal candidate will have a strong background in AI and data engineering, with the ability to provide solution architecture for AI use cases. This is a contract position with strict timelines, and we are looking for someone who can join the project immediately. Key Responsibilities: Design and implement solution architecture for AI use cases in Project Acuity. Develop algorithms to calculate scores based on previous clinical trial performance history and engagement. Categorize medical diagnoses into body systems, medications, treatments, diagnoses, and high-level therapeutic areas. Collaborate with cross-functional teams to ensure seamless integration of AI solutions. Ensure data integrity, security, and compliance with relevant regulations. Optimize data processing workflows for efficiency and scalability. Provide technical leadership and mentorship to junior team members. Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, AI, or a related field. Minimum of 5 years of experience as an AI Data Engineer or similar role. Strong knowledge of AI, machine learning, and data engineering principles. Proficiency in programming languages such as Python, R, or Java. Experience with cloud platforms (AWS, Azure, GCP) and big data technologies (Hadoop, Spark). Familiarity with healthcare data standards and regulations (e.g., HIPAA). Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills. Preferred Qualifications: Experience in the healthcare or clinical trials industry. Knowledge of natural language processing (NLP) techniques. Certification in AI or data engineering. Skills: R, Spark, Java, AI data engineering, Azure, Hadoop, NLP, solution architecture, Python, data engineering, GCP, AI, machine learning, AWS
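The scoring responsibility above (scores from prior clinical trial performance history and engagement) could look something like the following sketch. The metric names and weights are hypothetical, invented purely for illustration; the posting does not specify the actual algorithm.

```python
def site_score(history: dict, weights: dict = None) -> float:
    """Blend normalized performance metrics into a single 0-100 score.

    `history` holds hypothetical per-site metrics, each already scaled to 0-1:
      enrollment_rate - fraction of enrollment target met in past trials
      retention_rate  - fraction of enrolled subjects retained
      engagement      - composite of response times, query resolution, etc.
    """
    weights = weights or {"enrollment_rate": 0.4, "retention_rate": 0.35, "engagement": 0.25}
    total = sum(weights.values())
    # Weighted average of whatever metrics are present; missing metrics score 0.
    blended = sum(history.get(k, 0.0) * w for k, w in weights.items()) / total
    return round(100 * blended, 1)

# A site that hit 90% of enrollment targets, kept 80% of subjects,
# and has moderate engagement:
print(site_score({"enrollment_rate": 0.9, "retention_rate": 0.8, "engagement": 0.6}))  # → 79.0
```

In practice the weights would be tuned against observed trial outcomes rather than fixed by hand.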
|
2025-06-13 01:28
|
Data Engineer
yuno |
Bogota
|
Remote, LATAM, Full Time, Individual Contributor, 3+ years of experience.
Who We Are: At Yuno, we are building the payment infrastructure that enables all companies to participate in the global market. Founded by a team of seasoned experts in the payments and IT industries, Yuno provides a high-performance payment orchestrator. Our technology offers companies access to leading payment capabilities, allowing them to engage customers confidently and maintain global business operations with seamless payment integrations worldwide. Shape your future with Yuno! We are orchestrating the best high-performing team! If you’re a Data Engineer specialized in ETLs, you enjoy solving complex problems with code, and you are not afraid of learning new things, we are looking for you. As a Sr. Data Engineer, you will be part of the team delivering the different parts of our production-ready product, while co-designing and implementing an architecture that can scale up with the product and the company.
🟣 Your challenge at Yuno
• Implement any type of extraction (manual, mail, SFTP, and APIs).
• Build solutions for application integrations, task automation, and any relevant data automation using proven design patterns.
• Design and build data processing pipelines for large volumes of data that are performant and scalable.
• Build and maintain the infrastructure required for extraction, loading, transformation, and storage of data from multiple data sources using custom scripts.
• Collaborate with the team to develop and maintain a robust and scalable data infrastructure to support our data needs.
• Implement and enforce data governance policies and best practices to ensure data quality, security, and compliance.
• Manage and optimize data warehousing solutions for efficient storage and retrieval of data.
• Develop and maintain data lake solutions for storing and managing diverse data types.
• Use big data technologies and frameworks to process and analyze large datasets efficiently.
• Work with distributed data systems and technologies to handle high volumes of data.
• Integrate data from multiple sources.
🟣 Skills you need
Minimum Qualifications:
• Proven experience as a Data Engineer or similar role in a data-intensive environment.
• Strong proficiency in Python and SQL.
• Knowledge of data infrastructure design and management.
• Familiarity with data governance principles and practices.
• Experience with ETL processes and tools.
• Proficiency in working with Data Warehouses or Data Lakes.
• Familiarity with big data technologies and distributed data systems.
• Ability to integrate data from multiple sources.
• Knowledge of Spark is a plus.
• Verbal and written English fluency.
🟣 What we offer at Yuno
• Competitive compensation
• Remote work - you can work from everywhere!
• Home office bonus - we offer a one-time allowance to help you create your ideal home office.
• Work equipment
• Stock options
• Health plan, wherever you are
• Flexible days off
• Language, professional, and personal growth courses
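The first bullet above ("implement any type of extraction: manual, mail, SFTP, APIs") is essentially a dispatch problem: one pipeline entry point routing each source type to its own handler. A minimal sketch under assumed source names and config keys (Yuno's real connectors are not public):

```python
from typing import Callable, Dict, List

# Registry mapping a source type to its extraction handler.
EXTRACTORS: Dict[str, Callable[[dict], List[dict]]] = {}

def extractor(source_type: str):
    """Decorator registering a handler for one kind of source."""
    def wrap(fn):
        EXTRACTORS[source_type] = fn
        return fn
    return wrap

@extractor("api")
def extract_api(cfg: dict) -> List[dict]:
    # A real handler would call the HTTP endpoint in cfg["url"] and paginate.
    return [{"source": "api", "endpoint": cfg.get("url", "")}]

@extractor("sftp")
def extract_sftp(cfg: dict) -> List[dict]:
    # A real handler would open an SFTP session (e.g. via paramiko) and pull files.
    return [{"source": "sftp", "path": cfg.get("path", "")}]

def run_extraction(cfg: dict) -> List[dict]:
    """Route a job config to the right handler; unknown types fail loudly."""
    try:
        handler = EXTRACTORS[cfg["type"]]
    except KeyError:
        raise ValueError(f"no extractor registered for {cfg.get('type')!r}")
    return handler(cfg)

rows = run_extraction({"type": "api", "url": "https://example.com/payments"})
print(rows[0]["source"])  # → api
```

The registry pattern keeps each connector isolated, so adding a new source type (e.g. mail) means registering one more handler rather than editing the pipeline core.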
|
2025-06-12 18:38
|
Data & AI Cloud Engineer—GCP (remote CZ)
revolgy |
Prague (Remote)
|
As a Data & AI Cloud Engineer at Revolgy, you will be a key contributor to the design, development, and deployment of data and AI solutions on Google Cloud Platform (GCP). You will leverage your expertise in data engineering, AI, and cloud technologies to help our customers transform their data into actionable insights. This is an excellent opportunity to deepen your technical skills, work on diverse projects, and grow your career within a dynamic and supportive team.
Location 🌍 Remote position, open to candidates in the Czech Republic.
Contract Type 📃 This position is open to candidates seeking employment under a standard employment agreement or engagement as an independent contractor under a business-to-business (B2B) contractual arrangement.
About Revolgy ☁️ Revolgy is a leading multinational company providing digital transformation services through online collaboration tools and cloud infrastructure, operating on Google Cloud Platform and Amazon Web Services. We are a premier partner of Google and Amazon. We serve over 2,000 customers around the globe.
What Will You Do? 🔧 Contribute to the design of data solutions and assist with their implementation on GCP. Develop and deploy AI/ML models using Vertex AI, applying your knowledge of machine learning techniques. Build and optimise efficient data pipelines to ensure smooth data flow and processing. Implement data quality monitoring processes to maintain data accuracy and reliability. Help automate infrastructure and deployment tasks to improve efficiency and scalability. Troubleshoot technical issues related to data pipelines, AI models, and cloud infrastructure. Create clear and concise technical documentation for solutions and processes.
What You Bring 🎯 Demonstrated experience with Google Cloud Platform (GCP) services, including BigQuery, BigQuery ML, Dataflow, and Vertex AI. Working knowledge of data engineering, specifically building and maintaining data pipelines. You are proficient in programming languages such as Python. You are familiar with the fundamentals of AI/ML concepts and tools such as Vertex AI. You understand data governance principles and best practices. You have strong problem-solving skills and the ability to analyze and resolve technical challenges. You possess excellent communication skills and can collaborate effectively within a team. You have knowledge of DevOps concepts, including CI/CD and infrastructure as code.
Why Join Revolgy? 🌎 Fully remote setup & flexible remote-first culture. 🏖️ 5 weeks of paid vacation & sick days. 🏠 Budget for work equipment and energy costs. 📱 Mobile phone & mobile tariff contribution. 💻 Company notebook. 🏋️ Multisport card or Pluxee Flexi Card. 👥 Growth & learning – access to continuous training, certifications, and career development opportunities. 🔥 Challenging environment – work on complex, high-impact cloud projects that push the boundaries of innovation. ✈️ Travel to partner events, company gatherings, and more.
Equal Opportunity Employer 🌈 Diversity and equal opportunity are important to us. We are happy about the interest of all candidates and strive to provide feedback as quickly as possible.
|
2025-06-12 18:02
|
Data & AI Cloud Engineer—GCP (remote Sweden)
revolgy |
Stockholm (Remote)
|
As a Data & AI Cloud Engineer at Revolgy, you will be a key contributor to the design, development, and deployment of data and AI solutions on Google Cloud Platform (GCP). You will leverage your expertise in data engineering, AI, and cloud technologies to help our customers transform their data into actionable insights. This is an excellent opportunity to deepen your technical skills, work on diverse projects, and grow your career within a dynamic and supportive team.
Location 🌍 Remote position, open to candidates in Sweden.
Contract Type 📃 Engagement as an independent contractor under a business-to-business (B2B) contractual arrangement.
About Revolgy ☁️ Revolgy is a leading multinational company providing digital transformation services through online collaboration tools and cloud infrastructure, operating on Google Cloud Platform and Amazon Web Services. We are a premier partner of Google and Amazon. We serve over 2,000 customers around the globe.
What Will You Do? 🔧 Contribute to the design of data solutions and assist with their implementation on GCP. Develop and deploy AI/ML models using Vertex AI, applying your knowledge of machine learning techniques. Build and optimise efficient data pipelines to ensure smooth data flow and processing. Implement data quality monitoring processes to maintain data accuracy and reliability. Help automate infrastructure and deployment tasks to improve efficiency and scalability. Troubleshoot technical issues related to data pipelines, AI models, and cloud infrastructure. Create clear and concise technical documentation for solutions and processes.
What You Bring 🎯 Demonstrated experience with Google Cloud Platform (GCP) services, including BigQuery, BigQuery ML, Dataflow, and Vertex AI. Working knowledge of data engineering, specifically building and maintaining data pipelines. You are proficient in programming languages such as Python. You are familiar with the fundamentals of AI/ML concepts and tools such as Vertex AI. You understand data governance principles and best practices. You have strong problem-solving skills and the ability to analyze and resolve technical challenges. You possess excellent communication skills and can collaborate effectively within a team. You have knowledge of DevOps concepts, including CI/CD and infrastructure as code.
Why Join Revolgy? 🌎 Fully remote setup & flexible remote-first culture. 🚀 Cutting-edge technology stack – work with AI augmentation, automation, and advanced cloud solutions. 👥 Growth & learning – access to continuous training, certifications, and career development opportunities. 🔥 Challenging environment – work on complex, high-impact cloud projects that push the boundaries of innovation. ✈️ Travel to partner events, company gatherings, and more.
Equal Opportunity Employer 🌈 Diversity and equal opportunity are important to us. We are happy about the interest of all candidates and strive to provide feedback as quickly as possible.
|
2025-06-12 18:02
|
Data & AI Cloud Engineer—GCP (remote UK)
revolgy |
London (Remote)
|
As a Data & AI Cloud Engineer at Revolgy, you will be a key contributor to the design, development, and deployment of data and AI solutions on Google Cloud Platform (GCP). You will leverage your expertise in data engineering, AI, and cloud technologies to help our customers transform their data into actionable insights. This is an excellent opportunity to deepen your technical skills, work on diverse projects, and grow your career within a dynamic and supportive team.
Location 🌍 Remote position, open to candidates in the UK.
Contract Type 📃 This position is open to candidates seeking employment under a standard employment agreement or engagement as an independent contractor under a business-to-business (B2B) contractual arrangement.
About Revolgy ☁️ Revolgy is a leading multinational company providing digital transformation services through online collaboration tools and cloud infrastructure, operating on Google Cloud Platform and Amazon Web Services. We are a premier partner of Google and Amazon. We serve over 2,000 customers around the globe.
What Will You Do? 🔧 Contribute to the design of data solutions and assist with their implementation on GCP. Develop and deploy AI/ML models using Vertex AI, applying your knowledge of machine learning techniques. Build and optimise efficient data pipelines to ensure smooth data flow and processing. Implement data quality monitoring processes to maintain data accuracy and reliability. Help automate infrastructure and deployment tasks to improve efficiency and scalability. Troubleshoot technical issues related to data pipelines, AI models, and cloud infrastructure. Create clear and concise technical documentation for solutions and processes.
What You Bring 🎯 Demonstrated experience with Google Cloud Platform (GCP) services, including BigQuery, BigQuery ML, Dataflow, and Vertex AI. Working knowledge of data engineering, specifically building and maintaining data pipelines. You are proficient in programming languages such as Python. You are familiar with the fundamentals of AI/ML concepts and tools such as Vertex AI. You understand data governance principles and best practices. You have strong problem-solving skills and the ability to analyze and resolve technical challenges. You possess excellent communication skills and can collaborate effectively within a team. You have knowledge of DevOps concepts, including CI/CD and infrastructure as code.
Why Join Revolgy? 🌎 Fully remote setup & flexible remote-first culture. 🤲 Contribution to pension plan. 🏖️ 5 weeks of paid vacation & sick days. 🏠 Budget for work equipment and energy costs. 📱 Mobile phone & mobile tariff contribution. 💻 Company notebook. 👥 Growth & learning – access to continuous training, certifications, and career development opportunities. 🔥 Challenging environment – work on complex, high-impact cloud projects that push the boundaries of innovation. ✈️ Travel to partner events, company gatherings, and more.
Equal Opportunity Employer 🌈 Diversity and equal opportunity are important to us. We are happy about the interest of all candidates and strive to provide feedback as quickly as possible.
|
2025-06-12 18:02
|
Data Engineer (Data Science and AI) [REMOTE]
BAE Systems |
Fort Walton Beach, FL
|
Job Description: We are seeking a highly skilled BI Developer with expertise in Artificial Intelligence (AI) and Data Science to join our team. The successful candidate will be responsible for designing, developing, and deploying data visualizations and business intelligence solutions using modern BI tools, while leveraging AI and machine learning techniques to drive insights and decision-making. If you have a passion for data analysis, AI, and data visualization, and are looking for a challenging and rewarding role, we encourage you to apply.
Key Responsibilities: Design, develop, and deploy data visualizations and business intelligence solutions using Power BI, Tableau, or other tools. Work with stakeholders to understand business requirements and develop solutions that meet their needs. Leverage AI and machine learning techniques to analyze and visualize complex data sets. Collaborate with data scientists and analysts to integrate AI and machine learning models into Power BI solutions. Develop and maintain data pipelines and architectures to support data visualization and business intelligence solutions. Ensure data quality, security, and compliance with organizational standards and policies. Stay up-to-date with emerging trends and technologies in AI, data science, and data visualization.
Required Education, Experience & Skills: 3+ years of experience in BI development, with a focus on data visualization and business intelligence. 2+ years of experience in AI and data science, with a focus on machine learning and data analysis. Strong understanding of data modeling, data warehousing, and ETL processes. Experience with Power BI tools, including Power BI Desktop, Power BI Service, and Power BI Embedded. Proficiency in programming languages such as Python, R, or SQL. Strong proficiency in SQL and database design principles. Experience with machine learning frameworks such as scikit-learn, TensorFlow, or PyTorch. Strong understanding of data visualization principles and best practices. Excellent communication and collaboration skills, with the ability to work with stakeholders and technical teams. Bachelor's degree in Computer Science, Data Science, or a related field, plus 4 years of work experience.
Preferred Education, Experience & Skills: Experience with data engineering, data warehousing, and ETL processes. Knowledge of cloud-based data platforms such as AWS or Azure. Experience with data governance and data quality frameworks. Certification in Power BI, AI, or data science, such as Microsoft Certified: Data Analyst Associate or Certified Data Scientist.
Pay Information: Full-Time Salary Range: $77,814 - $132,283. Please note: This range is based on our market pay structures. However, individual salaries are determined by a variety of factors, including but not limited to: business considerations, local market conditions, and internal equity, as well as candidate qualifications such as skills, education, and experience.
Employee Benefits: At BAE Systems, we support our employees in all aspects of their life, including their health and financial well-being. Regular employees scheduled to work 20+ hours per week are offered: health, dental, and vision insurance; health savings accounts; a 401(k) savings plan; disability coverage; and life and accident insurance. We also have an employee assistance program, a legal plan, and other perks, including discounts on things like home, auto, and pet insurance. Our leave programs include paid time off and paid holidays, as well as other types of leave, including paid parental, military, bereavement, and any applicable federal and state sick leave. Employees may participate in the company recognition program to receive monetary or non-monetary recognition awards. Other incentives may be available based on position level and/or job specifics.
About BAE Systems, Inc.: BAE Systems, Inc. is the U.S. subsidiary of BAE Systems plc, an international defense, aerospace, and security company which delivers a full range of products and services for air, land, and naval forces, as well as advanced electronics, security, information technology solutions, and customer support services. Improving the future and protecting lives is an ambitious mission, but it’s what we do at BAE Systems. Working here means using your passion and ingenuity where it counts – defending national security with breakthrough technology, superior products, and intelligence solutions. As you develop the latest technology and defend national security, you will continually hone your skills on a team, making a big impact on a global scale. At BAE Systems, you’ll find a rewarding career that truly makes a difference. This position will be posted for at least 5 calendar days. The posting will remain active until the position is filled or a qualified pool of candidates is identified.
|
2025-06-12 13:19
|
AWS Data Engineer (Associate)
mactores |
Mumbai, MH
|
Mactores is a trusted leader among businesses in providing modern data platform solutions. Since 2008, Mactores has been enabling businesses to accelerate their value through automation by providing end-to-end data solutions that are automated, agile, and secure. We collaborate with customers to strategize, navigate, and accelerate an ideal path forward with a digital transformation via assessments, migration, or modernization.
As an AWS Data Engineer, you are a full-stack data engineer who loves solving business problems. You work with business leads, analysts, and data scientists to understand the business domain, and engage with fellow engineers to build data products that empower better decision-making. You are passionate about the data quality of our business metrics and the flexibility of your solution that scales to respond to broader business questions. If you love to solve problems using your skills, then come join Team Mactores. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.
What will you do? Write efficient code in PySpark and Amazon Glue. Write SQL queries in Amazon Athena and Amazon Redshift. Explore new technologies and learn new techniques to solve business problems creatively. Collaborate with many teams - engineering and business - to build better data products and services. Deliver the projects along with the team collaboratively, and manage updates to customers on time.
What are we looking for? 1 to 3 years of experience in Apache Spark, PySpark, and Amazon Glue. 2+ years of experience in writing ETL jobs using PySpark and SparkSQL. 2+ years of experience in SQL queries and stored procedures. A deep understanding of the DataFrame API and the transformation functions supported by Spark 2.7+.
You will be preferred if you have: Prior experience working on AWS EMR and Apache Airflow. Certifications: AWS Certified Big Data – Specialty, Cloudera Certified Big Data Engineer, or Hortonworks Certified Big Data Engineer. Understanding of DataOps Engineering.
Life at Mactores: We care about creating a culture that makes a real difference in the lives of every Mactorian. Our 10 Core Leadership Principles that honor decision-making, leadership, collaboration, and curiosity drive how we work.
1. Be one step ahead
2. Deliver the best
3. Be bold
4. Pay attention to the detail
5. Enjoy the challenge
6. Be curious and take action
7. Take leadership
8. Own it
9. Deliver value
10. Be collaborative
We would like you to read more details about the work culture on https://mactores.com/careers
The Path to Joining the Mactores Team: At Mactores, our recruitment process is structured around three distinct stages. Pre-Employment Assessment: You will be invited to participate in a series of pre-employment evaluations to assess your technical proficiency and suitability for the role. Managerial Interview: The hiring manager will engage with you in multiple discussions, lasting anywhere from 30 minutes to an hour, to assess your technical skills, hands-on experience, leadership potential, and communication abilities. HR Discussion: During this 30-minute session, you'll have the opportunity to discuss the offer and next steps with a member of the HR team.
At Mactores, we are committed to providing equal opportunities in all of our employment practices, and we do not discriminate based on race, religion, gender, national origin, age, disability, marital status, military status, genetic information, or any other category protected by federal, state, and local laws. This policy extends to all aspects of the employment relationship, including recruitment, compensation, promotions, transfers, disciplinary action, layoff, training, and social and recreational programs. All employment decisions will be made in compliance with these principles.
Note: Please answer as many questions as possible with this application to accelerate the hiring process.
|
2025-06-11 17:56
|
Python Developer - Data Engineering
TalentNeuron |
Germany
|
TalentNeuron is the world's leading provider of labor market analytics, delivering high-fidelity talent data on an unmatched global scale. TalentNeuron delivers actionable talent insight for every region of the world, covering countries that collectively represent more than 90% of the world's GDP. Through deep investments in machine learning and artificial intelligence, our technology platform ingests and normalizes hundreds of millions of structured and unstructured data points each day, delivering critical talent insights in support of workforce planning, strategic skills analysis, location optimization, DEI tactics, and sourcing strategies for local, regional, and global talent. These insights can be delivered to clients via software as a service, data as a service, or fully custom research efforts from our team of expert data scientists and advisors.
Our Core Values
Humanity First: We lead with humanity. We foster empathy, kindness, respect, and inclusiveness in all contexts, and support one another.
Customers at the Core: We engage in meaningful and constant dialogue with clients to deeply understand and anticipate their needs, and consistently deliver exceptional value. We operate with integrity and do what’s right for our clients, no matter how difficult.
Diverse Minds, One Team: We are curious and seek different perspectives and find common ground, but we act, succeed, fail, and celebrate as one. We openly collaborate, communicate, debate, and compromise across groups.
Pioneering Innovation: We take risks, fail fast, and learn from our experiments. We champion change and evolution without fear, and inspire a culture where innovation thrives.
Resilient Perseverance: We think creatively and pragmatically to find solutions, remove barriers, and overcome obstacles. We are equally accountable for the results of the whole team and for our individual commitments, and we find a way to get things done by embracing a “yes, we can” attitude.
We are seeking a Senior Python Developer – Data Engineering with a strong focus on data engineering to join our team. The ideal candidate will have experience designing, building, and maintaining data pipelines, ETL processes, and data warehouses. They will also have excellent coding skills in Python and experience with REST API design and development.
What You Will Do: Design and build scalable, robust, and maintainable data pipelines and ETL processes. Work on our HR Analytics Platform to parse job postings (and other online data) and implement analytics to feed into our SaaS platforms and Market Intelligence projects. Develop and maintain data warehouses and databases using technologies such as Redshift, PostgreSQL, or similar. Work closely with data scientists and analysts to understand their requirements and ensure data quality and consistency. Collaborate with other developers and engineers to build efficient, reliable, and performant systems. Write clean, efficient, and well-documented code in Python. Develop and maintain RESTful APIs to provide access to data and services. Troubleshoot and resolve data-related issues as they arise. Stay up-to-date with the latest technologies and trends in data engineering and REST API development.
Environment: Python, Ray, Kafka, Redshift, PostgreSQL, FastAPI, JupyterHub.
What You Will Bring: Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field. At least 3 years of professional experience as a Python developer focused on data engineering. Experience designing and building data pipelines and ETL processes using Python libraries such as pandas, NumPy, and SciPy. Experience working with databases and data warehousing concepts, including data modeling, schema design, and SQL. Strong coding skills in Python, with a focus on code quality, maintainability, and performance. Experience designing, developing, and maintaining RESTful APIs using Python frameworks like FastAPI, Flask, or Django. Strong problem-solving skills and the ability to work independently and in a team environment. Excellent communication and collaboration skills, with the ability to work effectively with technical and non-technical stakeholders.
If you’re an engineer who’s ready to take on a challenging yet rewarding role, we encourage you to apply. At TalentNeuron, you’ll have the opportunity to work with top-tier professionals and make a significant impact. We believe in nurturing talent, fostering a collaborative environment, and recognizing the achievements of our team. Join us, exceed your goals, and be a part of our success story. Apply now, and let’s shape the future of TalentNeuron together.
We're an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran, or disability status.
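The core task this posting describes (parse job postings and derive analytics from them) can be sketched in a few lines of plain Python. The skill vocabulary below is an assumption invented for illustration; TalentNeuron's actual taxonomy and its Ray/Kafka pipeline are far larger:

```python
import re
from collections import Counter
from typing import Iterable

# Hypothetical skill vocabulary; a production taxonomy would hold thousands of terms.
SKILLS = ["python", "sql", "kafka", "redshift", "fastapi", "postgresql"]
SKILL_RE = re.compile(r"\b(" + "|".join(SKILLS) + r")\b", re.IGNORECASE)

def extract_skills(posting: str) -> set:
    """Return the known skills mentioned in one job posting."""
    return {m.lower() for m in SKILL_RE.findall(posting)}

def skill_demand(postings: Iterable[str]) -> Counter:
    """Count in how many postings each skill appears (a simple demand signal)."""
    counts = Counter()
    for text in postings:
        counts.update(extract_skills(text))
    return counts

demand = skill_demand([
    "Senior engineer: Python, SQL, and Kafka required.",
    "Data role: strong SQL, Redshift a plus.",
])
print(demand["sql"])  # → 2
```

A real parser would normalize synonyms ("Postgres" vs "PostgreSQL") and handle multi-word skills, but counting vocabulary hits per posting is the basic shape of the analytics described.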
|
2025-06-11 17:23
|
Data Engineer (Python, GCP, AI) - Remote across LATAM
Zarego |
Mexico City, Mexico
|
Role: Data Engineer. English level: B2 or higher. Remote. Location: Anywhere in LATAM.
At Zarego, we are a team of passionate developers dedicated to creating high-quality software solutions. We focus on solving complex problems and delivering innovative results. We are currently seeking an experienced Data Engineer to join our team.
What we're looking for: 4+ years of experience working professionally as a Data Engineer. Strong proficiency in Python. Solid experience with GCP services. English proficiency at B2 level or above. Location: LATAM.
Nice to have: Experience with AI. Bachelor's degree in Computer Science, Software Engineering, or a related technical field.
If you identify with this role, don't hesitate to apply. We'd love to tell you more about what we're working on.
|
2025-06-11 01:27
|