Remote Airflow Jobs

1779 remote jobs

Job Title | Location | Description | Posted
Senior Data Analyst
tiney
Location: UK-based / Remote-first

About tiney - At tiney we're on a mission to unlock the potential of every child by revolutionising early years education. We believe all children deserve access to high-quality, nurturing, play-based learning — and that the best way to make this happen is by empowering a new generation of brilliant childminders. Using technology, community and creativity, we're building a network of early educators who are trained, supported and backed by a system designed to help them (and the children they care for) truly thrive. Founded in 2018 by an experienced team with backgrounds at Teach First, Teach For All, graze.com and 383, tiney is a software-driven childcare company with ~100 employees. We've raised over £12m from top European VCs and we're already a top 5 childcare provider in the UK by childcare places — processing over £1m in payments every month. In 2024 we were recognised as the 86th fastest-growing start-up in the UK & Ireland, and we're proud to be shortlisted for the Global EdTech Prize. Our ambition? To move into the top 3, expand internationally and build the world's first home-based childcare operating system.

Our Values
- Play is a superpower – Curiosity, joy and creativity fuel innovation.
- Rest is rocket fuel – We look after ourselves and each other to sustain impact.
- Asking for help shows wisdom – Vulnerability and openness are strengths.
- Children at the core – Our policies and decisions support the whole person.
- Default to transparency – Trust comes from openness and honesty.

The Role
We're looking for a Senior Data Analyst to join our remote-first UK team. You'll report to the CTO and work closely with them and the executive team to architect, drive and amplify tiney's company-wide data strategy. This role is central to defining, building and scaling a modern data culture — enabling every team to make high-impact, insight-led decisions as we expand our platform and community. You'll act as the lead authority on data: hands-on in the stack, strategic in vision and a trusted partner to stakeholders across the business. Location: Remote-first (UK-based) with occasional in-person off-sites and team days in London.

What you'll be doing
- Data Strategy & Leadership (15%): Establish and evolve our data vision, governance and infrastructure to embed data as a foundational asset for every function at tiney.
- End-to-End Data Architecture (25%): Own the full analytics stack — Dataform in BigQuery, Stitch & Census pipelines and Metabase dashboards — ensuring scalability, reliability and adoption.
- Advanced Analytics & Experimentation (25%): Lead complex analysis, modelling, A/B tests and deep dives into business-critical operational and product data; present recommendations at executive level.
- Executive Stakeholder Partnership (20%): Act as a data advisor to the C-suite and business leads, translating complex analyses into actionable strategies for growth and performance.
- Team Enablement & Data Literacy (10%): Coach and mentor teammates across the business in data best practices, build self-serve tools and foster an organisation-wide culture of evidence-based decision making.
- Continuous Improvement (5%): Audit, evolve and innovate our analytics tooling and processes as tiney grows, ensuring best-in-class data operations.

What we're looking for
We value potential and attitude as much as experience — and for this senior role we also need a deep technical and strategic track record.
Must-haves:
- Significant experience (typically 5+ years) designing, building and scaling analytics infrastructure and reporting for high-growth companies or complex organisations
- Advanced SQL and data modelling expertise, including hands-on work with BigQuery, Redshift or Snowflake and frameworks like dbt or Dataform
- Demonstrable leadership in designing and measuring A/B or multivariate business/product experiments, with a track record of translating results into strategy
- Proven ability to influence executive leadership with data-backed insight, drive adoption of new data tools/processes and establish best-practice governance
- Experience architecting and managing modern ETL/reverse ETL pipelines (e.g. Stitch, Census, Airflow, Fivetran)
- Mastery of dashboarding/BI platforms (Metabase, Looker, Tableau, Power BI or similar) and a passion for impactful, intuitive reporting
- Outstanding communicator with a history of mentoring analysts and coaching non-technical teams on data interpretation
- Commercial and entrepreneurial mindset; highly self-directed and motivated by business impact
- UK-based and able to attend periodic sessions in London

Nice-to-haves:
- Experience in the early years, EdTech or mission-driven sectors
- Familiarity with startup or scale-up environments
- Hands-on experience growing a data function as the first or only analyst/lead
- Exposure to compliance and data privacy in sensitive or regulated domains

What success looks like
- 1 month in – You've built trust, mapped our data estate and scoped your roadmap alongside the execs
- 3 months in – You've architected & delivered the first strategic upgrades to core reporting and enabled a key function to hit a new data milestone
- 6 months in – You're leading company-level experiments and are the go-to for insight, data tooling and governance
- 12 months in – You've scaled tiney's data capability, shaped strategic outcomes and cemented yourself as a foundational leader in our growth story

Who you'll be working with
You'll join a close-knit, cross-functional team of educators, technologists and operators with experience from Teach First, Babylon, Monzo and Google. Together we:
- Balance focus and fun, serious impact and serious play
- Support each other through feedback and regular 1:1s
- Celebrate wins (with gifs, ukuleles and sometimes cake)

Compensation & Benefits
- Company stock options
- 28 days paid leave (including 4 days during our Christmas closedown)
- At-home workstation set-up
- Life and long-term sickness insurance
- Up to £5k of childcare contributions for working parents
- A paid reflection day during your first month
- A day off for your birthday
- A paid sabbatical after 5 years
- Access to Self Space therapy sessions
- Whole-team social activities

Interview Process
We like to keep things friendly, thoughtful and efficient. Typically:
- Initial Call – A short video chat with someone from our team
- Take-home task – You'll prepare something relevant to the role
- Technical Interview – A deep dive into your task and experience
- Team Chat – Meet your future teammates
- Offer – We move fast when we know we've found the right person!

Our Commitment
We're proud to be an equal opportunity employer. We celebrate diversity, foster inclusion and are committed to building a team that reflects the children and communities we serve. If our mission excites you but you're not sure you meet every requirement — please apply anyway.
23 min(s) ago
Data Engineer
Pantheon Data
Leesburg, VA
"Company Overview Pantheon Data (a Kenific Holding company) is a private small business based in the Washington DC area. Pantheon Data was founded in 2011 initially providing acquisition and supply chain management services to the US Coast Guard. Our service offerings have grown in the past ten years including infrastructure resiliency contact center operations information technology software engineering program management strategic communications engineering and cybersecurity. We have also grown our customer base to include commercial clients. The company has used this experience to expand our service offerings to other agencies within the Department of Homeland Security (DHS) the Department of Defense (DoD) and other Federal Civilian Agencies. Position Overview Pantheon Data is seeking a Data Engineer with a background in architecture of data-centric technical solutions who is technically proficient and eager to learn. A successful candidate should have a history of hands-on experience with data integration and/or data migration and ideally with data warehousing ETL pipelines. This team will be supporting a ""cloud-first"" migration for various legacy databases and applications. They will be able to communicate clearly and effectively with other engineers as well as less technical team members (e.g. project managers). Responsibilities Data profiling on RDBMS databases (using T-SQL) to inform mapping rules and validate data requirements. Work within a growing engineering team to build and support technical solutions across customers and projects. Collaborate with team leads/members to design and develop solutions that exceed customer expectations. Required Skills and Experience A Bachelor of Science (BS) degree in Information Technology Cybersecurity Data Science Information Systems or Computer Science from an ABET accredited or CAE designated institution. Security+ certification 5 years professional hands-on experience in a data engineering role. Expert-level SQL relational databases. Experience with ETL or data migration. Excellent verbal communication skills with the ability to interact clearly and succinctly in peer review and design sessions. Willing to learn/contribute meaningfully across technical areas of our projects. Working knowledge of CI/CD methodology and related tools for database development. Working knowledge of Agile Scrum methodology. Detail-oriented self-motivated and organized Ability to work effectively remotely in cross-functional teams. Ability to meet deadlines and produce quality work. Proficient in Microsoft Suite software including Outlook Word Excel SharePoint and PowerPoint. Preferred Skills and Experience Strong experience with Python. Experience writing ETL source-target documentation. Experience with legacy ETL tools such as Informatica and SSIS. Experience with cloud-based ETL tools such as DBT and Airflow. AWS (Amazon Web Services) and/or Microsoft Azure. Certification(s) a plus. Experience with cloud-hosted databases either AWS or Azure. Experience using big data technologies (Hadoop Hive HBase Spark EMR etc.) Experience with AI/ML a plus. Clearance Requirements U.S. Citizenship with the ability to obtain and maintain a DoD Secret clearance. Work Location: United States - Remote Our company prioritizes the benefits of flexibility and collaboration whether that happens in person or remotely. If the position is remote or hybrid you may periodically work from a Pantheon Data office location or client site. 
If this position is assigned to a Pantheon Data office location or client site, you'll work with colleagues and clients in person as needed for specific client requirements.

Compensation
The salary range for this position is $100,000 - $132,000. This is not, however, a guarantee of compensation or salary. Rather, salary will be set based on experience, geographic location and possibly contractual requirements, and could fall outside of this range.

Benefits Overview
We are always looking for good people! Pantheon Data is committed to providing its employees with competitive salaries and benefits in order to increase employee satisfaction and productivity. In addition to our benefits, we also offer SmartBenefits through the Washington Metro Area Transportation Authority, where you specify an amount of your pre-tax wages to be paid directly to your SmarTrip account. In some cases, tuition assistance may be available for continuing education expenses and certifications related to the position. Additional details may be found at https://pantheon-data.com/careers/

Pantheon Data Important Information
All qualified applicants will be considered for employment without regard to disability, status as a protected veteran or any other status protected by applicable federal, state, local or international law. As part of the application process, you are expected to be on camera during interviews and assessments. We reserve the right to take your picture to verify your identity and prevent fraud. If you require reasonable accommodation in completing this application, interviewing, completing any pre-employment testing or otherwise participating in the employee selection process, please direct your inquiries to our Talent Team at Recruiting@pantheon-data.com or by phone at (571) 363-4020. This company uses E-Verify to confirm each employee's work authorization. For more information, see the E-Verify Participation Poster.
40 min(s) ago
Data Science Engineer I - US
Rackspace
Remote
Job Summary: We are expanding our team of motivated technologists to build AI and ML solutions for our customer. We are specifically looking for an ML Engineer who is passionate about helping customers build Data Science and AI/ML solutions at scale. Your insight and expertise will help our delivery teams build ML solutions across Data Science, Machine Learning, Generative AI, databases, security and automation. In addition, you will work with mid-tier technologies that include application integration, security and much more! This position is ideal for candidates with a strong foundation in machine learning principles, data processing and software engineering. You will support the design, development and deployment of ML models and pipelines, as well as assist in ingesting and transforming data for machine learning use cases.

Work Location: Remote

### Key Responsibilities:
+ Assist in developing, training and validating machine learning models for real-world applications (e.g. classification, prediction and recommendation systems).
+ Build and maintain data ingestion pipelines from structured and unstructured sources using Python and SQL-based tools.
+ Perform data cleaning, normalization and feature engineering to prepare high-quality datasets for ML training and evaluation.
+ Collaborate on ML projects such as outcome prediction systems, image classification models and intelligent search interfaces.
+ Contribute to building interactive applications by integrating ML models into frontend/backend systems (e.g. React, Django, REST APIs).
+ Participate in MLOps workflows, including model versioning, basic deployment tasks and experiment tracking.
+ Document data flows, ML experiments and application logic consistently.
+ Attend Agile meetings and collaborate with peers through code reviews and sprint activities.

### Required Qualifications:
+ Bachelor's degree in Computer Science, Data Science, Statistics, Engineering or a related field.
+ Experience in machine learning, data engineering or software development roles (internships or academic projects acceptable).
+ Solid understanding of supervised learning, classification and data preprocessing techniques.
+ Experience with data engineering concepts, including SQL, PostgreSQL and REST API integration.
+ Basic knowledge of data ingestion and transformation concepts.
+ Proficiency in Python and common ML libraries (e.g. scikit-learn, pandas, NumPy, TensorFlow or PyTorch).
+ Familiarity with full-stack or web-based ML applications (e.g. React, Django or Android Studio projects).
+ Familiarity with SQL and data wrangling tools.
+ Experience with version control tools like Git.
+ Strong problem-solving skills and attention to detail.
+ Effective communication and documentation skills.
+ Enthusiasm for learning new tools and growing within a collaborative team environment.

### Preferred Qualifications:
+ Exposure to cloud platforms such as AWS, GCP or Azure.
+ Experience with Python, Spark, Airflow or data pipeline frameworks.
+ Understanding of basic data architecture concepts (e.g. data lakes, warehouses).
+ Participation in ML/DS projects, hackathons or Kaggle competitions.

### Sponsorship
+ This role is not sponsorship eligible.
+ Candidates need to be legally allowed to work in the US for any employer.

The following information is required by pay transparency legislation in the following states: CA, CO, HI, NY and WA. This information applies only to individuals working in these states.
The anticipated pay range for Colorado is $69,900 - $102,520. The anticipated starting pay range for California, New York City and Washington is $81,500 - $119,460. Based on eligibility, compensation for the role may include variable compensation in the form of bonus, commissions or other discretionary payments. These discretionary payments are based on company and/or individual performance and may change at any time. Actual compensation is influenced by a wide array of factors, including but not limited to skill set, level of experience, licenses and certifications, and specific work location. Information on benefits offered is here.

About Rackspace Technology
We are the multicloud solutions experts. We combine our expertise with the world's leading technologies — across applications, data and security — to deliver end-to-end solutions. We have a proven record of advising customers based on their business challenges, designing solutions that scale, building and managing those solutions, and optimizing returns into the future. Named a best place to work year after year according to Fortune, Forbes and Glassdoor, we attract and develop world-class talent. Join us on our mission to embrace technology, empower customers and deliver the future.

More on Rackspace Technology
Though we're all different, Rackers thrive through our connection to a central goal: to be a valued member of a winning team on an inspiring mission. We bring our whole selves to work every day. And we embrace the notion that unique perspectives fuel innovation and enable us to best serve our customers and communities around the globe. We welcome you to apply today and want you to know that we are committed to offering equal employment opportunity without regard to age, color, disability, gender reassignment or identity or expression, genetic information, marital or civil partner status, pregnancy or maternity status, military or veteran status, nationality, ethnic or national origin, race, religion or belief, sexual orientation or any legally protected characteristic. If you have a disability or special need that requires accommodation, please let us know. #LI-RL1 #US-Remote
43 min(s) ago
AI/NLP Engineer
LeoTech
Irvine, CA
At LeoTech we are passionate about building software that solves real-world problems in the Public Safety sector. Our software has been used to help fight continuing criminal enterprises and drug trafficking organizations, identify financial fraud, disrupt sex and human trafficking rings, and focus on mental health matters, to name a few. As an AI/NLP Engineer on our Data Science team, you will be at the forefront of leveraging Large Language Models (LLMs) and cutting-edge AI techniques to create transformative solutions for public safety and intelligence workflows. You will apply your expertise in LLMs, Retrieval-Augmented Generation (RAG), semantic search, Agentic AI, GraphRAG and other advanced AI solutions to develop, enhance and deploy robust features that enable real-time decision-making for our end users. You will work closely with product, engineering and data science teams to translate real-world problems into scalable, production-grade solutions. This is an individual contributor (IC) role that emphasizes technical depth, experimentation and hands-on engineering. You will participate in all phases of the AI solution lifecycle, from architecture and design through prototyping, implementation, evaluation, productionization and continuous improvement.

### Core Responsibilities
- Design, build and optimize AI-powered solutions using LLMs, RAG pipelines, semantic search, GraphRAG and Agentic AI architectures.
- Implement and experiment with the latest advancements in large-scale language modeling, including prompt engineering, model fine-tuning, evaluation and monitoring.
- Collaborate with product, backend and data engineering teams to define requirements, break down complex problems and deliver high-impact features aligned with business objectives.
- Inform robust data ingestion and retrieval pipelines that power real-time and batch AI applications using open-source and proprietary tools.
- Integrate external data sources (e.g. knowledge graphs, internal databases, third-party APIs) to enhance the context-awareness and capabilities of LLM-based workflows.
- Evaluate and implement best practices for prompt design, model alignment, safety and guardrails for responsible AI deployment.
- Stay on top of emerging AI research and contribute to internal knowledge-sharing, tech talks and proof-of-concept projects.
- Author clean, well-documented and testable code; participate in peer code reviews and engineering design discussions.
- Proactively identify bottlenecks and propose solutions to improve system scalability, efficiency and reliability.

### What We Value
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science or a related field.
- 5+ years of hands-on experience in applied AI, NLP or ML engineering (with at least 2 years working directly with LLMs, RAG, semantic search and Agentic AI).
- Deep familiarity with LLMs (e.g. OpenAI, Claude, Gemini), prompt engineering and responsible deployment in production settings.
- Experience designing, building and optimizing RAG pipelines, semantic search, vector databases (e.g. ElasticSearch, Pinecone) and Agentic or multi-agent AI workflows in large-scale production setups. Exposure to MCP and A2A protocols is a plus. Exposure to GraphRAG or graph-based knowledge retrieval techniques is a strong plus.
- Strong proficiency with modern ML frameworks and libraries (e.g. LangChain, LlamaIndex, PyTorch, HuggingFace Transformers).
- Ability to design APIs and scalable backend services, with hands-on experience in Python.
- Experience building, deploying and monitoring AI/ML workloads in cloud environments (AWS, Azure) using services like AWS SageMaker, AWS Bedrock, AzureAI, etc. Experience with tools to load-balance different LLM providers is a plus.
- Familiarity with MLOps practices: CI/CD for AI, model monitoring, data versioning and continuous integration.
- Demonstrated ability to work with large, complex datasets, perform data cleaning and feature engineering, and develop scalable data pipelines.
- Excellent problem-solving, collaboration and communication skills; able to work effectively across remote and distributed teams.
- Proven record of shipping robust, high-impact AI solutions, ideally in fast-paced or regulated environments.

### Technologies We Use
- Cloud & AI Platforms: AWS (Bedrock, SageMaker, Lambda), AzureAI, Pinecone, ElasticCloud, Imply Polaris.
- LLMs & NLP: HuggingFace, OpenAI API, LangChain, LlamaIndex, Cohere, Anthropic.
- Backend: Python (primary), Elixir (other teams).
- Data Infrastructure: ElasticSearch, Pinecone, Weaviate, Apache Kafka, Airflow.
- Frontend: TypeScript, React.
- DevOps & Automation: Terraform, EKS, GitHub Actions, CodePipeline, ArgoCD.
- Monitoring & Metrics: Grafana (metrics, dashboards, alerting), Langfuse (Agentic AI observability, prompt management).
- Testing: Playwright for end-to-end test automation.
- Other Tools: Mix of open-source and proprietary frameworks tailored to complex real-world problems.

### What You Can Expect
- Enjoy great team camaraderie, whether at our Irvine office or working remotely.
- Thrive on the fast pace and challenging problems to solve.
- Modern technologies and tools.
- Continuous learning environment.
- Opportunity to communicate and work with people of all technical levels in a team environment.
- Grow as you are given feedback and incorporate it into your work.
- Be part of a self-managing team that enjoys support and direction when required.
- 3 weeks of paid vacation – out the gate!!
- Competitive salary.
- Generous medical, dental and vision plans.
- Sick and paid holidays are offered.
- Work with talented and collaborative co-workers.

LeoTech is an equal opportunity employer and does not discriminate on the basis of any legally protected status.
43 min(s) ago
Full Stack Software Engineer - Help Experience
New Relic
Portland, OR
We are a global team of innovators and pioneers dedicated to shaping the future of observability. At New Relic we build an intelligent platform that empowers companies to thrive in an AI-first world by giving them unparalleled insight into their complex systems. As we continue to expand our global footprint, we're looking for passionate people to join our mission. If you're ready to help the world's best companies optimize their digital applications, we invite you to explore a career with us!

Your opportunity
Are you an experienced full-stack engineer who's passionate about building tools that help customers love a product? At New Relic, our Help Experience team is an innovation team that builds cutting-edge tools to help customers troubleshoot and resolve issues with the power of AI. We're seeking an engineer to join us in tackling challenging problems in an engaging, collaborative environment. This is a unique opportunity to own the full stack of our diagnostic tools. You'll be working with a modern tech stack — TypeScript and React on the front end, and a backend primarily built with TypeScript and Go, with some services in Python. We are a highly collaborative team that works together on solving problems, whether in person or over Zoom. We believe in building a team that cultivates and celebrates diversity, and we are committed to equity and inclusion in everything we do.

What you'll do
- Build and improve user interface code with TypeScript and React.
- Develop new features and maintain existing functionality for our diagnostic tools, primarily in Go.
- Work on properties owned by the Help Experience team, including coding in Python and integrating with third-party APIs.
- Collaborate with other engineering teams to develop platforms that enhance our customer service experience.
- Apply a customer-focused mindset to every problem, measuring your success by the impact your work has on our customers' experience with New Relic.
- Share responsibility for owning and operating our AWS environments.
- Prototype and drive new feature development, partnering with internal users and other engineering teams to bring ideas to life.
- Participate in an on-call rotation, ensuring reliability, scalability and simplicity in everything you build to maintain a healthy work-life balance.

This role requires
- Fluency in JavaScript and TypeScript, with hands-on experience delivering production-ready front ends using React.
- At least 2 years of experience in backend services and API development in at least one of the following languages: Go, Python or Java.
- Database fluency, including strong knowledge of SQL, data structures, normalization, indexing and data hygiene.
- Experience with AWS cloud deployments and current build tooling.
- Fluency with Git and a solid understanding of how to maintain a development environment.
- Proven dedication to code quality, including experience writing comprehensive unit and integration tests.
- Strong communication and collaboration skills, with the ability to be bold in your ideas while also being open to feedback and acknowledging your strengths and weaknesses.
- A high level of emotional intelligence, demonstrated through an ability to communicate clearly, empathize with team members and resolve conflicts constructively.

Bonus points if you have
- Familiarity with business systems such as Salesforce.
- Familiarity with Airflow for workflow management.
- Experience working with vector DBs, LLMs, RAG, MCP and machine learning in general.
- Initiative to drive and amplify cross-team solutions.
- Experience with containers.
- Experience building services or applications where performance is a key consideration.
- Windows .NET build experience (we do a small amount of Windows-related work).

Please note that visa sponsorship is not available for this position. #LI-GK1 #LI-Remote

The pay range below represents a reasonable estimate of the salary for the listed position. This role is eligible for a corporate bonus plan. Pay within this range varies by work location and may also depend on job-related factors such as an applicant's skills, qualifications and experience. New Relic provides a variety of benefits for this role, including healthcare, dental, vision, parental leave and planning, mental health benefits, a 401(k) plan and match, flex time-off, 11 paid holidays, volunteer time-off and other competitive benefits designed to improve the lives of our employees.

Estimated Base Pay Range: $106,000 - $133,000 USD

Fostering a diverse, welcoming and inclusive environment is important to us. We work hard to make everyone feel comfortable bringing their best, most authentic selves to work every day. We celebrate our talented Relics' different backgrounds and abilities, and recognize the different paths they took to reach us – including nontraditional ones. Their experiences and perspectives inspire us to make our products and company the best they can be. We're looking for people who feel connected to our mission and values, not just candidates who check off all the boxes. If you require a reasonable accommodation to complete any part of the application or recruiting process, please reach out to resume@newrelic.com. We believe in empowering all Relics to achieve professional and business success through a flexible workforce model. This model allows us to work in a variety of workplaces that best support our success, including fully office-based, fully remote or hybrid.

Our hiring process
In compliance with applicable law, all persons hired will be required to verify identity and eligibility to work and to complete employment eligibility verification. Note: our stewardship of the data of thousands of customers means that a criminal background check is required to join New Relic. We will consider qualified applicants with arrest and conviction records based on individual circumstances and in accordance with applicable law, including but not limited to the San Francisco Fair Chance Ordinance. Headhunters and recruitment agencies may not submit resumes/CVs through this website or directly to managers. New Relic does not accept unsolicited headhunter and agency resumes and will not pay fees to any third-party agency or company that does not have a signed agreement with New Relic. New Relic develops and distributes encryption software and technology that complies with U.S. export controls and licensing requirements. Certain New Relic roles require candidates to pass an export compliance assessment as a condition of employment in any global location. If relevant, we will provide more information later in the application process. Candidates are evaluated based on qualifications, regardless of race, religion, ethnicity, national origin, sex, sexual orientation, gender expression or identity, age, disability, neurodiversity, veteran or marital status, political viewpoint or other legally protected characteristics. Review our Applicant Privacy Notice at https://newrelic.com/termsandconditions/applicant-privacy-policy
44 min(s) ago
Backend Engineer (Golang + Kafka)
Fitnext
Remote Argentina
Backend Engineer (Golang + Kafka)
Location: Remote (LATAM preferred)
Experience: 4+ years
Level: Mid-level

### About the Opportunity
We are looking for a Backend Engineer to join a high-impact Ads organization team. You will help design and build scalable backend systems powering ad safety, verification and review processes. This role involves working with distributed systems, data pipelines and modern cloud-native tools.

### Team Focus
- Ad Reviews: Develop automated systems to review ads before they go live.
- Brand Safety Signals: Build scalable classification systems to ensure safe ad placements.
- Verification: Integrate with trusted partners to enhance advertiser confidence and unlock revenue growth.

### Tech Stack You'll Work With
- Languages: Go (primary), Python, Scala
- Frameworks: Spark, Thrift, Kafka
- Data Stores: Postgres, BigQuery, Redis, Druid
- Tools: Kubernetes, Airflow, Docker

### Key Responsibilities
- Work with product managers to design and implement Ads-related backend products.
- Improve operational stability by enhancing code quality, observability and monitoring.
- Deliver dashboards, metrics and data visualizations to track performance.
- Build extensible backend components aligned with product goals.
- Collaborate with senior engineers on the design of complex solutions while independently driving implementation.

### Requirements
- 4+ years of experience as a backend engineer.
- Strong coding skills in Golang (primary).
- Hands-on experience with Kafka for messaging and distributed systems.
- Solid understanding of APIs, data stores and cloud-native environments.
- Comfortable working in fast-paced, agile environments.
- Strong problem-solving and communication skills.

Nice to Have:
- Experience with Python or Scala.
- Knowledge of Spark, Airflow or container orchestration with Kubernetes.

### Hiring Process
1. Live Coding Round (60 min) – Focused on Golang + Kafka.
2. Introductory Chat (15 min).
3. Interview with Hiring Manager (45 min).
4. Client Technical Round.

### What We Offer
- 100% remote, international role.
- Work with cutting-edge technologies in distributed systems and data engineering.
- Collaborative, multicultural and fast-paced environment.
4 hour(s) ago
Customer Reliability Engineer, Infrastructure (Atlantic)
Astronomer
Remote United States
"Location Remote (United States) Employment Type Full time Location Type Remote Department Customer Astronomer empowers data teams to bring mission-critical software analytics and AI to life and is the company behind Astro the industry-leading unified DataOps platform powered by Apache Airflow. Astro accelerates building reliable data products that unlock insights unleash AI value and powers data-driven s. Trusted by more than 700 of the world's leading enterprises Astronomer lets businesses do more with their data. To learn more visit www.astronomer.io. Your background may be unconventional as long as you have the essential qualifications we encourage you to apply. While having ""bonus"" qualifications makes for a strong candidate Astronomer values diverse experiences. Many of us at Astronomer haven't followed traditional career paths and we welcome it if yours hasn't either. About this role: The Astronomer Customer Reliability Engineering (CRE) team is responsible for the success of our customers' usage of our managed Airflow service. The CREs are responsible for operating monitoring and maintaining the platform to ensure availability predictability and reliable operations. As an infrastructure specialist within the team you will learn to become an expert on the reliability of Kubernetes and the underlying cloud infrastructure on all 3 public clouds (AWS Azure and GCP). Our CRE team ensures production environments are available predictable and reliable for our customers. You will create strong relationships with customers and help them achieve their reliability goals. When you learn a new piece of technology are you aiming not just for getting started but becoming the expert? Do you listen to the plumber when they tell you what was wrong with the pipes? Do you know how your router works? Are you the kind of person who takes an MIT Opencourseware course and actually finishes it? Then this role could be for you. This position includes a requirement to work from 9AM-3PM Eastern US Monday to Friday. Your remaining work time is flexible. What you get to do: Learn and build expertise across several software engineering disciplines including: + Kubernetes + Cloud engineering + Cloud networking Gain exposure to the big picture learn about product engineering customer relationship management and more. Spend up to 20% of your time on side projects that contribute to Astronomer’s overall success such as contributing to the open-source Airflow repository or developing Astronomer’s internal monitoring and alerting systems built on Airflow. Work on a modern sophisticated cloud-native product that customers use to connect to dozens of other systems. Gain depth and breadth of learning! Work directly with our customers’ data engineers system admins DevOps teams and management. Provide feedback from your experience that can shape the direction of Astronomer’s products Own the customer experience working directly with customers to prioritize and solve issues and meet SLAs. Participate remotely within a fully distributed team. Approximately 2-4 in-person events per year. Help maintain 24x7 coverage through a specified 6-hour pager period during your work day. Participate in paid on-call rotation for weekend coverage. 
What you bring to the role:
- Motivation to learn
- Commitment to excellence
- Problem-solving and troubleshooting abilities
- Willingness to identify and own problems through the full lifecycle, from vague problem to delivered solution
- Excellent written and verbal communication for connecting with our customers over our ticketing system and through Zoom
- Demonstrable Linux familiarity
- 4 years of professional experience
- Experience with Kubernetes/Docker/Containers
- Experience with any major cloud provider (AWS, GCP, Azure)

Bonus points if you have:
- Previous experience working directly with customers (internal or external)
- Experience with DevOps
- Contributions to open-source projects
- Experience with Splunk or Prometheus

The salary for this role is $140,000-$150,000 depending on experience level, along with an equity component. #LI-Remote

At Astronomer we value diversity. We are an equal opportunity employer: we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status or disability status. Astronomer is a remote-first company.
4 hour(s) ago
Staff Data Engineer
customerio
Americas Remote
About Customer.io
Over 7,500 companies — from scrappy startups to global brands — use our platform to send billions of emails, push notifications, in-app messages and SMS every day. Customer.io powers automated communication that people actually want to receive. We help teams send smarter, more relevant messages using real-time behavioral data.

Hi there! We're looking for a Staff Data Engineer to take ownership of the data models and infrastructure that power analytics across our company. This is our first dedicated data engineering hire — a high-impact opportunity to shape our semantic layer, streamline our modern data stack and unlock self-service analytics for the entire business. You'll be the go-to person for building a high-trust, governed data foundation that enables everyone (from executives to product teams) to make smarter, faster decisions.

What we value
- High agency, high impact — you're energized by building and improving, not just maintaining the status quo
- Collaboration — you thrive on partnering with analysts, product teams and data scientists to bring innovative ideas into reality
- Pragmatism — you balance speed with scalability, building solutions that work today and scale over the long haul
- Continuous improvement — you're always on the lookout for ways to make models cleaner, pipelines more efficient and governance stronger
- Curiosity — you want to learn, experiment and evolve the data stack as the company grows and our products evolve
- AI-ready data — you build pipelines and models that don't just serve today's reporting needs but are structured and governed to support machine learning and AI use cases tomorrow

What you'll do
- Own and evolve our semantic layer: design, document and optimize dbt models that drive business KPIs and enable self-service analytics
- Administer and improve our data stack (Snowflake, dbt, Sigma, Stitch, Fivetran) — ensuring reliability, scalability and best practices
- Partner with analytics and product teams to deliver trusted, AI-ready data that supports both internal decision-making and new product features
- Lead improvements in efficiency and governance: streamline dbt runtimes, refactor inefficient SQL, reduce Snowflake costs, and implement RBAC and documentation standards
- Evaluate and shape the future of our orchestration and data infrastructure, including opportunities for front-end data apps (e.g. Streamlit)

What we're looking for
- 5–8 years of hands-on experience as a Data or Analytics Engineer in a fast-paced, high-growth environment
- Must-have: advanced SQL, dbt and Python — you're comfortable writing complex queries, building robust models and automating workflows
- Experience with Snowflake administration and optimization
- Familiarity with ETL tools (like Stitch and Fivetran) and modeling and orchestration frameworks (like dbt Cloud and Airflow)
- Experience with GCP nice to have; Terraform experience a plus
- A track record of building and improving data systems — not just operating them
- Excited by the opportunity to join early, take ownership and have a direct impact on the company's growth
- Americas-based, remote-friendly

What success looks like in this role
Within your first 3–6 months you will:
- Deliver clean, documented dbt models that define core business metrics
- Ensure reliable pipelines from key systems into Snowflake
- Improve query performance and reduce data costs
- Establish basic governance and documentation standards
- Enable analysts to build trusted dashboards on top of consistent data
- Lay the groundwork for AI/ML by structuring data for future use cases

About the Data Team
We're building the data foundation to power the next phase of our company's growth. With a modern stack already in place (Snowflake, dbt, Sigma, Fivetran/Stitch), we're ready to take things to the next level: a governed, documented and trusted semantic layer, a world-class self-service analytics experience, and a data platform that scales with our AI-first future.

Disclaimer
The above job description is not an exhaustive list of activities, responsibilities or requirements. Duties and responsibilities may change at any time, with or without notice.

Compensation & Benefits
We believe in transparency. Starting salary for this role is $170,000 (or equivalent in local currency), depending on experience and subject to market rate adjustment. We know our people are what make us great, and we're committed to taking great care of them. Our inclusive benefits package supports your well-being and growth, including 100% coverage of medical, dental, vision, mental health and supplemental insurance premiums for you and your family. We also offer 16 weeks paid parental leave, unlimited PTO, stipends for remote work and wellness, a professional development budget and more. See full benefits here →

Our Process
No gotchas, no trick questions - just a clear, human process designed to help both of us make an informed decision.
1. 30-minute call with Maria (Recruiter)
2. 45-minute call with Hiring Manager
3. 30-minute calls with 2 potential team members
4. Take-Home Assignment + 45-minute review call

All final candidates will be asked to complete a background check and employment verifications as part of our pre-employment process. Customer.io recognizes the stifling impact of systemic injustice on diverse communities. We commit to using our influence to increase inclusion and equity within the tech industry. We strive to build an inclusive team culture, implement bias-free hiring practices and develop community partnerships to expand our global impact. Zoom is the only video conference platform that we use; virtual interviews will be conducted using the video capability (i.e. not via the chat), and offers will be extended in writing on official Customer.io letterhead. Please be vigilant in all of your job search activity, and if you have any questions, please contact jobs@customer.io. Join us! Check out our careers page for more information about why you should come work with us! We believe in empathy, transparency, responsibility and, yes, a little awkwardness. If you're excited by what you read — apply now.
4 hour(s) ago
