Remote Grafana Jobs

2880 remote jobs

Job Title / Location / Description / Posted
Data Systems Analyst 3
UC San Diego
San Diego, CA
Remote

Payroll Title: DATA SYS ANL 3
Department: San Diego Supercomputer Center
Hiring Pay Scale: $88,000 - $106,400 / Year
Worksite: Hybrid
Appointment Type: Contract
Appointment Percent: 100%
Union: Uncovered
Total Openings: 1
Work Schedule: Days, 8 hrs/day, Mon-Fri

#136354 Data Systems Analyst 3
Extended Deadline: Thu 9/11/2025

UC San Diego values equity, diversity, and inclusion. If you are interested in being part of our team, possess the needed licensure and certifications, and feel that you have most of the qualifications and/or transferable skills for a job opening, we strongly encourage you to apply. The job posting will remain open until a suitable candidate has been selected. This is a 100% contract appointment for 1 year, with the possibility of extension or conversion to career status.

DESCRIPTION

DEPARTMENT OVERVIEW: The mission of the San Diego Supercomputer Center is to translate innovation into practice. SDSC adopts and partners on innovations in industry and academia in the areas of software, hardware, computational and data sciences, and related areas, and translates them into cyberinfrastructure that solves practical problems across any and all scientific domains and societal endeavors. Cyberinfrastructure refers to an accessible, integrated network of high-performance computing, data, and networking resources and expertise focused on accelerating scientific inquiry and discovery. With more than 250 employees and $30-50M of revenue a year, SDSC is a global leader in the design, development, and operations of cyberinfrastructure. SDSC supports hundreds of multidisciplinary programs spanning a wide variety of domains, from earth sciences and biology to astrophysics, bioinformatics, and health IT. SDSC presently operates multiple large HPC systems, ranging from a 120k x86 CPU core general-purpose system to a system explicitly designed for artificial intelligence and machine learning, and a nationally distributed system open for all of academia to integrate with. SDSC offers research data services across the entire vertical stack, from universally scalable storage to consulting services on FAIR, Big Data, and AI. SDSC offers a rich set of cloud services, both on-premise, in the commercial cloud, and as hybrid services across both. SDSC has three geographic scopes: a national scope, supporting cyberinfrastructure for the entire US research and education community; a California scope, with a special focus on convergence research that addresses the three dominant threats to CA (drought, fire, earthquakes); and a campus scope, focusing on advancing the global impact of SDSC by advancing the research objectives of the UC San Diego faculty, researchers, and students. SDSC impacts researchers at scales from thousands to millions. SDSC annually trains thousands of researchers in cyberinfrastructure tools and software, and supports thousands of individual researchers via Unix accounts on its large HPC systems. SDSC was a leader in developing the Science Gateway concept and continues to be a global leader in its evolution. SDSC operates multiple major such gateways, with user communities ranging from the tens of thousands to the millions. SDSC's educational programs include online courses that have been attended by more than a million students. SDSC is committed to democratizing access to cyberinfrastructure across all of its geographic scopes. SDSC strives towards a culture that supports our employees to be their best, achieve their goals, and enjoy their lives, both professionally and personally.
The Center for Applied Internet Data Analysis (CAIDA) is an independent analysis and research group based at the University of California's San Diego Supercomputer Center. CAIDA investigates both practical and theoretical aspects of the Internet, with particular focus on topics that are macroscopic in nature and provide enhanced insight into the function of Internet infrastructure worldwide; improve the integrity of the field of Internet science, as well as the integrity of operational Internet measurement and management; and inform science, technology, and communications public policies.

POSITION OVERVIEW: Applies skills as a seasoned, experienced data/information management professional with a full understanding of industry practices and campus/medical center/OP and department methodologies, policies, and procedures, to resolve complex and wide-ranging issues where analyses of situations or data require a review of a variety of factors. Demonstrates competency in selecting methods and techniques to obtain solutions. Acts as an experienced professional for Internet measurements, data collection, and administration in support of the Center for Applied Internet Data Analysis (CAIDA) lab at the UC San Diego Supercomputer Center. The duties of this position include the use of specialized tools for Internet measurement, data collection, documentation, analysis, curation, and dissemination, and software development related to these activities. The position undertakes the capture, curation, and sharing of four types of security-relevant Internet measurement data (topology, routing, unsolicited traffic, DNS); management of meta-data (e.g. geolocation and other annotations); and administering execution of new, vetted Internet measurement experiments by researchers. Data collection and curation duties include implementing, testing, running, and refining data collection software based on environment and research needs; managing disk space and offloading collected data to distributed/cloud storage and to long-term storage facilities; and curating and indexing data, including using various statistical disclosure controls for privacy protection. Will assist CAIDA's primary systems administrator with duties such as the configuration and remote maintenance of measurement systems, with particular attention to maintaining infrastructure and data security, and supporting a virtualized cloud research computing environment. Works with the system administrator to maintain 100GB passive monitoring platforms that capture two-way passive trace data. Maintains passive trace capturing, anonymization, post-processing, curation, and sharing protocols. Works to maintain the two-way and Internet background radiation (unsolicited traffic) passive traces time-series database. Supports users running vetted measurement experiments on CAIDA infrastructure, including but not limited to execution of ITDK, Ark, and BGPStream. This position interacts with numerous external collaborators, including remote monitor site contacts, researchers with whom we share our data, and collaborators with whom we share measurement infrastructure. This position also supports research programmers, visiting researchers, graduate students, and postdoctoral fellows with whom the group collaborates on publications and research. For more information, please visit: https://www.sdsc.edu/

QUALIFICATIONS

Bachelor's degree in related area and/or equivalent experience/training. Familiarity with data model patterns in one or more common business or academic domains.
Ability to read and interpret scientific papers to understand data use cases and advise end users on relevance to their own experiments. Ability to represent relevant information in abstract models. Demonstrated ability to design and generate reports using standard graphing (e.g. Grafana, Python visualization libraries) and reporting tools, spreadsheets, pivot tables, and statistical tools. Demonstrated experience with UNIX operating systems and shell scripting, particularly Ubuntu. Demonstrated experience in using scripting languages (Perl, Python, Golang) to process and extract data. Familiarity with JavaScript or JavaScript libraries for web authoring. Thorough knowledge of data management systems, practices, and standards. Demonstrated expertise in applying advanced computer technology as a problem-solving tool in data modeling, analysis, storage, management, and other administrative functions and solutions.

SPECIAL CONDITIONS

Job offer is contingent upon satisfactory clearance based on background check results.

Pay Transparency Act. Annual Full Pay Range: $88,000 - $161,600 (will be prorated if the appointment percentage is less than 100%). Hourly Equivalent: $42.15 - $77.39. Factors in determining the appropriate compensation for a role include experience, skills, knowledge, abilities, education, licensure and certifications, and other business and organizational needs. The Hiring Pay Scale referenced in the job posting is the budgeted salary or hourly range that the University reasonably expects to pay for this position. The Annual Full Pay Range may be broader than what the University anticipates to pay for this position, based on internal equity, budget, and collective bargaining agreements (when applicable). If employed by the University of California, you will be required to comply with our Policy on Vaccination Programs, which may be amended or revised from time to time. Federal, state, or local public health directives may impose additional requirements. To foster the best possible working and learning environment, UC San Diego strives to cultivate a rich and diverse environment, inclusive and supportive of all students, faculty, staff, and visitors. For more information, please visit UC San Diego Principles of Community. The University of California is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, age, protected veteran status, or other protected status under state or federal law. For the University of California's Anti-Discrimination Policy, please visit: https://policy.ucop.edu/doc/1001004/Anti-Discrimination UC San Diego is a smoke and tobacco free environment. Please visit smokefree.ucsd.edu for more information.
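The reporting qualification above pairs Grafana with Python visualization libraries. Purely as a hedged illustration of the latter (not part of the posting), a minimal pandas/matplotlib report might look as follows; the CSV name and columns are assumptions:

```python
# Minimal report sketch: daily median round-trip time from a hypothetical
# CSV of measurements with 'timestamp' and 'rtt_ms' columns.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("rtt_measurements.csv", parse_dates=["timestamp"])  # assumed file
daily = df.set_index("timestamp")["rtt_ms"].resample("1D").median()

ax = daily.plot(title="Daily median RTT")
ax.set_xlabel("date")
ax.set_ylabel("RTT (ms)")
plt.tight_layout()
plt.savefig("rtt_report.png")
```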
48 minutes ago
Database Administrator
LetsGetChecked
Miami, FL
LetsGetChecked is a global healthcare solutions company that provides the tools to manage health from home through health testing, virtual care, genetic sequencing, and medication delivery for a wide range of health and wellness conditions. LetsGetChecked's end-to-end model includes manufacturing, logistics, lab analysis, physician support, and prescription fulfillment. Founded in 2015 and co-headquartered in Dublin and Atlanta, LetsGetChecked empowers people to take control of their health and live longer, happier lives.

We're looking for a skilled Database Administrator whose primary focus will be on our SQL Server and PostgreSQL environments in AWS. You'll also gain significant exposure to our Amazon Redshift data warehouse clusters and other cutting-edge data technologies. If you're passionate about data, eager to learn, and want to advance your career in a dynamic environment, this role is for you.

### Responsibilities

- Administer & Optimize: Take ownership of the tuning, optimization, and administration of our core database systems, including MSSQL, PostgreSQL (RDS/Aurora), and Redshift.
- Performance Monitoring: Proactively monitor the performance, resource utilization, and query throughput of our database systems using tools like DataDog, Grafana, CloudWatch, and Splunk.
- Code & Troubleshoot: Identify and resolve performance bottlenecks in T-SQL and PL/pgSQL, including stored procedures, and work directly with our engineering teams to optimize code (see the sketch after this section).
- Automate & Script: Develop and maintain automation scripts using Python or PowerShell for AWS Lambda functions and Octopus Deploy pipelines.
- Database Releases: Manage database releases across all development, staging, and production environments, ensuring our high standards are consistently met.
- Maintain & Document: Keep our database documentation current and contribute to our knowledge base in Confluence.
- Support & Collaborate: Work efficiently through the JIRA ticket queue to handle database-related requests and collaborate with our talented team of engineers.

### What we are looking for…

- A BS in Computer Science or a related field, or equivalent real-world experience.
- 3+ years of professional experience administering production databases, with a strong focus on SQL Server and PostgreSQL in a cloud environment.
- Proven hands-on experience with AWS database services (RDS, Aurora, Redshift).
- Strong scripting skills in both T-SQL and PL/pgSQL.
- Solid experience with database automation using Python and/or PowerShell.
- A deep understanding of best practices for managing highly available 24/7 database environments.

### Preferred Experience

- Experience with other cloud platforms, such as Microsoft Azure and/or GCP.
- Familiarity with NoSQL databases like MongoDB Atlas, or other relational databases like MySQL.
- Experience with large-scale cloud migration projects.
- Active database certifications (e.g. AWS Certified Database – Specialty, PostgreSQL Essentials/Advanced Certification, Azure Database Administrator Associate).
- Experience using troubleshooting tools like wait statistics, extended events, or system views.
- Familiarity with CI/CD automation tools like Jenkins, GitHub Actions, or Octopus Deploy.
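As a sketch of the "identify and resolve performance bottlenecks" duty above (my illustration, not LetsGetChecked's tooling), this queries PostgreSQL's pg_stat_statements view for the slowest statements; the DSN is a placeholder, and the column names assume PostgreSQL 13 or newer:

```python
# List the top 5 statements by mean execution time from pg_stat_statements.
import psycopg2

conn = psycopg2.connect("postgresql://monitor:secret@db.example.com:5432/appdb")  # placeholder DSN
with conn.cursor() as cur:
    cur.execute("""
        SELECT left(query, 60), calls, mean_exec_time
        FROM pg_stat_statements
        ORDER BY mean_exec_time DESC
        LIMIT 5
    """)
    for query, calls, mean_ms in cur.fetchall():
        print(f"{mean_ms:10.2f} ms avg  {calls:8d} calls  {query}")
conn.close()
```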
Benefits: Alongside a salary of $135,000 - $150,000 (depending on experience), we offer a range of benefits, including:

- Health, dental & vision insurance
- 401k matching contribution
- Employee Assistance Programme
- Annual compensation reviews
- Flexible PTO policy and 3 paid volunteer days per year
- Free monthly LetsGetChecked tests, as we are not only focused on the well-being of our patients but also the well-being of our teams
- A referral bonus programme to reward you for helping us hire the best talent
- Internal opportunities and Careers Clinics to help you progress your career
- Maternity, Paternity, Parental, and Wedding leave

Why LetsGetChecked? At LetsGetChecked, we are revolutionizing healthcare by making it more accessible, convenient, and personalized. Our mission is to empower individuals with the knowledge and tools they need to manage their health proactively, so they can live longer, happier lives. By joining our team, you will be part of a dynamic and innovative company that is dedicated to improving lives through cutting-edge technology and compassionate care. We value our employees and invest in their growth, offering opportunities for professional development and career advancement. Together, we can make a meaningful impact on the future of healthcare and help people take control of their health journey. Join us in our commitment to transforming healthcare for the better.

Our Commitment to Diversity, Equity, and Inclusion: At LetsGetChecked, we are committed to fostering an inclusive environment that celebrates diversity in all its forms. We believe that diversity of thought, background, and experience strengthens our teams and drives innovation. We are an equal-opportunity employer and do not discriminate on the basis of race, ethnicity, religion, color, place of birth, sex, gender identity or expression, sexual orientation, age, marital status, military service status, or disability status. Our goal is to ensure that everyone feels valued and empowered to thrive. To learn more about LetsGetChecked and our mission to help people live longer, healthier lives, please visit https://www.letsgetchecked.com/careers/
52 minutes ago
Cloud Operations Engineer
Piper Companies
Remote
Piper Companies is seeking a Cloud Operations Engineer to join a Cybersecurity Services team in a remote capacity. This is a specialized, client-facing engineering role centered on designing and securing cloud architectures and automating solutions across AWS, Azure, and GCP platforms.

Responsibilities of the Cloud Operations Engineer include:
- Collaborate with a dynamic engineering team to deliver innovative cloud and security solutions.
- Act as a cloud subject matter expert (SME) for clients, guiding architecture design and deployment across AWS, Azure, and GCP using automation tools.
- Partner with Cloud Service Providers and enterprise clients to support security goals and drive modernization initiatives.
- Design and implement security tools and platforms such as Tenable, Trend Micro, AquaSec, Microsoft Defender, and Burp Suite.
- Develop cloud and security strategies, including architecture roadmaps, maturity assessments, and planning activities.
- Build secure, compliant cloud environments using Infrastructure-as-Code, and contribute to documentation and security assessments (a sketch follows this listing).

Qualifications for the Cloud Operations Engineer include:
- 3+ years of experience in systems engineering, cloud architecture, and automation across AWS, Azure, or GCP.
- Proven expertise in Infrastructure-as-Code using tools like Terraform and Ansible.
- Strong knowledge of cloud platforms and services, with experience as a Cloud Architect, DevOps Engineer, or Security Engineer.
- Background in Agile environments, collaborating within technical teams of 3+ members.
- Excellent communication, documentation, and problem-solving skills, including technical diagrams and written reports.
- Bachelor's degree in Information Technology or a related field, or an equivalent combination of education and experience.

Additional Qualifications:
- Experience supporting clients within a professional services or consulting environment.
- Proven ability to manage projects and track deliverables for both individual and team efforts.
- Hands-on experience automating workflows using GitLab/GitHub with Terraform and Ansible.
- Expertise in modern application architectures such as serverless and microservices.
- Familiarity with security standards and frameworks like CIS Benchmarks, DISA STIG, FedRAMP, FISMA, HIPAA, HITRUST, and PCI.
- Certifications such as CISSP, CISM, or CISA, and experience implementing encryption technologies (SSL, PKI).

Compensation for the Cloud Operations Engineer includes:
- Salary Range: $115,000 – $120,000
- Comprehensive Benefits: Medical, Dental, Vision, 401K, PTO, Sick Leave if required by law, and Holidays

This job opens for applications on 08/25/2025. Applications will be accepted for at least 30 days from the posting date.
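One hedged example of the "security assessments" automation this role describes (an illustration, not Piper's methodology): a short boto3 check that flags AWS security groups allowing inbound traffic from anywhere; the region is an assumption.

```python
# Flag security group rules open to 0.0.0.0/0.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for perm in sg.get("IpPermissions", []):
        if any(r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])):
            print(f"OPEN: {sg['GroupId']} ({sg['GroupName']}) "
                  f"ports {perm.get('FromPort', 'all')}-{perm.get('ToPort', 'all')}")
```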
52 minutes ago
DevOps Engineer
Tekserv
Remote
Job Location: Remote (Columbus, OH). Duration: 6 months (with possible extension). Work Authorization: OPT, H1B, H4 EAD, GC EAD, GC, USC.

The Senior DevOps Engineer is an integral part of the BBW Omni Channel DevOps team. We want someone who is well experienced in Azure APIM - creating API specs, policies, logs, gateway management, etc.

Job Description: As a Senior DevOps Engineer, you will partner with the client's infrastructure, development, and security teams to define and implement CI/CD and Azure cloud DevOps requirements supporting various seamless commerce initiatives. This role will also support building automation and monitoring needs for our talented developers, using a combination of technical proficiency, organizational aptitude, and interpersonal skills. You must have a passion for enhancing internal technical team experiences, be confident in proposing pragmatic solutions to complex problems, and be eager to learn from and educate others.

Responsibilities:
- Design, manage, and maintain tools on internally hosted infrastructure that automate operational processes to improve development delivery. Partner with Agile product teams to identify opportunities.
- Set up APM, logs, and traces on Datadog for various BBW selling channels like Site, App, and Stores (a sketch follows the Technology Snapshot below).
- Set up GitLab CI/CD and integrations to Azure cloud environments, Artifactory, automation testing tools, Datadog, HashiCorp Vault, Morpheus, Harbor, etc.
- Partner with BBW IT Infrastructure to establish appropriate Morpheus automation frameworks in support of fully automating code deployments.
- Support the client's Azure APIM design requirements. Monitor Azure cloud deployments and application performance.
- As needed, administrate any tools or environments to support Digital Operations.
- Support code deployments and pipeline monitoring in all environments. Troubleshoot and resolve issues as necessary.
- Partner with the client's IT Security to understand, implement, and automate security controls, governance processes, and compliance validations.
- Work alongside Agile product teams to anticipate configuration needs and how those needs can affect the efficient release of each product.
- Assist other department engineers in creating practical demonstrations of proposed solutions and demonstrating them to other team members.
- Stay up to date on relevant technologies, plug into user groups, and understand trends and opportunities to ensure we are using the best possible techniques and tools.
- Work with developers and other DevOps engineers to test system integrity.
- Ensure website stability and operational integrity via participation in team-based on-call rotations and defect triage.

Minimum Qualifications:
- 7+ years of hands-on experience working in an infrastructure or DevOps role that directly supports development community activities.
- Working experience with public cloud platforms, preferably Azure - Azure Kubernetes Service (AKS), API Management, Azure Functions, Ingress, Container Registry, Azure Load Balancer.
- Software process automation with popular scripting languages (Python, Groovy, shell, and/or PowerShell).
- Hands-on experience with GitLab, Jenkins, Bitbucket, Grunt.js, GitHub.
- Experience with CI/CD pipelines using GitLab, Jenkins, Git, Artifactory, Ansible, Go, and more.
- Experience configuring and managing databases - e.g. PostgreSQL, MySQL, and/or Mongo, CouchDB.
- Hands-on experience with containerization technologies - e.g. Docker, Kubernetes, OpenShift.
- Familiarity with build tooling used for react.js, node.js, and Salesforce Commerce Cloud applications.
- Experience with Morpheus Cloud Automation, Terraform, and writing Infrastructure as Code (IaC).

Technology Snapshot - Required:
+ GitLab
+ Datadog – APM, log tracing, etc.
+ Azure – hands-on
+ GitLab CI/CD
+ Shell scripting
+ Containerization and orchestration: Docker/Kubernetes, etc.
+ Python scripting
+ Azure APIM
+ GraphQL Gateway
+ Azure Functions
+ Azure Containers

Preferred Qualifications:
- Bachelor's degree in computer science or equivalent software engineering experience.
- 3rd-party digital e-commerce platform experience such as IBM Sterling OMS, Radial OMS, Salesforce Commerce Cloud (SFCC).
- Knowledge of best practices and IT operations in an always-up, always-available, mission-critical service.
- Experience with Agile Development, SCRUM, or Extreme Programming methodologies.
- Application performance analysis and monitoring.
- Related software development tools such as Jira, Confluence, Datadog, Artifactory, Npm, Grafana, Prometheus, SonarQube.
- Strong technical project management and documentation skills.

You can send your resume to vadkar@tekservcorp.com
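For the Datadog APM item referenced above, a minimal ddtrace sketch (my illustration; the service and resource names are made up, and it assumes a Datadog agent is reachable):

```python
# Emit APM spans with ddtrace: one span per wrapped call, plus a manual span.
from ddtrace import tracer

@tracer.wrap(service="bbw-site", resource="checkout.price")  # hypothetical names
def price_cart(cart_id: str) -> float:
    return 42.0  # stand-in for business logic

with tracer.trace("checkout.submit", service="bbw-site"):
    price_cart("cart-123")
```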
53 minutes ago
Database Administrator
Creed Infotech
Remote
Job Title: Open-Source Database Administrator (DBA), Remote. Total Rate: $85/hr on W2.

About the Role: We are seeking an experienced Open-Source Database Administrator (DBA) with strong expertise in Linux environments and deep knowledge of High Availability (HA) solutions around MariaDB and Postgres. The ideal candidate will be passionate about open-source technologies, performance tuning, and database reliability at scale. Experience with TiDB is highly desirable, though not mandatory.

Responsibilities:
- Design, implement, and support HA/DR solutions for MariaDB and Postgres environments.
- Manage database clusters, replication, backup/restore, failover, and monitoring.
- Optimize query performance, troubleshoot bottlenecks, and ensure database uptime.
- Work closely with engineering teams to support application integration and scalability.
- Implement automation for provisioning, patching, and upgrades in Linux-based environments.
- Maintain documentation, best practices, and security standards for database operations.

Requirements:
- 5+ years of experience as a DBA with a focus on MariaDB/MySQL and Postgres.
- Strong hands-on skills in Linux system administration.
- Proven experience implementing HA/DR solutions (Galera Cluster, Patroni, Pacemaker/Corosync, etc.).
- Familiarity with TiDB or other distributed SQL databases is a strong plus.
- Experience with monitoring tools (Prometheus, Grafana, Percona Monitoring, etc.); a minimal exporter sketch follows below.
- Scripting knowledge (Bash, Python, or Ansible) for automation.
- Solid understanding of database security, backup/recovery, and performance tuning.

Job Type: Contract. Pay: $80.00 - $85.00 per hour. Expected hours: 40 per week. Work Location: Remote.
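To make the monitoring requirement concrete, here is a hedged sketch (not from the posting) of a tiny Prometheus exporter publishing Postgres replica lag for a Grafana dashboard to graph. The DSN and port are placeholders, and pg_last_xact_replay_timestamp() only returns a value on a standby:

```python
# Expose replica lag (seconds) as a Prometheus gauge on :9187/metrics.
import time
import psycopg2
from prometheus_client import Gauge, start_http_server

LAG = Gauge("pg_replication_lag_seconds", "Seconds behind primary on this replica")

conn = psycopg2.connect("postgresql://monitor:secret@replica.example.com:5432/postgres")  # placeholder
conn.autocommit = True

start_http_server(9187)  # Prometheus scrapes this endpoint
while True:
    with conn.cursor() as cur:
        cur.execute("SELECT COALESCE(EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp()), 0)")
        LAG.set(float(cur.fetchone()[0]))
    time.sleep(15)
```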
54 minutes ago
AI/NLP Engineer
LeoTech
Irvine, CA
At LeoTech, we are passionate about building software that solves real-world problems in the public safety sector. Our software has been used to help the fight against continuing criminal enterprises and drug trafficking organizations, identify financial fraud, disrupt sex and human trafficking rings, and focus on mental health matters, to name a few. As an AI/NLP Engineer on our Data Science team, you will be at the forefront of leveraging Large Language Models (LLMs) and cutting-edge AI techniques to create transformative solutions for public safety and intelligence workflows. You will apply your expertise in LLMs, Retrieval-Augmented Generation (RAG), semantic search, Agentic AI, GraphRAG, and other advanced AI solutions to develop, enhance, and deploy robust features that enable real-time decision-making for our end users. You will work closely with product, engineering, and data science teams to translate real-world problems into scalable, production-grade solutions. This is an individual contributor (IC) role that emphasizes technical depth, experimentation, and hands-on engineering. You will participate in all phases of the AI solution lifecycle, from architecture and design through prototyping, implementation, evaluation, productionization, and continuous improvement.

### Core Responsibilities

- Design, build, and optimize AI-powered solutions using LLMs, RAG pipelines, semantic search, GraphRAG, and Agentic AI architectures (a generic RAG skeleton is sketched after this posting).
- Implement and experiment with the latest advancements in large-scale language modeling, including prompt engineering, model fine-tuning, evaluation, and monitoring.
- Collaborate with product, backend, and data engineering teams to define requirements, break down complex problems, and deliver high-impact features aligned with business objectives.
- Build robust data ingestion and retrieval pipelines that power real-time and batch AI applications, using open-source and proprietary tools.
- Integrate external data sources (e.g. knowledge graphs, internal databases, third-party APIs) to enhance the context-awareness and capabilities of LLM-based workflows.
- Evaluate and implement best practices for prompt design, model alignment, safety, and guardrails for responsible AI deployment.
- Stay on top of emerging AI research and contribute to internal knowledge-sharing, tech talks, and proof-of-concept projects.
- Author clean, well-documented, and testable code; participate in peer code reviews and engineering design discussions.
- Proactively identify bottlenecks and propose solutions to improve system scalability, efficiency, and reliability.

### What We Value

- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- 5+ years of hands-on experience in applied AI, NLP, or ML engineering (with at least 2 years working directly with LLMs, RAG, semantic search, and Agentic AI).
- Deep familiarity with LLMs (e.g. OpenAI, Claude, Gemini), prompt engineering, and responsible deployment in production settings.
- Experience designing, building, and optimizing RAG pipelines, semantic search, vector databases (e.g. ElasticSearch, Pinecone), and Agentic or multi-agent AI workflows in large-scale production setups. Exposure to MCP and the A2A protocol is a plus.
- Exposure to GraphRAG or graph-based knowledge retrieval techniques is a strong plus.
- Strong proficiency with modern ML frameworks and libraries (e.g. LangChain, LlamaIndex, PyTorch, HuggingFace Transformers).
- Ability to design APIs and scalable backend services, with hands-on experience in Python.
- Experience building, deploying, and monitoring AI/ML workloads in cloud environments (AWS, Azure) using services like AWS SageMaker, AWS Bedrock, AzureAI, etc. Experience with tools to load balance different LLM providers is a plus.
- Familiarity with MLOps practices: CI/CD for AI, model monitoring, data versioning, and continuous integration.
- Demonstrated ability to work with large, complex datasets, perform data cleaning and feature engineering, and develop scalable data pipelines.
- Excellent problem-solving, collaboration, and communication skills; able to work effectively across remote and distributed teams.
- Proven record of shipping robust, high-impact AI solutions, ideally in fast-paced or regulated environments.

### Technologies We Use

- Cloud & AI Platforms: AWS (Bedrock, SageMaker, Lambda), AzureAI, Pinecone, ElasticCloud, Imply Polaris.
- LLMs & NLP: HuggingFace, OpenAI API, LangChain, LlamaIndex, Cohere, Anthropic.
- Backend: Python (primary), Elixir (other teams).
- Data Infrastructure: ElasticSearch, Pinecone, Weaviate, Apache Kafka, Airflow.
- Frontend: TypeScript, React.
- DevOps & Automation: Terraform, EKS, GitHub Actions, CodePipeline, ArgoCD.
- Monitoring & Metrics: Grafana (metrics, dashboards, alerting), Langfuse (Agentic AI observability, prompt management).
- Testing: Playwright for end-to-end test automation.
- Other Tools: A mix of open-source and proprietary frameworks tailored to complex, real-world problems.

### What You Can Expect

- Enjoy great team camaraderie, whether at our Irvine office or working remotely.
- Thrive on the fast pace and challenging problems to solve.
- Modern technologies and tools.
- Continuous learning environment.
- Opportunity to communicate and work with people of all technical levels in a team environment.
- Grow as you are given feedback and incorporate it into your work.
- Be part of a self-managing team that enjoys support and direction when required.
- 3 weeks of paid vacation – out the gate!!
- Competitive salary.
- Generous medical, dental, and vision plans.
- Sick and paid holidays are offered.
- Work with talented and collaborative co-workers.

LeoTech is an equal opportunity employer and does not discriminate on the basis of any legally protected status.
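The RAG skeleton referenced in the Core Responsibilities above, reduced to its data flow. This is a generic illustration, not LeoTech's implementation; the embed, search, and llm callables are assumed stand-ins for a real embedding model, vector index, and LLM client:

```python
# Generic retrieval-augmented generation: retrieve top-k passages, then
# ask the model to answer strictly from that retrieved context.
from typing import Callable, List

def rag_answer(question: str,
               embed: Callable[[str], List[float]],
               search: Callable[[List[float], int], List[str]],
               llm: Callable[[str], str],
               k: int = 4) -> str:
    context = "\n\n".join(search(embed(question), k))  # retrieval step
    prompt = ("Answer using only the context below.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
    return llm(prompt)  # generation step, grounded in retrieved passages
```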
56 minutes ago
Java Full Stack Developer / Angular v17, v18, or v19 - DC Metro Area
General Dynamics Information Technology
Falls Church, VA
Type of Requisition: Regular
Clearance Level Must Currently Possess: None
Clearance Level Must Be Able to Obtain: Confidential
Public Trust/Other Required: Other
Job Family: Software Engineering
Job Qualifications - Skills: DevOps, Software Development, Software Projects
Certifications: None
Experience: 5+ years of related experience
US Citizenship Required: No

Job Description: SOFTWARE DEVELOPER SENIOR

Transform technology into opportunity as a Software Developer Senior at GDIT. Shape what's next for mission-critical government projects while shaping what's next for your engineering career.

MEANINGFUL WORK AND PERSONAL IMPACT: As a Software Developer Senior, the work you'll do at GDIT will be impactful to the mission. You will play a crucial role as a software developer responsible for meeting customer commitments, collaborating with your team members, and delivering code on schedule.

WHAT YOU'LL NEED TO SUCCEED: Bring your engineering expertise, along with a drive for innovation, to GDIT. The Software Developer Senior must have:

Education: Related Bachelor's degree
Experience: 3+ years of related experience

Required Technical Skills:
- 5+ years of Java/J2EE (Java 19 or higher preferred) work experience
- Angular 17/18/19 (latest version is preferred, but 17 works too)
- 2+ years of Oracle 19 (SQL and PL/SQL) experience – complex SQL queries
- 2+ years of AWS experience with ECS (building Docker images and deploying them in a cluster) and ECR (Docker)
- Terraform and GitHub pipeline setups
- OpenTelemetry, Grafana, Prometheus, DynamoDB (a minimal metrics-wiring sketch follows this posting)
- Building and coding applications and/or models using the Spring/Spring Boot framework
- Node/NPM, Gradle
- Java REST APIs / microservices, Spring Boot
- Ability to communicate and troubleshoot with a team that works 100% remotely

Preferred Skills:
- GitLab Runner, Docker
- JUnit, SONAR, CloudBees, Jersey
- REST web service security

Clearance level: Current government public trust is preferred.
US Citizenship: Required - US citizenship and/or US green card holder with 2 years residency.
Role Requirements: Working in a DevOps environment with a 100% remote Agile team.

GDIT IS YOUR PLACE: At GDIT, the mission is our purpose, and our people are at the center of everything we do.
- Growth: AI-powered career tool that identifies career steps and learning opportunities
- Support: An internal mobility team focused on helping you achieve your career goals
- Rewards: Comprehensive benefits and wellness packages, 401K with company match, and competitive pay and paid time off
- Flexibility: Full-flex work week to own your priorities at work and at home
- Community: Award-winning culture of innovation and a military-friendly workplace

OWN YOUR OPPORTUNITY: Explore a career in software development at GDIT and you'll find endless opportunities to grow alongside colleagues who share your dedication to advancing innovation. The likely salary range for this position is $106,250 - $143,750. This is not, however, a guarantee of compensation or salary.
Rather, salary will be set based on experience, geographic location, and possibly contractual requirements, and could fall outside of this range.

Scheduled Weekly Hours: 40
Travel Required: None
Telecommuting Options: Remote
Work Location: USA VA Falls Church
Additional Work Locations:

Total Rewards at GDIT: Our benefits package for all US-based employees includes a variety of medical plan options, some with Health Savings Accounts, dental plan options, a vision plan, and a 401(k) plan offering the ability to contribute both pre and post-tax dollars up to the IRS annual limits and receive a company match. To encourage work/life balance, GDIT offers employees full-flex work weeks where possible, and a variety of paid time off plans, including vacation, sick and personal time, holidays, and paid parental, military, bereavement, and jury duty leave. To ensure our employees are able to protect their income, other offerings such as short and long-term disability benefits, life, accidental death and dismemberment, personal accident, critical illness, and business travel and accident insurance are provided or available. We regularly review our Total Rewards package to ensure our offerings are competitive and reflect what our employees have told us they value most. We are GDIT. A global technology and professional services company that delivers consulting, technology, and mission services to every major agency across the U.S. government, defense, and intelligence community. Our 30,000 experts extract the power of technology to create immediate value and deliver solutions at the edge of innovation. We operate across 50 countries worldwide, offering leading capabilities in digital modernization, AI/ML, Cloud, Cyber, and application development. Together with our clients, we strive to create a safer, smarter world by harnessing the power of deep expertise and advanced technology. Join our Talent Community to stay up to date on our career opportunities and events at gdit.com/tc. Equal Opportunity Employer / Individuals with Disabilities / Protected Veterans
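The GDIT stack above wires OpenTelemetry into Prometheus and Grafana from Java. Purely as a hedged sketch of the same wiring, here it is in Python using the opentelemetry Prometheus exporter; the instrument names are made up:

```python
# Publish an OpenTelemetry counter on a /metrics endpoint Prometheus can
# scrape and Grafana can chart.
from prometheus_client import start_http_server
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.exporter.prometheus import PrometheusMetricReader

start_http_server(8000)  # serves /metrics
metrics.set_meter_provider(MeterProvider(metric_readers=[PrometheusMetricReader()]))

meter = metrics.get_meter("demo")
requests_total = meter.create_counter("app_requests", description="Handled requests")
requests_total.add(1, {"route": "/health"})  # hypothetical label
```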
57 minutes ago
Senior DevOps Engineer
CommandLink
Remote
About CommandLink: CommandLink is a global SaaS platform providing network, voice services, and IT security solutions, helping corporations consolidate their core infrastructure into a single vendor and layering on a proprietary single-pane-of-glass platform. CommandLink has revolutionized the IT industry by tackling the problems our competitors create. In recognition of our unprecedented innovation and dedication, CommandLink was recognized as the SD-WAN Product of the Year, ITSM Visionary Spotlight, UCaaS Product of the Year, NaaS Product of the Year, Supplier of the Year, and the AT&T Strategic Growth Partner. CommandLink has built the only IT platform for scale that solves ISP vendor sprawl and IT headaches. We make it easy for our customers to get more done, maximize uptime, and improve the bottom line. Learn more about us here!

This is a remote position open to candidates residing in the following states: Alabama, Arizona, Arkansas, Colorado, Florida, Georgia, Indiana, Kansas, Kentucky, Louisiana, Maryland, Michigan, Mississippi, Missouri, Nevada, New Hampshire, North Carolina, Ohio, Oklahoma, South Carolina, Tennessee, Texas, Utah, Virginia, Wisconsin.

About your new role: We're looking for a Senior DevOps Engineer to be the founding engineer building out the DevOps discipline across our software engineering organization. This is a rare opportunity to shape the future of how we design, deploy, and scale our platform from the ground up. As the first dedicated DevOps hire, you'll define standards, build systems, and set the culture for automation, scalability, and reliability across our global SaaS platform. You'll work directly with engineering leadership and cross-functional teams to architect infrastructure, improve CI/CD pipelines, and establish world-class practices in observability, security, and performance. If you're excited by the idea of building a DevOps function from scratch in a high-growth environment, and of leaving a lasting mark on a platform that powers thousands of customers, you'll thrive here.

Key Responsibilities:
- Build & Scale Infrastructure: Architect and manage highly available, secure, and scalable systems across cloud environments (Azure, AWS, or GCP).
- CI/CD Ownership: Design and implement robust CI/CD pipelines that accelerate developer productivity and enable rapid, safe deployments.
- Automation & Tooling: Drive infrastructure-as-code adoption (Terraform, Ansible, etc.) and automate repetitive processes across environments.
- Monitoring & Reliability: Define observability practices (metrics, logging, tracing) and proactively improve uptime, performance, and recovery.
- Security & Compliance: Partner with security and engineering teams to enforce best practices for identity, access, secrets management, and compliance.
- Mentorship & Culture: Be the evangelist for DevOps culture, mentoring engineers, influencing architectural decisions, and embedding DevOps principles across the org.

What you'll need for success:
- 7+ years of experience in DevOps, SRE, or Infrastructure Engineering roles.
- Proven track record building and scaling cloud-native environments (Azure preferred; AWS/GCP also valuable).
- Expertise with infrastructure-as-code (Terraform, Ansible, Pulumi, or similar).
- Strong background in CI/CD tooling (GitHub Actions, Azure DevOps, Jenkins, CircleCI, etc.).
- Solid understanding of containers and orchestration (Docker, Kubernetes).
- Experience implementing observability stacks (Prometheus, Grafana, ELK, OpenTelemetry, etc.); see the Grafana API sketch after this posting.
- A "founder's mindset": you thrive in ambiguity, are highly autonomous, and enjoy creating structure and process where none exists.

Why you'll love life at CommandLink: Join us at CommandLink, where you'll have the opportunity to shape the future of business communication. We value the innovative spirit and seek individuals ready to bring their unique vision and expertise to a team that values bold ideas and strategic thinking. Are you ready to make an impact?

- Room to grow at a high-growth company
- An environment that celebrates ideas and innovation
- Your work will have a tangible impact
- Generous Medical, Dental, and Vision coverage for full-time employees
- Flexible time off
- 401k to help you save for the future
- Fun events at cool locations
- Free DoorDash lunches on Fridays
- Employee referral bonuses to encourage the addition of great new people to the team

CommandLink hires individuals in a number of geographic regions, and the pay ranges listed reflect the cost of labor across these regions. The base pay for this position, as displayed at the bottom of the job description, is a range based on our lowest geographic region up to our highest geographic region. Pay is based on location, among other factors such as skill set, experience, and qualifications held. The pay range for this role is: $120,000 - $150,000 USD per year (Remote, United States).
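Since the posting centers on observability with Grafana, one hedged example of day-to-day automation in that area (mine, not CommandLink's): creating a bare dashboard through Grafana's HTTP API via POST /api/dashboards/db. The URL and token are placeholders:

```python
# Create a minimal (empty) Grafana dashboard over the HTTP API.
import requests

GRAFANA_URL = "https://grafana.example.com"  # placeholder
TOKEN = "glsa_redacted"                      # placeholder service-account token

payload = {
    "dashboard": {"id": None, "title": "Service Overview", "panels": []},
    "overwrite": False,
}
resp = requests.post(f"{GRAFANA_URL}/api/dashboards/db",
                     json=payload,
                     headers={"Authorization": f"Bearer {TOKEN}"},
                     timeout=10)
resp.raise_for_status()
print(resp.json())  # includes the new dashboard's uid and url
```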
57 minutes ago
Sr. Full Stack Developer / AWS
The Credit Pros
Remote, Brazil
Role Summary: We are looking for a Senior Full Stack Developer with advanced AWS DevOps skills to design, develop, and optimize modern web applications and cloud infrastructure. This role requires hands-on coding across backend and frontend stacks, while also leading DevOps practices such as automation, CI/CD, observability, cost optimization, and cloud-native security. The ideal candidate is a builder and problem solver who ensures that applications are not only functional and performant, but also scalable, secure, and resilient in AWS environments.

Key Responsibilities

Full Stack Development:
- Design requirements using AI models, LLMs, and new technologies, with PR best practices and QA automation.
- Design and develop backend services using Laravel, Node.js, or Python frameworks (Django/FastAPI).
- Build modern frontend apps using React, Next.js, or Angular with TypeScript.
- Implement REST/GraphQL APIs, data pipelines, and microservices architecture.
- Optimize databases (PostgreSQL, MySQL, MongoDB) and integrate with analytics tools (Snowflake, PostHog, Rudderstack).

AWS DevOps & Cloud Infrastructure:
- Architect and deploy applications on AWS (Lambda, ECS/EKS, S3, CloudFront, RDS, DynamoDB).
- Build and maintain CI/CD pipelines (GitHub Actions, GitLab CI, AWS CodePipeline).
- Use Infrastructure as Code (Terraform, CloudFormation, CDK) for repeatable deployments.
- Manage containerization and orchestration with Docker + Kubernetes/EKS.
- Implement observability solutions (CloudWatch, Grafana, Prometheus, OpenTelemetry); a small CloudWatch sketch follows below.
- Drive cloud cost optimization with right-sizing, auto-scaling, and reserved instances.

Security & Compliance:
- Apply best practices in IAM, VPC, encryption, and secrets management.
- Ensure compliance with SOC 2, PCI DSS, and HIPAA (if applicable).
- Conduct cloud security reviews, vulnerability scans, and patch automation.

Collaboration & Leadership:
- Work closely with DevOps, QA, and development teams to ensure high-quality delivery.
- Mentor junior engineers on cloud-native design and DevOps practices.
- Partner with product and IT leadership to align solutions with business needs.

Required Skills & Qualifications

- Programming & Frameworks: Backend: Laravel, Node.js, Express.js, NestJS, or Django. Frontend: React, Next.js, Angular, TypeScript, TailwindCSS.
- Cloud & DevOps: AWS (ECS/EKS, Lambda, IAM, VPC, S3, RDS, DynamoDB, CloudFront). CI/CD automation with GitHub Actions, GitLab CI, Jenkins, or AWS CodePipeline. Infrastructure as Code: Terraform, AWS CDK, CloudFormation. Containerization: Docker, Kubernetes (EKS preferred).
- Databases & Data Tools: PostgreSQL, MySQL, MongoDB; strong SQL optimization. Data pipeline/analytics experience with Snowflake, Rudderstack, or DBT.
- Monitoring & Observability: CloudWatch, Grafana, Prometheus, ELK Stack, OpenTelemetry.
- Security & Networking: VPC design, IAM roles, KMS encryption, network ACLs, WAF. Familiarity with compliance frameworks (SOC 2, PCI DSS).
- Soft Skills: Strong problem-solving and automation-first mindset. Effective communicator with the ability to collaborate across Dev, QA, and Ops. Proactive, self-driven, and focused on continuous improvement.

Desired Certifications:
- AWS Certified Solutions Architect, or at least the Practitioner (Associate/Professional)
- AWS Certified DevOps Engineer – Professional
- Kubernetes: CKA/CKAD
- HashiCorp Certified: Terraform Associate
- Security: CompTIA Security+, AWS Security Specialty
- Development (nice-to-have): React Developer Certification, Laravel Certification

Requirements: Bring your own device. 100% remote. Productivity and security platforms to be installed on device.
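For the observability bullet above, a small hedged sketch (not from the posting): publishing a custom CloudWatch metric with boto3 that a CloudWatch- or Grafana-backed dashboard could alert on; the namespace, metric, value, and region are illustrative:

```python
# Publish one custom metric datapoint to CloudWatch.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # assumed region
cloudwatch.put_metric_data(
    Namespace="App/Checkout",  # hypothetical namespace
    MetricData=[{
        "MetricName": "QueueDepth",
        "Value": 17,
        "Unit": "Count",
        "Dimensions": [{"Name": "Env", "Value": "prod"}],
    }],
)
```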
3 hours ago
