Job Title | Location | Description | Posted |
---|---|---|---|
Solutions Architect, Migrations
Snowflake |
Colorado, United States
|
Where Data Does More. Join the Snowflake team. Snowflake Professional Services are the implementation and strategy experts for the Snowflake platform. We engage with customers at all phases of the customer journey to optimize, accelerate, and achieve business outcomes. Our teams of Solutions Architects demonstrate technical leadership every day by writing code, constructing solution architectures, and guiding customers through decision paths to achieve impactful outcomes. We advise everyone from developers through senior executives on how best to utilize Snowflake to make their business successful. Our Solutions Architects are comfortable with change and live on the cutting edge of technology. In this role you will lead and execute complex migrations from legacy systems to the Snowflake platform. You will collaborate closely with customers and system integrators to understand their requirements, design robust migration strategies, and ensure seamless transitions with minimal disruption. Additionally, you will work with a diverse range of teams both inside and outside the company. This role is 100% remote and can be based anywhere in the United States.

AS A SOLUTIONS ARCHITECT, MIGRATIONS, YOU WILL:
- Design comprehensive migration plans and solutions tailored to the customer's needs and architecture, ensuring a seamless transition to Snowflake
- Partner with our customers, acting as the technical expert for all aspects of Snowflake during implementation
- Deploy Snowflake following best practices, including ensuring knowledge transfer so that customers are properly enabled and are able to extend the capabilities of Snowflake on their own
- Maintain a deep understanding of competitive and complementary technologies and vendors, and how to position Snowflake in relation to them
- Work with System Integrator consultants at a deep technical level to successfully position and deploy Snowflake in customer environments
- Create presentations and document learnings from the project
- Develop utilities and accelerators to streamline migration and related activities
- Collaborate with Product Management, Engineering, and Marketing to continuously improve Snowflake's products and marketing

OUR IDEAL SOLUTIONS ARCHITECT, MIGRATIONS, WILL HAVE:
- Minimum 5 years of experience as a solutions architect, data architect, database administrator, or data engineer
- Experience migrating from one data platform to another and holistically addressing the unique challenges of migrating to a new platform
- Hands-on experience working with data warehouse systems including or similar to Snowflake, BigQuery, Teradata, Redshift, Netezza, SAP HANA, Greenplum, Exadata, etc.
- End-to-end experience migrating on-premises data warehouse solutions to the cloud (AWS, Azure, GCP)
- Outstanding skills presenting to both technical and executive audiences, whether impromptu on a whiteboard or using presentations and demos
- Understanding of the complete data analytics stack and workflow, from ETL to data platform design to BI and analytics tools
- Extensive hands-on expertise with SQL and SQL analytics, and scripting languages like Python, Bash, Shell, etc.
- University degree in computer science, engineering, mathematics, or related fields, or equivalent experience

Additional Preferred Qualifications:
- 3 years of technical consulting experience
- Experience with non-relational platforms and tools for large-scale data processing (e.g. Hadoop, HBase)
- Familiarity and experience with common BI and data exploration tools (e.g. Power BI, MicroStrategy, Business Objects, Tableau)
- Experience and understanding of large-scale infrastructure-as-a-service platforms (e.g. Amazon AWS, Microsoft Azure, GCP, OpenStack, etc.)
- Experience implementing ETL pipelines using custom and packaged tools, and software development experience using C/C++, Java, etc.
- SnowPro Core Certification

We accept applications for this role on an ongoing basis. Every Snowflake employee is expected to follow the company's confidentiality and security standards for handling sensitive data. Snowflake employees must abide by the company's data security plan as an essential part of their duties. It is every employee's duty to keep customer information secure and confidential. Snowflake is growing fast and we're scaling our team to help enable and accelerate our growth. We are looking for people who share our values, challenge ordinary thinking, and push the pace of innovation while building a future for themselves and Snowflake. How do you want to make your impact? For jobs located in the United States, please visit the job posting on the Snowflake Careers Site for salary and benefits information: careers.snowflake.com. The following represents the expected range of compensation for this role: the estimated base salary range for this role is $128,000 - $178,500. Additionally, this role is eligible to participate in Snowflake's bonus and equity plan. The successful candidate's starting salary will be determined based on permissible, non-discriminatory factors such as skills, experience, and geographic location. This role is also eligible for a competitive benefits package that includes: medical, dental, vision, life, and disability insurance; 401(k) retirement plan; flexible spending & health savings account; at least 12 paid holidays; paid time off; parental leave; employee assistance program; and other company benefits.
|
|
Solutions Architect, Migrations
Snowflake |
Texas, United States
|
Where Data Does More. Join the Snowflake team. Snowflake Professional Services are the implementation and strategy experts for the Snowflake platform. We engage with customers at all phases of the customer journey to optimize, accelerate, and achieve business outcomes. Our teams of Solutions Architects demonstrate technical leadership every day by writing code, constructing solution architectures, and guiding customers through decision paths to achieve impactful outcomes. We advise everyone from developers through senior executives on how best to utilize Snowflake to make their business successful. Our Solutions Architects are comfortable with change and live on the cutting edge of technology. In this role you will lead and execute complex migrations from legacy systems to the Snowflake platform. You will collaborate closely with customers and system integrators to understand their requirements, design robust migration strategies, and ensure seamless transitions with minimal disruption. Additionally, you will work with a diverse range of teams both inside and outside the company. This role is 100% remote and can be based anywhere in the United States.

AS A SOLUTIONS ARCHITECT, MIGRATIONS, YOU WILL:
- Design comprehensive migration plans and solutions tailored to the customer's needs and architecture, ensuring a seamless transition to Snowflake
- Partner with our customers, acting as the technical expert for all aspects of Snowflake during implementation
- Deploy Snowflake following best practices, including ensuring knowledge transfer so that customers are properly enabled and are able to extend the capabilities of Snowflake on their own
- Maintain a deep understanding of competitive and complementary technologies and vendors, and how to position Snowflake in relation to them
- Work with System Integrator consultants at a deep technical level to successfully position and deploy Snowflake in customer environments
- Create presentations and document learnings from the project
- Develop utilities and accelerators to streamline migration and related activities
- Collaborate with Product Management, Engineering, and Marketing to continuously improve Snowflake's products and marketing

OUR IDEAL SOLUTIONS ARCHITECT, MIGRATIONS, WILL HAVE:
- Minimum 5 years of experience as a solutions architect, data architect, database administrator, or data engineer
- Experience migrating from one data platform to another and holistically addressing the unique challenges of migrating to a new platform
- Hands-on experience working with data warehouse systems including or similar to Snowflake, BigQuery, Teradata, Redshift, Netezza, SAP HANA, Greenplum, Exadata, etc.
- End-to-end experience migrating on-premises data warehouse solutions to the cloud (AWS, Azure, GCP)
- Outstanding skills presenting to both technical and executive audiences, whether impromptu on a whiteboard or using presentations and demos
- Understanding of the complete data analytics stack and workflow, from ETL to data platform design to BI and analytics tools
- Extensive hands-on expertise with SQL and SQL analytics, and scripting languages like Python, Bash, Shell, etc.
- University degree in computer science, engineering, mathematics, or related fields, or equivalent experience

Additional Preferred Qualifications:
- 3 years of technical consulting experience
- Experience with non-relational platforms and tools for large-scale data processing (e.g. Hadoop, HBase)
- Familiarity and experience with common BI and data exploration tools (e.g. Power BI, MicroStrategy, Business Objects, Tableau)
- Experience and understanding of large-scale infrastructure-as-a-service platforms (e.g. Amazon AWS, Microsoft Azure, GCP, OpenStack, etc.)
- Experience implementing ETL pipelines using custom and packaged tools, and software development experience using C/C++, Java, etc.
- SnowPro Core Certification

We accept applications for this role on an ongoing basis. Every Snowflake employee is expected to follow the company's confidentiality and security standards for handling sensitive data. Snowflake employees must abide by the company's data security plan as an essential part of their duties. It is every employee's duty to keep customer information secure and confidential. Snowflake is growing fast and we're scaling our team to help enable and accelerate our growth. We are looking for people who share our values, challenge ordinary thinking, and push the pace of innovation while building a future for themselves and Snowflake. How do you want to make your impact? For jobs located in the United States, please visit the job posting on the Snowflake Careers Site for salary and benefits information: careers.snowflake.com. The following represents the expected range of compensation for this role: the estimated base salary range for this role is $128,000 - $178,500. Additionally, this role is eligible to participate in Snowflake's bonus and equity plan. The successful candidate's starting salary will be determined based on permissible, non-discriminatory factors such as skills, experience, and geographic location. This role is also eligible for a competitive benefits package that includes: medical, dental, vision, life, and disability insurance; 401(k) retirement plan; flexible spending & health savings account; at least 12 paid holidays; paid time off; parental leave; employee assistance program; and other company benefits.
|
|
Data Scientist
Appriss Retail |
|
About Appriss Retail: Appriss Retail provides real-time decisions and active risk monitoring to enable our customers to maximize profitability while managing risk. Our solutions are continually adapting to changing market conditions. We bring 20+ years of retail data science expertise and experience. We serve a global base of leading commerce partners representing 1/3 of all US omnichannel retail sales activity across 150,000 retail locations, spanning specialty apparel, department store, hard goods, big box, grocery, pharmacy, and hospitality businesses in 45 countries on six continents. The company provides compelling, relevant, and profitable collective intelligence to operations, finance, marketing, and loss prevention. Appriss Retail's performance-improvement solutions yield measurable results with significant return on investment.

About The Role: The Data Scientist will play a hands-on role in building, maintaining, and enhancing our production-level data infrastructure. This position is ideal for someone with strong Python and SQL skills who can contribute directly to our code base, especially for data engineering pipelines and MLOps-related functionality. You'll be expected to write clean, maintainable, and production-ready code, with a focus on scaling our data workflows and supporting deployment in cloud environments. While this role may involve statistical and machine learning work, the primary emphasis is on engineering-quality coding, pipeline development, and operationalizing models and data products. A successful candidate combines strong coding skills with deep curiosity about the data they are working with and the surrounding business context. This temporary role is expected to last about six months, with the possibility of extension based on business needs.

What You'll Do:
- Write, maintain, and optimize production-level Python and SQL code for data pipelines, MLOps workflows, and related systems
- Analyze structured and unstructured datasets to identify trends, patterns, and opportunities for improvement
- Design, implement, and maintain automated data ingestion, transformation, and validation pipelines
- Contribute to the design, testing, and deployment of predictive and prescriptive models
- Support deployment of pipelines and ML models, including standing up and managing relevant cloud infrastructure
- Collaborate with engineering, product, and business teams to translate requirements into scalable, code-driven solutions
- Apply rigorous statistical and software engineering best practices to ensure accuracy, reproducibility, and reliability
- Continuously evaluate and integrate tools, frameworks, and methods that improve efficiency, scalability, and maintainability
- Communicate results and recommendations clearly to both technical and non-technical audiences
- Adhere to data governance, security, and privacy standards

Qualifications:
- Master's Degree in Computer Science, Data Science, Statistics, Mathematics, or a related field (Bachelor's degree with significant relevant experience considered)
- Proven track record of writing production-ready Python and SQL code
- Familiarity with common data and ML libraries (e.g. dbt, pandas, NumPy, scikit-learn)
- Strong SQL skills and experience with large, complex datasets
- Experience in end-to-end data project delivery, from code development to deployment
- Familiarity with version control (Git) and collaborative coding workflows
- Strong understanding of software engineering principles in a data science context
- Experience with statistical modeling, machine learning, and A/B testing
- Ability to communicate technical concepts clearly and effectively
- Commitment to producing high-quality, maintainable, and scalable code

Preferred Qualifications:
- Experience with cloud platforms (e.g. Azure, AWS, GCP) and deploying data pipelines in the cloud
- Familiarity with MPP platforms (e.g. Snowflake, Databricks, Greenplum)
- Experience with MLOps tools, CI/CD workflows, and infrastructure as code
- History of standing up or managing cloud infrastructure to support data pipelines and ML deployment

Benefits: At Appriss Retail we offer a competitive and comprehensive benefits package designed to support your well-being at work and beyond. Benefits begin on your first day and include multiple medical plan options, dental and vision coverage, health savings and flexible spending accounts, paid parental leave, and supplemental coverage for life's unexpected moments. We offer generous paid time off, a 401(k) with immediate vesting and company match, short- and long-term disability, and free access to health and wellbeing resources such as Calm and Rocket Lawyer. You'll also have access to learning and development opportunities to help you grow your career. Our benefits support your well-being so you can perform your best in every part of life. The pay range for this role is 115,000 - 125,000 USD per year (Remote, United States).
|
|
Data Engineer - Databricks (Mid Level) - US Citizens/US Green Card Holders with 3 years
Infobahn Solutions Inc. |
Remote
|
Project requirements mandate that this role is open only to US Citizens/US Green Card Holders with a minimum of 3 years on a Green Card. IRS MBI Clearance is a plus; an active Secret or Top Secret clearance is a plus. All candidates will have to go through the clearance process before being able to start on the project (no exceptions to this requirement).

Job Description: Infobahn Solutions is hiring Databricks Data Engineering professionals in the Washington DC Metro Area for a US Government Federal project with the Department of Treasury. The Data Engineers will be part of a Data Migration & Conversion Team on a large data lake being implemented on AWS GovCloud. Data will be migrated from on-premises mainframe/legacy database systems using Informatica PowerCenter to the AWS Landing Zone on S3. Further conversion will be done using Databricks (PySpark) in AWS (a minimal illustrative sketch of this conversion step appears after this listing). The Data Engineer should have prior data migration experience and understand all the intricacies required of developing data integration routines for moving data from multiple source systems to a new target system with a different data model. The Data Engineer should have experience in converting Oracle PL/SQL and/or Greenplum code to Databricks.

Must-have experience: experience with data migrations and conversion using Databricks. Experience using Databricks on AWS and managing a Databricks production system is critical and a must-have for the project.

What you'll be doing:
- Databricks Environment Setup: Configure and maintain Databricks clusters, ensuring optimal performance and scalability for big data processing and analytics
- ETL (Extract, Transform, Load): Design and implement ETL processes using Databricks notebooks or jobs to process and transform raw data into a usable format for analysis
- Data Lake Integration: Work with data lakes and data storage systems to efficiently manage and access large datasets within the Databricks environment
- Data Processing and Analysis: Develop and optimize Spark jobs for data processing, analysis, and machine learning tasks using Databricks notebooks
- Collaboration: Collaborate with data scientists, data engineers, and other stakeholders to understand business requirements and implement solutions
- Performance Tuning: Identify and address performance bottlenecks in Databricks jobs and clusters to optimize data processing speed and resource utilization
- Security and Compliance: Implement and enforce security measures to protect sensitive data within the Databricks environment, ensuring compliance with relevant regulations
- Documentation: Maintain documentation for Databricks workflows, configurations, and best practices to facilitate knowledge sharing and team collaboration

Skills:
- Apache Spark: Strong expertise in Apache Spark, the underlying distributed computing engine in Databricks
- Databricks Platform: In-depth knowledge of the Databricks platform, including its features, architecture, and administration
- Programming Languages: Proficiency in languages such as Python or Scala for developing Spark applications within Databricks
- SQL: Strong SQL skills for data manipulation, querying, and analysis within Databricks notebooks
- ETL Tools: Experience with ETL tools and frameworks for efficient data processing and transformation
- Data Lake and Storage: Familiarity with data lakes and storage systems such as Delta Lake, AWS S3, or Azure Data Lake Storage
- Collaboration and Communication: Effective communication and collaboration skills to work with cross-functional teams and stakeholders
- Problem Solving: Strong problem-solving skills to troubleshoot issues and optimize Databricks workflows
- Version Control: Experience with version control systems (e.g. Git) for managing and tracking changes to Databricks notebooks and code

Role Requirements:
- Bachelor's/Master's degree in computer science, engineering, or a related field
- 7-8+ years of development experience on ETL tools (4+ years of Databricks is a must-have)
- 5+ years of experience as a Databricks Engineer or in a similar role
- Strong expertise in Apache Spark and hands-on experience with Databricks
- More than 7 years of experience performing data reconciliation, data validation, ETL testing, deploying ETL packages, automating ETL jobs, and developing reconciliation reports
- Working knowledge of message-oriented middleware/streaming data technologies such as Kafka, Confluent
- Proficiency in programming languages such as Python or Scala for developing Spark applications
- Solid understanding of ETL processes and data modeling concepts
- Experience with data lakes and storage systems such as Delta Lake, AWS S3, or Azure Data Lake Storage
- Strong SQL skills for data manipulation and analysis
- Good experience in shell scripting, AutoSys
- Strong data modeling skills
- Strong analytical skills applied to business software solutions maintenance and/or development
- Must be able to work with a team to write code, review code, and work on system operations
- Past project experience with data conversion and data migration
- Ability to communicate analysis results and ideas to key decision makers, including business and technical stakeholders
- Experience in developing and deploying data ingestion, processing, and distribution systems with AWS technologies
- Experience using AWS datastores including RDS Postgres, S3, or DynamoDB
- DevOps experience using Git, developing and deploying code to production
- Proficient in using AWS Cloud Services for data engineering tasks
- Proficient in programming in Python/shell or other scripting languages for the purpose of data movement
- Eligible for a US Government issued IRS MBI (candidates with active IRS MBIs will be preferred)

Preferred Qualifications:
- Cloud data migration and conversion projects
- Experience on AWS
- Databricks industry certifications

Job Types: Full-time, Contract. Pay: $90,000.00 - $130,000.00 per year. Benefits: dental insurance, flexible schedule, health insurance, life insurance, paid time off, vision insurance. Education: Bachelor's (Preferred). License/Certification: Databricks Certified Data Engineer Professional (Required). Security clearance: Secret (Preferred). Work Location: Remote.
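For orientation only, a minimal PySpark sketch of the S3 landing-zone-to-Databricks conversion step described in this listing; the bucket, column names, and target table are hypothetical placeholders, not details from the posting:

```python
# Hypothetical illustration of an S3 landing-zone -> Databricks conversion step.
# All paths, columns, and table names below are placeholders, not project details.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("landing-zone-conversion").getOrCreate()

# Read raw extracts that an upstream job (e.g. Informatica PowerCenter) landed in S3.
raw = spark.read.parquet("s3://example-landing-zone/legacy_system/accounts/")

# Apply conversion rules toward the target data model: rename legacy columns,
# de-duplicate on the business key, and stamp the load date.
converted = (
    raw.withColumnRenamed("ACCT_NO", "account_number")
       .dropDuplicates(["account_number"])
       .withColumn("load_date", F.current_date())
)

# Persist as a Delta table so downstream Databricks jobs and SQL can query it.
converted.write.format("delta").mode("overwrite").saveAsTable("target_db.accounts")
```

In practice the transformations would be driven by the mapping between the legacy source systems and the new target data model, as the posting describes.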
|
|
Middle Delivery Engineer
MasterBorn |
Remote
|
Meet MasterBorn, a world-class software development company driving success for businesses in FinTech, SportsTech, and MedTech, led by a passionate team committed to client product success. We are currently improving our world-class service with a Delivery Engineer role. As a Delivery Engineer, you'll play a pivotal role in ensuring the seamless delivery and optimization of our product for our esteemed retail clientele. Dive into a world of problem-solving and collaboration, where your technical prowess and excellent communication skills will be key in driving the success of our service and application features, as well as their quality and performance. Salary: PLN 10,000 - 15,000 + VAT (B2B)

Required:
- 3 years of commercial experience with databases (SQL, database administration, performance tuning)
- Practical knowledge of database scripting languages (T-SQL, PL/pgSQL)
- Experience working with large data sets
- Experience with system integrations and various data formats (XML, JSON, CSV, Parquet)
- Practical experience with Python
- Familiarity with shell scripting languages
- Experience with version control systems (e.g. Git)
- Proficiency in the Microsoft software suite
- Experience working directly with clients
- Polish C1, English B2/C1

Offer: B2B/UZL, paid days off, 100% remote (or hybrid/onsite, as you prefer), full-time position, long-term contract, Polish working hours (10am to 6pm)

Perks and Benefits: 750 PLN / quarter for health insurance and sports, Mentoring Program, remote job offer, and more

Tools you'll use: MS Office Suite, Azure DevOps, Jira, Miro, Windows/iOS, MS SQL, PostgreSQL (Greenplum), Snowflake, Python, PowerShell

Your future tasks and duties:
- Work with existing retail customers to configure and update the product, executing to the documented requirements of the project
- Execute the implementation project through all stages
- Ingest and map the client's source data to a standard data model (ETL and ELT process)
- Effectively translate complex customer requirements, recommend system solutions, and help formulate detailed specifications
- Leverage appropriate resources from the company and the customer, coordinating availability to maximize productivity
- Gain a deep understanding of how our products work, how they interact with each other, and how to build reliable and reusable processes for support
- Act as a problem-solving expert, proactively identifying issues, analyzing root causes, and implementing solutions effectively
- Create documentation used for ongoing support

About you - Tech skills & Experience:
- Highly skilled in writing and optimizing complex SQL queries
- Knowledgeable in database design, administration, and performance tuning
- Experienced in handling large datasets and working with various data formats (JSON, XML, CSV, Parquet)
- Proficient in Python for automation and data processing tasks
- Familiar with shell scripting and version control systems (e.g. Git)
- Comfortable using Microsoft Office tools
- An effective communicator with experience working directly with clients
- Fluent in Polish (C1) and proficient in English (B2/C1)

About you - Soft skills:
- Strong written and verbal communication skills, with the ability to present complex technical information in a clear and concise manner to a variety of audiences
- Ability to communicate with both technical and non-technical customers on a variety of issues
- Ability to select and prioritize tasks within a backlog
- English level B2 minimum for direct communication, documentation, tasks, and other reading/speaking/writing
- Polish level C1 to communicate efficiently within the team
- Strong problem-solving abilities and detail orientation to diagnose issues, suggest solutions, and make decisions based on requirements

Nice to have:
- Experience with Azure DevOps and release pipelines
- Experience in software implementation or Enterprise SaaS solutions, preferably in retail
- Familiarity with the retail industry and data, or hands-on experience working in retail
- Experience with project management tools (preferably JIRA)
- Experience working with Snowflake

Recruitment process: CV review by the HR Team; phone call with the HR Team (15 min); soft-skills interview with the HR Team (1h); technical interview with the Data Team (1h interview + 10 min test); feedback and decision! :)

Want to be proud of the code and product you co-create? Don't hesitate and apply right away!
|
|
Global System Integration Engineer- Remote
Cognyte |
Remote India
|
Today's world is crime-riddled. Criminals are everywhere: invisible, virtual, and sophisticated. Traditional ways to prevent and investigate crime and terror are no longer enough... Technology is changing incredibly fast. The criminals know it, and they are taking advantage. We know it too. For nearly 30 years the incredible minds at Cognyte around the world have worked closely together and put their expertise to work to keep up with constantly evolving technological and criminal trends, and to help make the world a safer place with leading investigative analytics software solutions. We are defined by our dedication to doing good, and this translates to business success, meaningful work, friendships, a can-do attitude, and deep curiosity. We are looking for an accomplished and passionate Global System Integration Engineer to join our cyber intelligence team in India. You will have an opportunity to participate in developing state-of-the-art security technologies and benefit from high-level involvement in major projects that will significantly contribute to making our world safer and more secure.

Job Summary: You will be responsible for end-to-end Cognyte product integration (installation, troubleshooting, testing of Cognyte and commercial off-the-shelf equipment, SAT & handover), globally and in-house. The Global System Integration Engineer will represent Cognyte in front of the customer for integration with the customer network, HW & SW installations, general technical issues, on-site trainings, and QA testing. The job might require 0-20% travel. You should be able to work independently at a site, troubleshoot issues with the help of remote support, communicate issues clearly (including to the customer), and have overall "system level" technical capabilities.

Your impact:
- End-to-end responsibility for global integration and installation of Cognyte products at customer sites/labs
- Analysis and troubleshooting skills across multiple technologies
- Customer oriented, with good verbal and written communication skills in English
- A self-starter: multitasking, independent, responsible, able to take initiative
- Ability to analyze and diagnose problems; cross-system analytic skills
- Good interpersonal communication with other team players
- Presentation skills in front of customers; able to write technical work procedures and guides; a self-learner based on manuals
- Ability to work under pressure and handle multiple assignments
- Availability to provide support beyond office hours

Requirements - your toolbox:
- Practical engineering degree in electronics or computer science; a diploma will also be considered
- Mandatory: experience as an engineer in a global company working with customers abroad. Advantage: experience working with Israeli colleagues, or having already relocated to another country
- At least 4 years of relevant experience supporting multiple technologies in the IT Support/Telecom field
- Mandatory: knowledge of Linux, Bash/Shell, Python scripting
- Mandatory: knowledge of OpenShift, Docker, Kubernetes, Ansible, Jenkins (any DevOps tools)
- Mandatory: knowledge of VMware vCenter & vStorage
- Mandatory: knowledge of Windows OS, including installation, administration, and configuration
- Mandatory: knowledge of the hardware area: Dell and HP servers, ILO/iDRAC, RAID (all hardware generations of servers)
- Mandatory: good written and oral communication skills in English
- Mandatory: flexibility to work across different time zones and willingness to work beyond office hours if needed
- Mandatory: global customer handling experience
- Advantage: knowledge of Big Data, Hadoop clusters
- Advantage: knowledge of storage installation, administration, and configuration (NetApp/EMC)
- Advantage: knowledge of databases: MS SQL, Sybase, Hive, Greenplum DB
- Advantage: knowledge of data security, firewalls, security protocols
- Advantage: knowledge of virtual application load balancers (Citrix XenApp & AppDirector)
- Advantage: knowledge of cellular networks & networking: LAN, WAN & switching, routers
- Advantage: protocols (BRI, X.25, TCP/IP, VoIP, MPLS, telephony)

Career Page: Customer services. Country: India
|
|
Data Engineer
QuadCode |
Remote Greece
|
About the Team: We are Quadcode, a fintech company excelling in financial brokerage activities and delivering advanced financial products to our global clientele. Our flagship product, an internal trading platform, is offered as a Software-as-a-Service (SaaS) solution to other brokers. We are currently looking for a Data Engineer to join our Data Platform team. This team plays a crucial role in building and maintaining the company's analytical platform, enabling data-driven decision-making across the business. Our current team consists of 3 Data Engineers and 1 Team Leader. We follow Agile practices, with daily stand-ups at 12:00 PM (GMT+3), regular peer code reviews, and tools like Slack, Google Meet, and Zoom for collaboration.

Tech Stack:
- Databases: Greenplum, PostgreSQL, ClickHouse
- ETL & Orchestration: Airflow, DBT
- Programming: Python, Scala
- Streaming & Messaging: Apache Kafka, Apache Flink
- BI & Visualization: Metabase
- Storage & Tools: S3, Datahub, Linux

### Tasks:
- Integrate new data sources into the platform
- Respond to internal user requests and incidents
- Co-develop data marts in collaboration with analysts
- Ensure the completeness and consistency of analytical data

### What We Expect From You:
- 1+ years in Data Engineering or 2+ years in Data Analytics
- Working knowledge of relational databases (Greenplum, PostgreSQL, Oracle, MySQL, MS SQL)
- Solid understanding of DBMS and ETL concepts (ACID, normalization, CAP theorem, OLTP vs OLAP, scaling strategies)
- Proficiency in SQL and query optimization
- Experience with Linux environments and Docker
- Good Python skills (OOP, data structures, decorators, venv, PEP8)
- Experience with Airflow and message brokers like Kafka
- BI tools experience
- Fluency in Russian plus English at B1, or English at B2 or above
- Familiarity with NoSQL databases (Cassandra, Redis, Infinispan)

### Nice to Have:
- Familiarity with NoSQL databases (Cassandra, Redis, Infinispan)
- Experience with GitLab CI/CD, Grafana, Ansible
- Knowledge of Flink, Spark, Scala, Kubernetes
- Formal education or certifications in Data Engineering or Data Science

### What we offer:
- Full-time remote work model (Service Provider)
- Competitive remuneration
- 20 paid days off annually
- Flexible working hours
- Training and development opportunities
- A friendly, enjoyable, and positive work environment

Currently over 700 employees and service providers are stationed across Quadcode's seven global offices, located in the UK, Gibraltar, the UAE, the Bahamas, Australia, and the headquarters in Cyprus. By broadening its international presence, Quadcode presents a myriad of intriguing tasks and challenges for professionals like developers, market research analysts, and PR marketing specialists, among others. Join us today and let's shape the future of fintech together! Note: All applications will be treated with strict confidence. We thank all applicants for their interest; however, only those candidates selected for interviews will be contacted. #LI-JM1
|
|
Data Engineer
QuadCode |
Remote Spain
|
About the Team: We are Quadcode, a fintech company excelling in financial brokerage activities and delivering advanced financial products to our global clientele. Our flagship product, an internal trading platform, is offered as a Software-as-a-Service (SaaS) solution to other brokers. We are currently looking for a Data Engineer to join our Data Platform team. This team plays a crucial role in building and maintaining the company's analytical platform, enabling data-driven decision-making across the business. Our current team consists of 3 Data Engineers and 1 Team Leader. We follow Agile practices, with daily stand-ups at 12:00 PM (GMT+3), regular peer code reviews, and tools like Slack, Google Meet, and Zoom for collaboration.

Tech Stack:
- Databases: Greenplum, PostgreSQL, ClickHouse
- ETL & Orchestration: Airflow, DBT
- Programming: Python, Scala
- Streaming & Messaging: Apache Kafka, Apache Flink
- BI & Visualization: Metabase
- Storage & Tools: S3, Datahub, Linux

### Tasks:
- Integrate new data sources into the platform
- Respond to internal user requests and incidents
- Co-develop data marts in collaboration with analysts
- Ensure the completeness and consistency of analytical data

### What We Expect From You:
- 1+ years in Data Engineering or 2+ years in Data Analytics
- Working knowledge of relational databases (Greenplum, PostgreSQL, Oracle, MySQL, MS SQL)
- Solid understanding of DBMS and ETL concepts (ACID, normalization, CAP theorem, OLTP vs OLAP, scaling strategies)
- Proficiency in SQL and query optimization
- Experience with Linux environments and Docker
- Good Python skills (OOP, data structures, decorators, venv, PEP8)
- Experience with Airflow and message brokers like Kafka
- BI tools experience
- Fluency in Russian plus English at B1, or English at B2 or above
- Familiarity with NoSQL databases (Cassandra, Redis, Infinispan)

### Nice to Have:
- Familiarity with NoSQL databases (Cassandra, Redis, Infinispan)
- Experience with GitLab CI/CD, Grafana, Ansible
- Knowledge of Flink, Spark, Scala, Kubernetes
- Formal education or certifications in Data Engineering or Data Science

### What we offer:
- Remote work model
- Competitive remuneration
- A friendly, enjoyable, and positive environment

Currently over 700 employees and service providers are stationed across Quadcode's seven global offices, located in the UK, Gibraltar, the UAE, the Bahamas, Australia, and the headquarters in Cyprus. By broadening its international presence, Quadcode presents a myriad of intriguing tasks and challenges for professionals like developers, market research analysts, and PR marketing specialists, among others. Join us today and let's shape the future of fintech together! Note: All applications will be treated with strict confidence. We thank all applicants for their interest; however, only those candidates selected for interviews will be contacted. #LI-LR1
|
|
Data Engineer
QuadCode |
Remote Poland
|
About the Team: We are Quadcode, a fintech company excelling in financial brokerage activities and delivering advanced financial products to our global clientele. Our flagship product, an internal trading platform, is offered as a Software-as-a-Service (SaaS) solution to other brokers. We are currently looking for a Data Engineer to join our Data Platform team. This team plays a crucial role in building and maintaining the company's analytical platform, enabling data-driven decision-making across the business. Our current team consists of 3 Data Engineers and 1 Team Leader. We follow Agile practices, with daily stand-ups at 12:00 PM (GMT+3), regular peer code reviews, and tools like Slack, Google Meet, and Zoom for collaboration.

Tech Stack:
- Databases: Greenplum, PostgreSQL, ClickHouse
- ETL & Orchestration: Airflow, DBT
- Programming: Python, Scala
- Streaming & Messaging: Apache Kafka, Apache Flink
- BI & Visualization: Metabase
- Storage & Tools: S3, Datahub, Linux

### Tasks:
- Integrate new data sources into the platform
- Respond to internal user requests and incidents
- Co-develop data marts in collaboration with analysts
- Ensure the completeness and consistency of analytical data

### What We Expect From You:
- 1+ years in Data Engineering or 2+ years in Data Analytics
- Working knowledge of relational databases (Greenplum, PostgreSQL, Oracle, MySQL, MS SQL)
- Solid understanding of DBMS and ETL concepts (ACID, normalization, CAP theorem, OLTP vs OLAP, scaling strategies)
- Proficiency in SQL and query optimization
- Experience with Linux environments and Docker
- Good Python skills (OOP, data structures, decorators, venv, PEP8)
- Experience with Airflow and message brokers like Kafka
- BI tools experience
- Fluency in Russian plus English at B1, or English at B2 or above
- Familiarity with NoSQL databases (Cassandra, Redis, Infinispan)

### Nice to Have:
- Familiarity with NoSQL databases (Cassandra, Redis, Infinispan)
- Experience with GitLab CI/CD, Grafana, Ansible
- Knowledge of Flink, Spark, Scala, Kubernetes
- Formal education or certifications in Data Engineering or Data Science

### What we offer:
- Remote work model
- Competitive remuneration
- A friendly, enjoyable, and positive environment

Currently over 700 employees and service providers are stationed across Quadcode's seven global offices, located in the UK, Gibraltar, the UAE, the Bahamas, Australia, and the headquarters in Cyprus. By broadening its international presence, Quadcode presents a myriad of intriguing tasks and challenges for professionals like developers, market research analysts, and PR marketing specialists, among others. Join us today and let's shape the future of fintech together! Note: All applications will be treated with strict confidence. We thank all applicants for their interest; however, only those candidates selected for interviews will be contacted. #LI-LR1
|
|
Solutions Architect - Public Sector
Snowflake |
Texas, United States
|
Where Data Does More. Join the Snowflake team. We are looking for a Solutions Architect, Public Sector, to be part of our Professional Services team to deploy cloud products and services for our customers. This person must be a hands-on self-starter who loves solving innovative problems in a fast-paced, agile environment. The ideal candidate will have the insight to connect a specific business problem and Snowflake's solution, and communicate that connection and vision to various technical and executive audiences. The person we're looking for shares our passion for reinventing the data platform and thrives in a dynamic environment. That means having the flexibility and willingness to jump in and get it done to make Snowflake and our customers successful. It means keeping up to date on the ever-evolving data and analytics technologies, and working collaboratively with a broad range of people inside and outside the company to be an authoritative resource for Snowflake and its customers. This role is 100% remote and can be based anywhere in the United States.

AS A SOLUTIONS ARCHITECT, PUBLIC SECTOR, AT SNOWFLAKE YOU WILL:
- Present Snowflake technology and vision to executives and technical contributors at customers, specifically in the Public Sector
- Position yourself as a Trusted Advisor to key customer stakeholders, with a focus on achieving their desired business outcomes
- Drive project teams towards the common goal of accelerating the adoption of Snowflake solutions
- Demonstrate and communicate the value of Snowflake technology throughout the engagement, from demo to proof of concept to running workshops, design sessions, and implementation with customers and stakeholders
- Create repeatable processes and documentation as a result of customer engagement
- Collaborate on and create industry-based solutions that are relevant to other customers in order to drive more value out of Snowflake
- Follow best practices, including ensuring knowledge transfer so that customers are correctly enabled and can extend the capabilities of Snowflake on their own
- Maintain a deep understanding of competitive and complementary technologies and vendors, and how to position Snowflake in relation to them

OUR IDEAL SOLUTIONS ARCHITECT, PUBLIC SECTOR, WILL HAVE:
- Extensive hands-on Snowflake experience
- BA/BS in computer science, engineering, mathematics, or related fields, or equivalent practical experience
- Minimum 5 years of experience as a solutions architect, data architect, database administrator, or data engineer
- 3 years of technical consulting experience
- Experience in a lead role on analytics projects for large enterprises
- Experience implementing and operating Snowflake-centric solutions
- Understanding of the complete data analytics stack and workflow, from ETL to data platform design to BI and analytics tools
- Familiarity with industry-common BI and ETL solutions
- Hands-on experience in a technical role leveraging SQL, Python, Java, and/or Spark to build, operate, and maintain data analytics solutions
- Extensive knowledge of and experience with large-scale database technology (e.g. Snowflake, Netezza, Exadata, Teradata, Greenplum, etc.)
- Proficiency in implementing data security measures, access controls, and design, specifically within the Snowflake platform

A STRONG CANDIDATE WILL ADDITIONALLY HAVE:
- Experience in the services organization of a product company
- Experience in a leadership position executing technical projects
- Industry vertical expertise (Media, Financial Services, Healthcare, etc.)
- Application development experience
- AWS, Google, or Microsoft Cloud certification(s)
- Snowflake SnowPro Advanced Certification(s)

Every Snowflake employee is expected to follow the company's confidentiality and security standards for handling sensitive data. Snowflake employees must abide by the company's data security plan as an essential part of their duties. It is every employee's duty to keep customer information secure and confidential. Snowflake is growing fast and we're scaling our team to help enable and accelerate our growth. We are looking for people who share our values, challenge ordinary thinking, and push the pace of innovation while building a future for themselves and Snowflake. How do you want to make your impact? For jobs located in the United States, please visit the job posting on the Snowflake Careers Site for salary and benefits information: careers.snowflake.com. The following represents the expected range of compensation for this role: the estimated base salary range for this role is $147,200 - $205,275. Additionally, this role is eligible to participate in Snowflake's bonus and equity plan. The successful candidate's starting salary will be determined based on permissible, non-discriminatory factors such as skills, experience, and geographic location. This role is also eligible for a competitive benefits package that includes: medical, dental, vision, life, and disability insurance; 401(k) retirement plan; flexible spending & health savings account; at least 12 paid holidays; paid time off; parental leave; employee assistance program; and other company benefits.
|