Key Skills:
Team Player
Analytical Thinking
Communication Skills
We are looking for a Middle Python Developer to join our team. As a member of the Engineering team, you will work closely with data scientists and play a key role in designing and building state-of-the-art data platforms that support the training and inference tasks of the company’s predictive models.
Welcome Bonus: $3,000
The Data Engineer will work with internal team members to clarify data requirements and with software engineers on design and implementation details to achieve high-quality data ingestion.
You should have a passion for complex problems and enjoy the challenge of operating complex event and data streaming pipelines.
In this role, you will be a technical expert with significant scope and impact. You will work closely with a group of Software Engineers, Product Managers, Data Scientists, and DevOps Engineers to create the data infrastructure and pipelines necessary to drive initiatives.
Work at Exadel - Who We Are:
Since 1998, Exadel has been engineering its own software products and custom software for clients of all sizes. Headquartered in Walnut Creek, California, Exadel currently has 2,500+ employees in development centers across the Americas, Europe, and Asia.
Our people drive Exadel’s success and are at the core of our values, making Exadel a people-first company.
About Our Customer:
The customer is a leading provider of vehicle lifecycle solutions, enabling the companies that build, insure, repair, and replace vehicles to power the next generation of transportation.
The company delivers advanced mobile, artificial intelligence, and connected car technologies through its platform, connecting a vibrant network of 350+ insurance companies, 24,000+ repair facilities, OEMs, hundreds of parts suppliers, and dozens of third-party data and service providers.
The customer's collective set of solutions inform decision-making, enhance productivity, and help clients deliver faster and better experiences for end consumers.
The customer’s company was ranked #17 in the Top 100 Digital Companies in Chicago in 2020 by Built In Chicago, an online community for digital technology entrepreneurs in Chicago, and was named one of Forbes’ best mid-sized companies to work for in 2019, an important accolade and retention tool for the 2,600+ full-time company employees (alongside 350 dedicated contractors).
The company’s corporate headquarters is in downtown Chicago in the historic Merchandise Mart, a certified LEED (Leadership in Energy and Environmental Design) building that is also known as a technology hub within the broader metro area.
About Our Project:
SafeKeep is a business unit that focuses on automating insurance claim subrogation for auto, workers’ compensation, property, and general liability claims.
SafeKeep leverages data analysis and AI/machine learning packaged into a smart workflow engine to:
Identify claims with subrogation potential
Minimize the impact of subrogation team turnover
Decrease the administrative costs of subrogation
Simplify and optimize cross-carrier interaction
The smart workflow engine’s UI is built specifically for subrogation business processes, bringing together the data needed to make decisions about and track the recovery process.
This includes demand packages, settlements, and payments.
SafeKeep provides a digital platform for carrier-to-carrier collaboration to standardize and streamline the interaction.
In 2020 alone, SafeKeep won the following awards:
Innovation Championship by Zurich - out of 1,300 solutions worldwide, SafeKeep won 1st place
Innovation in Insurance Awards - out of 359 innovations from 45 different countries, SafeKeep won the Global Silver Award as one of "the 3 best innovations at a global level" in InsurTech
Plug and Play Insurance Partners voted SafeKeep the #1 InsurTech
Project Teams (4 Scrum teams):
2 Teams of full-stack engineers focused on Claimflo (core subrogation workflow app)
1 Team of data engineers focused on ingestion pipelines and integration with carriers (3 data engineers based in the US)
1 Team of data scientists focused on building a subrogation opportunity detection model
Requirements:
3+ years of industry experience in Software Development, Data Engineering, Business Intelligence, or a related field, with a solid track record of manipulating, processing, and extracting value from large datasets
Hands-on experience and advanced knowledge of SQL and non-relational databases, data modeling, ETL development using Python, and data warehousing
Strong experience using big data technologies (PostgreSQL, Hadoop, Hive, HBase, Spark, etc.)
Experience working with data streaming technologies (Kafka, Spark Streaming, etc.)
Knowledge and experience with Data Management and Data Storage best practices
Proficiency in both written and verbal communication, sufficient for success in a remote and largely asynchronous work environment
Demonstrated capacity to clearly and concisely communicate about complex technical, architectural, and/or organizational problems and propose thorough iterative solutions
Experience with performance and optimization problems and a demonstrated ability to both diagnose and prevent these problems
Experience owning a feature from concept to production, including proposal, discussion, and execution
Self-motivated and self-managed with strong organizational skills
English level: Intermediate+
Nice to Have:
Bachelor’s degree in Engineering, Mathematics, or a related technical discipline
Master’s degree in Computer Science, Mathematics, Statistics, or another quantitative field
Background in financial services including banking, insurance, or an equivalent
Degree in Computer Science, Engineering, Mathematics, or a related field and 7+ years of industry experience
Experience working with AWS big data technologies (Redshift, S3, EMR, Glue, etc.)
Knowledge of software engineering best practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations
Responsibilities:
Design, implement, and support scalable multi-tenant data infrastructure solutions that integrate with multiple heterogeneous data sources, aggregate and retrieve data quickly and securely, and curate data for use in reporting, analysis, machine learning models, and ad-hoc data requests
Design and implement complex ingestion and analysis pipelines and other BI solutions
Interface with other engineering and ML teams to extract, transform, and load data from a wide variety of data sources using SQL, non-relational, and big data technologies
Advantages of Working with Exadel:
You can build your expertise with our Client Engagement team, who provide assistance with existing and potential projects
You can join any Exadel Community or create your own to communicate with like-minded colleagues
You can participate in continuing education as a mentor or speaker. Mentoring is rewarded both emotionally and financially
You can take part in internal and external meetups as a speaker or listener. We support you in broadening your horizons and encourage knowledge sharing for all of our employees
You can learn English with the support of native speakers
You can take part in cultural, sporting, charity, and entertainment events
Working at Exadel means always upgrading your skills and proficiency, so we provide plenty of opportunities for professional development.
If you’re looking for a challenge that will take you to the next level of your career, you’ve found the right place.
We work hard to ensure honest and open relations between employees and leadership, so our offices are friendly environments.