About the role
As a Data Engineer in the newly created Data and Artificial Intelligence Office, you will work with a specialized team to develop a cutting-edge data application serving all of CentralNic across North America, Europe, Australia, and New Zealand, and every department including finance, support, product management, sales, and the company’s executive team.
In November 2020, Codewise was acquired by CentralNic Group and now operates as CentralNic Poland. The hiring process is conducted by CentralNic Poland, but the employment contract will be signed by the Group entity closest to your physical location.
CentralNic (AIM: CNIC) is a global company listed on the London Stock Exchange, serving millions of web customers around the globe.
Every day, massive amounts of data are generated throughout the company, and collecting, collating, analyzing, and understanding this big data has become a critical mandate.
We are a leading provider of tools required to create websites, use email, and secure business online. Headquartered in London, the Group generates revenue and income from the worldwide sale of internet domain names and hosting on an annual subscription basis.
Your responsibilities
Design, code and maintain data gathering, processing and delivery systems for a wide variety of operational and financial data throughout the global organization
Discover, analyze and validate new data sets for multiple departments and divisions
Contribute to the technical solution starting with the design all the way through to the development of the code
Recommend methods and processes that will improve data reliability, efficiency and quality
Follow agile methodology in the development of the application
Follow the team’s existing software engineering best practices and suggest improvements
Requirements
Proven experience in software engineering with expertise or interest in data engineering
Experience with Python (awswrangler, pytest, mypy, boto3, pandas, numpy)
Proficient in working with AWS (Redshift, Glue, Athena, Lambda, SNS, CloudFormation)
Experience writing complex, highly optimized SQL queries across large data sets
Strong analytical skills related to working with unstructured datasets
Well versed in agile engineering best practices, including version control (Git), test-driven development and continuous integration
Nice to have
Expertise in NoSQL databases
Experience with streaming data, e.g. Apache Kafka or Amazon Kinesis
Prior exposure to visualization tools and libraries such as QuickSight or Tableau
Understanding of Big Data processing architectures, e.g. Hadoop, Spark, Presto
Bachelor’s degree or higher in Software Engineering, Computer Science, Information Management, Big Data & Analytics, or related field
Interest in data analytics
What we offer
Total compensation package comprising base salary and variable pay (cash bonus and share options)
Flexible working hours
Training budget and pool of training days
Online training platform for everyone
Private gym with a trainer
Birthday day off
NAIS (benefit cafeteria with monthly bonuses)
Additional 3 days off