GoDaddy powers the world's largest cloud platform dedicated to small, independent ventures. With nearly 18 million customers worldwide and over 77 million domain names under management, GoDaddy is the place people come to name their idea, build a professional website, attract customers and manage their work.
Our mission is to give our customers the tools, insights and the people to transform their ideas and personal initiative into success.
As a data-driven company, GoDaddy is continually seeking talented, highly motivated Senior Data Engineers to join our Analytics team.
On our team, you will join forces with leaders of our product, platform, website, customer experience, and marketing teams, and will be integral to the company's growth through the engineering solutions you produce.
This role is all about powering insights and action with data. This spans how we bring in data, how we transform and model it, and how we power customer experiences with it.
We are seeking team members who are passionate, analytical, strategic, and inquisitive. The successful candidate will be an out-of-the-box thinker who takes innovative and creative approaches to solving complex problems.
The right person will raise the profile and excellence of our entire team. If this is you, we want to hear your story! Come join our extraordinary team and help us make our customers’ big ideas come to life!
In this role you will:
Be responsible for extracting and analyzing large amounts of data across internal and external data sources with a primary focus on supporting ongoing modeling activities
Demonstrate knowledge of distributed computing and complex data structures to extract and model data into consumable matrices for model estimation and real-time scoring and decisioning
Understand the end-goal of data and models to provide your own insights and recommendations for improving customer value
Provide critical support of your team's existing production processes to ensure the reliability of the modeling and scoring platforms
Work with internal teams to architect technical solutions that produce delightful customer experiences
Design and implement solutions that capture data once considered to be impossible or too expensive to acquire
Practice good engineering standards and work well within an agile process to produce long-lasting and high-fidelity solutions
Care more about the optimal solution than the process, working efficiently to incur minimal technical debt
Break large problems into small components that provide interim progress with respect to serving our customers
What you need:
5+ years in data environments and practical knowledge of complex semi-structured (JSON) or low-structure (raw text) data types
Public cloud experience: AWS (preferred), Google Cloud, or Azure
Experience working with streaming data platforms (Kafka, AWS Kinesis, Google Pub/Sub)
Experience implementing and tuning processes on distributed systems such as Apache Spark or Flink, in both batch and streaming modes, e.g. using the Apache Beam library
Experience with a broad spectrum of data access patterns, including those optimal for databases such as Apache Cassandra, Redis, Hadoop, MSQL, MySQL, Teradata, Redshift, and DynamoDB
Prior experience extracting, modeling and presenting data that provides valuable insights to the enterprise
Experience with deep learning libraries such as TensorFlow and PyTorch
Data query languages such as HiveQL, SQL, and Pig
Programming languages such as Scala, Python, and Java
You have:
Ability to work autonomously and make critical decisions independently
Proven communication and collaboration skills
Willingness to take the initiative to contribute beyond basic responsibilities