You will develop, test, and maintain data architectures that keep data accessible and ready for analysis. Your tasks will include data modelling, ETL (extract, transform, load), building and developing the data architecture, and testing the database architecture.
Create and maintain optimal data pipeline architecture
Assemble large, complex data sets that meet functional / non-functional business requirements
Identify, design, and implement process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud big data technologies.
Build / use analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Real impact one step at a time
Your impact will start within the project's context and extend beyond it, through the Competence Area community you will be part of, with a strong focus on your technical skills.
You will have access to AI Community trainings and programs that build both technical and tactical skills, while you engage with new projects and opportunities arriving in our business line.
The community consists of Data Scientists, Machine Learning Engineers and Data Engineers who share knowledge and project insights on a regular basis. We engage in projects involving Computer Vision, NLP, Advanced Analytics, and Prevention and Trends Analysis.
2+ years of professional experience
Experience building and optimizing big data pipelines, architectures and data sets
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Strong analytic skills related to working with unstructured datasets
Experience building processes supporting data transformation, data structures, metadata, dependency and workload management
Knowledge of manipulating, processing and extracting value from large disconnected datasets
Working knowledge of message queuing, stream processing, and highly scalable big data stores
Experience with:
Big data tools: Apache Spark (preferred), Hadoop, Kafka, etc.
Cloud services: Google Cloud (preferred), Azure
Stream-processing systems: Storm, Spark Streaming, etc.
Object-oriented / functional scripting languages: Scala (preferred), Python, Java, C++, etc.
Willingness to develop:
Relational SQL and NoSQL databases, including Postgres and Cassandra
Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
Extensive knowledge of visualization tools: Power BI, Tableau, etc.
At Accesa & RARo you can:
Enjoy our holistic benefits program, built on the four pillars that we believe come together to support our wellbeing: physical, emotional and social wellbeing, as well as work-life fusion.
Physical: premium medical package for both our colleagues and their children, dental coverage up to a yearly amount, eyeglasses reimbursement every two years, voucher for sports equipment expenses, in-house personal trainer
Emotional: individual therapy sessions with a certified psychotherapist, webinars on self-development topics
Social: virtual activities, sports challenges, special-occasion get-togethers
Work-life fusion: yearly increase in days off, flexible working schedule, birthday, holiday and loyalty gifts for major milestones, work-from-home bonuses