Made by Team Wedgwood

Pipelines, not pottery!

We created an Extract, Transform and Load (ETL) data pipeline to practise our skills. It was based on the provided totesys database and designed to ensure data integrity, cost-efficiency, and ease of access for business intelligence purposes.
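As a rough sketch of the three pipeline stages (the function names, field names, and totesys columns here are illustrative assumptions, not our actual schema), the flow can be expressed in Python:

```python
from datetime import datetime, timezone


def extract(rows):
    """Extract stage: in the real pipeline this would query the totesys
    OLTP database; here rows are passed in to keep the sketch self-contained."""
    return [dict(row) for row in rows]


def transform(rows):
    """Transform stage: normalise field names and stamp each record
    with a UTC load time (field names are hypothetical)."""
    loaded_at = datetime.now(timezone.utc).isoformat()
    return [
        {
            "sale_id": row["id"],
            "amount_gbp": round(float(row["amount"]), 2),
            "loaded_at": loaded_at,
        }
        for row in rows
    ]


def load(rows, warehouse):
    """Load stage: in production this would write to the data warehouse;
    here `warehouse` is any list-like sink."""
    warehouse.extend(rows)
    return len(rows)
```

Keeping each stage a pure function over plain records makes the pipeline easy to test in isolation before wiring it to real AWS services.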

The Team

Arman Mestari

With a strong foundation in data engineering and a passion for turning complex problems into effective solutions, I bring enthusiasm and technical skills to every challenge. My recent training through Northcoders’ intensive boot camp equipped me with expertise in Python, database design, cloud technologies, and problem-solving frameworks. I thrive in dynamic environments where I can leverage my skills and creativity to drive innovation. Eager to contribute to a forward-thinking team, I am committed to continual growth and delivering value as a Junior Data Engineer.

GitHub
Leanne Bibby

I spent 14 years as an academic researcher, author and lecturer before deciding to upskill and train with Northcoders as a Data Engineer. I am happiest when I have a project, and I wanted to bring my experience to bear on the complex technical problems facing our increasingly data-driven society. At Northcoders, I have nurtured my interest in handling data responsibly and securely, and building solutions imaginatively and in close collaboration with colleagues and stakeholders.

GitHub LinkedIn
Paola Delle Fontane

Approachable Data Engineer with extensive experience in the telecom industry as a Network Engineer, now looking for opportunities in data engineering. I thrive on understanding complex systems and on problem-solving. I am extremely keen on expanding my technical skills and sharing knowledge through collaboration, mentoring, and contributing to team growth. I am an attentive listener who understands other points of view and ensures clear communication. I understand the bigger picture while keeping an eye for detail.

GitHub
Rajalakshmi Perumel

With several years of hands-on experience in both backend and frontend development, I am skilled in languages such as Python, Java, JavaScript, and SQL. I have a proven track record of delivering scalable solutions in cloud environments like AWS. My expertise includes developing RESTful APIs and implementing best practices in code quality, testing, and DevOps. I am passionate about continuous learning, enjoy collaborating in cross-functional teams, and thrive on solving complex technical challenges. I am committed to writing clean, maintainable code and contributing to projects that make a meaningful impact.

GitHub
Dominic Garvey-Bramwell

I’m a data engineering graduate from Northcoders, where I gained hands-on experience in building data pipelines, working with cloud infrastructure, and designing scalable data systems. I’m now pursuing a career in data engineering—a field that complements my analytical approach and interest in solving complex problems. I hold a BA in Spanish (Hons) from the University of Leeds and a Master’s in International Politics from the University of Sheffield, bringing a multidisciplinary perspective to technical challenges.

GitHub LinkedIn

Tech Stack

We used GitHub, GitHub Actions, GitHub Secrets, AWS Lambda, Amazon S3, AWS Secrets Manager, Terraform, and Tableau. These were the most appropriate serverless solutions for an efficient and usable pipeline, and they also allowed us to practise and stretch our skills as Data Engineers.
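As a hedged sketch of how a Lambda might use AWS Secrets Manager and S3 (the secret name, bucket, and JSON layout below are illustrative assumptions, not our actual configuration), the AWS clients can be injected so the logic is exercisable without AWS access:

```python
import json


def get_db_credentials(secrets_client, secret_name="totesys-db-credentials"):
    """Fetch database credentials from AWS Secrets Manager.

    In the Lambda, secrets_client would be boto3.client("secretsmanager");
    it is injected here so the sketch runs without AWS access. The secret
    name and its JSON layout are hypothetical.
    """
    response = secrets_client.get_secret_value(SecretId=secret_name)
    return json.loads(response["SecretString"])


def write_to_ingestion_bucket(s3_client, bucket, key, rows):
    """Write extracted rows to the S3 ingestion bucket as JSON.

    In the Lambda, s3_client would be boto3.client("s3"); the bucket
    and key naming are placeholders.
    """
    s3_client.put_object(Bucket=bucket, Key=key, Body=json.dumps(rows).encode())
```

Passing the clients in as parameters (rather than constructing them inside the functions) is what makes these handlers straightforward to unit test with fakes or with Moto.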

Challenges Faced

We tested continually using a variety of methods, but in future projects we would like to develop a more rigorous test-driven development (TDD) workflow, using pytest and mocking AWS services with Moto.
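As a minimal sketch of the kind of pytest unit test we mean (the helper under test is hypothetical, not code from our pipeline; AWS-facing tests would additionally wrap the test in Moto's mocking in the same style):

```python
# pytest collects test_* functions automatically and runs the bare
# assertions inside them; no test runner boilerplate is needed.

def currency_code(row):
    """Hypothetical transform helper: pull a clean ISO currency code
    out of a raw totesys row."""
    code = row.get("currency", "").strip().upper()
    if len(code) != 3 or not code.isalpha():
        raise ValueError(f"invalid currency code: {code!r}")
    return code


def test_currency_code_is_normalised():
    assert currency_code({"currency": " gbp "}) == "GBP"


def test_currency_code_rejects_garbage():
    try:
        currency_code({"currency": "£"})
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

Writing the failing test first, then the helper, is the TDD loop we would apply more consistently next time.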
