
Made by Team Duck

Quacking the code, one pipeline at a time

Terrific Totes – Cloud-Based Data Pipeline

The Team

Monika Kaploniak

Monika Kaploniak

I am an engineer with two years of professional experience in software development, having transitioned from a successful career in finance. I’ve always been drawn to the dynamic, problem-solving nature of tech, and recently I’ve developed a growing interest in data engineering. This motivated me to join the Northcoders Data Engineering Bootcamp to strengthen my skills in the field. The opportunity to work with data excites me, and I’m eager to tackle complex, data-driven challenges. With strong analytical skills from both software development and finance, I’m ready to apply them in a data engineering role and contribute to innovative projects.

LinkedIn
Rita

Rita

Rita is a Software Engineer (Data Engineer) with an interest in DevOps.

GitHub
Timothy Monaghan

Timothy Monaghan

Timothy is a biology graduate and Junior Data Engineer with experience in film, TV, and sports broadcasting.

GitHub
David Yuan

David Yuan

David is a Junior Data Engineer.

GitHub

Tech Stack

Tech Stack for this group

We used AWS, Terraform, Python, Pandas, PostgreSQL, and SQL: a modern tech stack built around the cloud.
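To illustrate how these tools fit together, here is a minimal, hypothetical sketch (not the team’s actual code) of a Pandas transform step that joins OLTP-style sales rows to a staff dimension and derives a total, the kind of work that sits between extract and load:

```python
import pandas as pd

# Hypothetical OLTP-style rows, standing in for data pulled from PostgreSQL.
sales = pd.DataFrame({
    "sales_order_id": [1, 2, 3],
    "staff_id": [10, 10, 20],
    "units_sold": [5, 3, 8],
    "unit_price": [2.50, 2.50, 4.00],
})
staff = pd.DataFrame({
    "staff_id": [10, 20],
    "first_name": ["Ada", "Grace"],
})

# Transform: join the fact rows to the staff dimension and derive a total.
fact_sales = sales.merge(staff, on="staff_id", how="left")
fact_sales["total"] = fact_sales["units_sold"] * fact_sales["unit_price"]

print(fact_sales[["sales_order_id", "first_name", "total"]])
```

In a real pipeline this DataFrame would then be written out (for example as Parquet to S3) ready for loading into the warehouse.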

Challenges Faced

We faced a range of challenges; some we overcame during the project, and others we have recorded as lessons for the next iteration of the MVP.


Stoke

Made by Team Stoke Stoke usage has been trending downwards The project was a chance for us to showcase the skills and knowledge we have learnt over the last couple of months of our bootcamp. We did this by creating an Extract, Transform, Load (ETL) pipeline for the minimum viable product of the project specification. We…

Read More


Splash World

Made by Team Splash World Driven by data. Powered by teamwork. We were approached by our client, Terrific Totes, a tote bag retailer, to build a data pipeline that extracts, transforms, and loads (ETL) sales data from their OLTP database (“Totesys”) into an OLAP data warehouse. The aim was to make their sales data more…

Read More

Spitfire

Made by Team Spitfire A Data Engineering project The aim of the project was to apply key skills picked up during the Northcoders bootcamp, to real-world, business requirements. We were tasked with helping a fictional company to create a platform for managing their enterprise data. We implemented a pipeline to move and transform data from…

Read More

Southport Pier

Made by Team Southport Pier It’s more than a bag, it’s a feature! This is an end-to-end ETL pipeline for a tote bag business. It pulls data from their database into a data warehouse for future analysis. In this project, three Lambda applications were created using psycopg2 and boto3. They were deployed in the AWS…

Read More

Royal Blue

Made by Royal Birkdale ain’t no birkdale, we blue The purpose of this repository is to build an entire ETL (Extract, Transform, Load) data pipeline in AWS (Amazon Web Services), extracting data from an OLTP (Online Transaction Processing) PostgreSQL database and loading it into an OLAP (Online Analytical Processing) database. The data is…

Read More

Pottery

Made by Team Pottery Team Pottery: Crafting Code, Firing Up Success! Our Terrific Totes Project utilised an ETL pipeline orchestrated by a Step Function triggered every 30 minutes. Our Extract Lambda Handler connects to the Totesys database, checking for new data. Any new data found is then added to our S3 ingestion bucket as…

Read More
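The “checking for new data” step that several of these pipelines describe usually amounts to a watermark comparison: keep the timestamp of the last successful extract and fetch only rows updated after it. A minimal, hypothetical sketch (the names and row shapes here are assumptions, not any team’s actual code):

```python
from datetime import datetime

def rows_since(rows, watermark):
    """Return only the rows updated after the last successful extract.

    `rows` is a list of dicts with a `last_updated` datetime, standing in
    for rows fetched from the Totesys database.
    """
    return [r for r in rows if r["last_updated"] > watermark]

rows = [
    {"id": 1, "last_updated": datetime(2024, 1, 1, 12, 0)},
    {"id": 2, "last_updated": datetime(2024, 1, 1, 12, 40)},
]
watermark = datetime(2024, 1, 1, 12, 30)  # e.g. read back from S3 or SSM

new_rows = rows_since(rows, watermark)  # only the row newer than the watermark
```

In practice the watermark itself would be persisted (for example in S3 or the SSM Parameter Store) and advanced only after a successful run, so a failed extract is retried from the same point.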

Oatcake

Made by Team Oatcake Putting ‘oat’ in tote! Implemented and deployed an automated ETL pipeline, integrated with AWS, for a simulated global tote bag business. The Team Beth Suffield I have a background in Digital Marketing and SEO, and… particularly enjoyed the technical side of these roles. I am especially motivated by the exciting advancements…

Read More

Lawnmower Museum

Made by Team Lawnmower Museum Interested in neither lawnmowers nor museums Our project builds an automated data pipeline using AWS that takes data from a database and applies an Extract, Transform, Load (ETL) process to it. The code for the AWS Lambdas is stored in a code bucket rather than locally. We also…

Read More

Funland

Made by Team Funland Give us your data, we will give you a warehouse. This repository contains the final team project of the North Coders Data Engineering Bootcamp, showcasing a full-stack ETL (Extract, Transform, Load) pipeline designed for real-world data engineering practice. – Data ingestion from PostgreSQL into AWS S3 data lakes. – Transformation into…

Read More

Culinary

Made by Team Culinary Team Get it DONE This group project was carried out during the final phase of the 13-week Northcoders Data Engineering Bootcamp. The aim was to design and build a reliable data platform that extracts data from a PostgreSQL operational database (Totesys), transforms it into a denormalised star schema, and loads…

Read More