Video Streaming Application

BCIT February 2024

Overview

The project is a containerized Video Streaming System built with Docker. The system consists of several microservices, each handling a different aspect of video uploading and streaming. Docker manages the deployment of every service, keeping deployments consistent and repeatable.

The system uses MySQL as the database, Express on Node.js for the backend services, and React for the frontend. This stack provides a modern, efficient, and scalable foundation for the video streaming platform.

Project Scope

The scope of the project includes a user authentication service, a video upload service, a video streaming service, a React frontend with pages for each service, and Docker-based containerization and orchestration of the entire system.

Process

The development process began with a decision to use Express.js for the backend and React for the frontend.

The first microservice created was the authentication service, which supports both GET and POST requests for existing and new users, respectively. The service accepts a username and password and performs user verification and account creation as necessary. If the POST request finds an existing username in the database, it returns an error; otherwise, it creates a new account with the provided username and password, returning a status code of 201. The GET request verifies the username and password combination and returns a status code of 200 if successful or 403 in case of any mismatch.
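
The sketch below shows how these two routes could look in Express. The route path, table and column names, and the use of the mysql2 client are assumptions for illustration; the write-up does not specify which status code the duplicate-username error uses, so 400 is shown here.

```js
const express = require('express');
const mysql = require('mysql2/promise');

const app = express();
app.use(express.json());

// Connection settings come from the environment; variable names are assumptions.
const pool = mysql.createPool({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
});

// POST: create a new account, or report an error if the username is taken.
app.post('/users', async (req, res) => {
  const { username, password } = req.body;
  const [rows] = await pool.query('SELECT 1 FROM users WHERE username = ?', [username]);
  if (rows.length > 0) {
    return res.status(400).json({ error: 'Username already exists' });
  }
  // Password hashing is omitted to mirror the write-up; bcrypt would be used in practice.
  await pool.query('INSERT INTO users (username, password) VALUES (?, ?)', [username, password]);
  res.sendStatus(201);
});

// GET: verify an existing username/password combination.
app.get('/users', async (req, res) => {
  const { username, password } = req.query;
  const [rows] = await pool.query(
    'SELECT 1 FROM users WHERE username = ? AND password = ?',
    [username, password]
  );
  res.sendStatus(rows.length > 0 ? 200 : 403);
});

app.listen(3001);
```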

Next, the upload service was developed with a POST request that accepts a title and a video object. The service uses the Multer middleware to store the uploaded MP4 file in a local folder called "videos" and saves the title and the absolute file path of the file in the database. If the title already exists, the service returns a 400 status code; otherwise, it returns 201.
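
A minimal sketch of the upload route, assuming the mysql2 client, a videos table, and an upload field named "video"; the exact route path and column names are illustrative.

```js
const path = require('path');
const express = require('express');
const multer = require('multer');
const mysql = require('mysql2/promise');

const app = express();
const upload = multer({ dest: 'videos/' }); // Multer writes uploads into the local "videos" folder

const pool = mysql.createPool({ /* same connection settings as the auth service */ });

// POST: accept a title plus an MP4 file under the "video" field.
app.post('/upload', upload.single('video'), async (req, res) => {
  const { title } = req.body;

  // Duplicate titles are rejected with 400, as described above.
  const [rows] = await pool.query('SELECT 1 FROM videos WHERE title = ?', [title]);
  if (rows.length > 0) {
    return res.status(400).json({ error: 'Title already exists' });
  }

  // Store the title together with the absolute path of the saved MP4 file.
  const absolutePath = path.resolve(req.file.path);
  await pool.query('INSERT INTO videos (title, filepath) VALUES (?, ?)', [title, absolutePath]);
  res.sendStatus(201);
});

app.listen(3002);
```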

The streaming service supports a GET request that accepts a title parameter and queries the database for the corresponding video. If the title is found, it returns the absolute file path of the video along with a status code of 200; if not, it returns a 400 status code.
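
A sketch of the streaming lookup, under the same assumptions about the database client and table layout as above; whether the title arrives as a query string or a route parameter is not specified, so a query parameter is shown.

```js
const express = require('express');
const mysql = require('mysql2/promise');

const app = express();
const pool = mysql.createPool({ /* same connection settings as the other services */ });

// GET: look up a video by title and return its absolute file path.
app.get('/stream', async (req, res) => {
  const { title } = req.query;
  const [rows] = await pool.query('SELECT filepath FROM videos WHERE title = ?', [title]);
  if (rows.length === 0) {
    return res.status(400).json({ error: 'Video not found' });
  }
  res.status(200).json({ path: rows[0].filepath });
});

app.listen(3003);
```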

The frontend consists of pages for each service as well as a 404 page. It uses React context to maintain consistent authentication status across different pages. The streaming page employs React Player to play the MP4 files.
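
The following sketch shows one way the authentication context and the streaming page could be wired together; component names, the context shape, and how the video URL reaches the player are assumptions.

```jsx
// A single-file sketch: an auth context shared across pages, and a streaming page
// that plays MP4s with React Player.
import { createContext, useContext, useState } from 'react';
import ReactPlayer from 'react-player';

const AuthContext = createContext(null);

// Wrap the app in this provider so every page sees the same authentication status.
export function AuthProvider({ children }) {
  const [user, setUser] = useState(null); // null until a login succeeds
  return (
    <AuthContext.Provider value={{ user, setUser }}>
      {children}
    </AuthContext.Provider>
  );
}

export const useAuth = () => useContext(AuthContext);

// The streaming page reads the shared auth state and plays the requested MP4.
export function StreamPage({ videoUrl }) {
  const { user } = useAuth();
  if (!user) return <p>Please log in to watch videos.</p>;
  return <ReactPlayer url={videoUrl} controls />;
}
```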

Docker was used to containerize the entire system. A Dockerfile was created for each service using the node:18-alpine image as its base. A Docker Compose file mounts volumes for the authentication database, the video database, and the "videos" folder, which allows the upload, streaming, and frontend services to access the MP4 videos stored there.
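
A Dockerfile along these lines could be used for each service; the working directory, exposed port, and start command are assumptions.

```dockerfile
# Per-service Dockerfile sketch based on node:18-alpine.
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3001
CMD ["node", "index.js"]
```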

Learnings

Implementing Microservice Architecture: Working with a microservice architecture allowed me to understand the benefits of breaking down a system into smaller, independent services that communicate through APIs. This approach improved the modularity and maintainability of the system, and it facilitated the scalability of individual services as needed. It also provided opportunities to practice designing clean, consistent APIs for each service and ensuring seamless interaction between them.

Implementing Meaningful and Proper Error Handling in Backend Code: Developing error handling strategies for the backend code was an important learning experience. I learned how to identify potential failure points and implement meaningful error messages and appropriate HTTP status codes to guide the frontend and other services in handling different scenarios. This included managing errors related to data validation, database interactions, and file operations. As a result, the system is more robust and user-friendly, with clear error messaging and reliable behavior.
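
As an illustration of this pattern, a route can map validation failures to 4xx responses while forwarding unexpected database or file errors to a catch-all Express error handler; the messages and status mapping below are illustrative, not the project's exact code.

```js
const express = require('express');
const app = express();
app.use(express.json());

app.post('/upload', async (req, res, next) => {
  try {
    if (!req.body.title) {
      return res.status(400).json({ error: 'Title is required' }); // validation error
    }
    // ... database and file operations ...
    res.sendStatus(201);
  } catch (err) {
    next(err); // forward unexpected failures to the error handler below
  }
});

// Catch-all error handler: log the failure and return a consistent error payload.
app.use((err, req, res, next) => {
  console.error(err);
  res.status(500).json({ error: 'Internal server error' });
});
```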

Creating and Managing Shared Docker Volumes Over Multiple Services: Working with Docker volumes was a valuable experience, as it allowed me to share data between different services while maintaining isolation and consistency. I learned how to create, mount, and manage shared volumes across multiple services, enabling efficient data sharing such as video files and databases. This experience also highlighted the importance of configuring permissions and ensuring data security within the shared volumes.

Configuring Docker Compose: Utilizing Docker Compose was instrumental in orchestrating and managing the different services within the system. I gained experience writing docker-compose.yml files to define services, networks, and shared volumes. This practice helped streamline the process of deploying, scaling, and maintaining the various microservices. It also taught me how to configure dependencies and inter-service communication within the Docker environment, improving the ease of deployment and system management.
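
A docker-compose.yml along these lines would express the layout described above: two MySQL databases with their own volumes, plus a shared "videos" volume mounted into the upload, streaming, and frontend services. Service names, build paths, ports, and volume names are assumptions, and MySQL environment settings are omitted for brevity.

```yaml
services:
  auth-db:
    image: mysql:8
    volumes:
      - auth-db-data:/var/lib/mysql   # persistent authentication database
  video-db:
    image: mysql:8
    volumes:
      - video-db-data:/var/lib/mysql  # persistent video metadata database
  auth:
    build: ./auth
    depends_on: [auth-db]
  upload:
    build: ./upload
    depends_on: [video-db]
    volumes:
      - videos:/app/videos            # shared folder for uploaded MP4 files
  streaming:
    build: ./streaming
    depends_on: [video-db]
    volumes:
      - videos:/app/videos            # same volume, so it sees the uploads
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    volumes:
      - videos:/app/videos

volumes:
  auth-db-data:
  video-db-data:
  videos:
```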