Team Members

Mohamed Ashraf

Team Leader

Abdelrahman Sayed

Team Member

Ahmed Ashraf

Team Member

Mohammed Tarek

Team Member


Essam Eliwa

Assistant Professor

Shereen Elbohy

Teaching Assistant

Mahmoud Heidar

Teaching Assistant


Manual assignment marking makes it difficult to provide immediate and meaningful feedback. EvalSeer is a gamified learning management system with automatic code marking. In a competitive context, it provides rapid feedback, a scoreboard, badges, and score-based assignments. EvalSeer offers ongoing challenges to keep students engaged while using automated evaluation to give real-time visual feedback on their progress. Students earn points based on their success in activities and assignments, which they can use to gain an advantage. The assessment procedure covers several factors aimed at improving students' coding skills, including coding style, the code features used, logical correctness verified through dynamic test cases, and successful compilation. EvalSeer uses neural networks to detect probable syntax mistakes and proposes a fix along with an explanation of why the fix was proposed, as well as external links to further information on the subject.

System Objectives

• Plagiarism detection is implemented with the Rabin-Karp algorithm, as used by the JPlag tool discussed by Lutz Prechelt et al. [9], and outputs the percentage of plagiarism detected. A second technique, based on machine learning, shall provide a more accurate plagiarism percentage on student-submitted assignments by performing some preprocessing on the input data before calculating the output percentage.
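As an illustration of the fingerprinting idea behind Rabin-Karp-based tools such as JPlag, the sketch below (Python used for brevity; the tokenization and all function names are simplified assumptions, not EvalSeer's actual implementation) hashes every k-gram of a token stream with a rolling hash and reports the fraction of fingerprints two submissions share:

```python
def kgram_hashes(tokens, k=4, base=257, mod=(1 << 31) - 1):
    """Set of Rabin-Karp rolling hashes over all k-grams of a token list."""
    if len(tokens) < k:
        return set()
    vals = [sum(t.encode()) % mod for t in tokens]  # crude token -> int mapping
    h = 0
    for v in vals[:k]:                              # hash of the first window
        h = (h * base + v) % mod
    power = pow(base, k - 1, mod)
    hashes = {h}
    for i in range(k, len(vals)):                   # roll the window forward
        h = ((h - vals[i - k] * power) * base + vals[i]) % mod
        hashes.add(h)
    return hashes

def similarity(code_a, code_b, k=4):
    """Dice coefficient over shared k-gram fingerprints (0.0 .. 1.0)."""
    a, b = kgram_hashes(code_a.split(), k), kgram_hashes(code_b.split(), k)
    if not a or not b:
        return 0.0
    return 2 * len(a & b) / (len(a) + len(b))
```

A real tool normalizes identifiers and uses language-aware tokenization before fingerprinting, so renaming variables does not hide copying.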
• The instructor shall be able to enter test cases for automatic student evaluation, or describe them in plain English so that the system automatically produces test cases. Sample inputs and outputs, whether provided by the teacher or generated by the system, shall help students understand the problem they are tackling.
• Test cases shall be created in two ways: automatically or manually. Manual development of test cases has several drawbacks, including time consumption, error proneness, and a high likelihood of missing critical system cases, whereas automatic test case generation overcomes these issues.
• Students can learn from their mistakes when they know exactly what went wrong with their submissions and where their program failed, allowing them to improve their program after receiving feedback on their assignments. To ensure that feedback is timely, complete, and effective, EvalSeer shall compare the feedback given to the same student on different assignments, allowing instructors to follow each student's development and performance from one assignment to the next.
• EvalSeer integrates a gamified environment. Students on the overall leaderboard shall compete for unique badges that grant exceptional privileges. Students shall be awarded points based on their success on submitted homework assignments, and holders of unique badges shall be rewarded for keeping a badge for a specified period of time.

System Scope

• EvalSeer seeks to offer students immediate feedback on their work.

• Many parts of the assignment, such as compilation, style, logic, and feature testing, are graded by the system.
– Syntax Mistakes: Using an LSTM network, the system will detect possible syntax errors and propose a fix, accompanied by an example and an explanation with sources.
– Style Check: The system will check C++ code style using Cpplint, a Google-developed tool, and Java code style using Checkstyle.
– Logic Check: The system will check the student's code by executing dynamic test cases on it, using inputs and expected outputs given by the teacher.
– Compilation: The system compiles the student's code using the MinGW-w64 C++ compiler, and the javac compiler for Java code.
– Feature Test: The system will look for features specified by the teacher, such as loops and conditions, in the student’s code.

• Gamified Environment
– Holders of exclusive badges will be rewarded for holding the badge for a set amount of time.
– Creating a competitive atmosphere through course-level and general leaderboards that rank the top assignment scores.
– Students on the general leaderboard will compete for unique badges that grant special benefits.
– Students will be rewarded in the form of coins based on their performance on submitted homework.
– Students will be awarded badges when they achieve particular milestones, such as submitting an assignment one day before the deadline or making several perfect submissions in a row.
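The reward rules above can be sketched as follows. The badge names, coin amounts, and streak threshold are illustrative assumptions, not EvalSeer's actual rules:

```python
from dataclasses import dataclass, field

@dataclass
class Student:
    name: str
    coins: int = 0
    badges: set = field(default_factory=set)
    streak: int = 0   # consecutive perfect submissions

def record_submission(student, score, days_before_deadline,
                      perfect_streak_needed=3):
    """Award coins and milestone badges for one graded submission."""
    student.coins += int(score)                  # coins track the score earned
    if days_before_deadline >= 1:
        student.badges.add("early-bird")         # submitted a day early
    student.streak = student.streak + 1 if score == 100 else 0
    if student.streak >= perfect_streak_needed:
        student.badges.add("perfectionist")      # run of perfect submissions
    return student
```

Keeping the rules in one function makes it easy to add new milestone badges without touching the grading pipeline.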

• Expected Outcome:
– Instructors will save time and effort by not having to mark each assignment individually.
– Students receive immediate formative feedback on each facet of the assignment’s grading criteria.
– Students' programming abilities improve thanks to formative feedback on assignments.
– A gamified environment facilitates learning and provides a challenging setting for students.

Documents and Presentations


You will find here the documents and presentation for our proposal.




You will find here the documents and presentation for our SRS.




You will find here the documents and presentation for our SDD.




You will find here the documents and presentation for our thesis.





