AI for better diagnostics
Dr. Skin began as a project to explore how artificial intelligence could assist in detecting common skin conditions earlier and more accurately. Using a convolutional neural network architecture (ResNet), I trained the model to recognize visual patterns in skin images that may indicate underlying issues. To make the tool accessible, I built a simple web interface where users can upload images and receive model feedback. Recognizing that many people also have questions about their results, I integrated a chatbot that provides additional information and guidance in plain language. The goal was not just to build an algorithm, but to create a resource that bridges the gap between technical diagnostics and real-world usability.
The project also challenged me to think about how technology can be made more approachable for everyday users. Beyond the technical side of training and validating the model, I focused on designing an interface that encourages curiosity rather than intimidation. By combining machine learning with user-centered design, Dr. Skin demonstrates how AI can extend beyond research papers and labs into tools that people can interact with directly. It’s a step toward showing how innovation can make healthcare not only more accurate, but also more approachable and accessible.
AI-Driven Detection of Iron Deficiency: Harnessing Visual Cues for Early Diagnosis and Anemia Prevention
Iron deficiency is one of the most widespread micronutrient deficiencies, leading to anemia and symptoms such as fatigue and cognitive impairment. Despite its prevalence, the subtlety of symptoms and limited access to regular diagnostic testing often delay detection and treatment. Traditional diagnosis relies on blood tests, which can be inaccessible, costly, and time-consuming. To address this, we propose an innovative approach: a Convolutional Neural Network (CNN)-driven analysis of visual cues to detect iron deficiency early and efficiently.
By leveraging machine learning techniques, this method will assess visual indicators such as nail health, eye appearance, and tongue discoloration, which may correlate with iron deficiency. Through non-invasive data collection, the AI model aims to provide a rapid, accessible, and cost-effective diagnostic tool, delivered through a mobile app. By making early detection widely accessible, it could enable timely intervention and help reduce the global health burden of undiagnosed iron deficiency.
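One way the multi-cue idea could be structured is a small branch per visual cue, fused into a single deficiency score. This is a speculative sketch of the proposal, not an implemented model; the layer sizes and input resolution are placeholders.

```python
import torch
import torch.nn as nn

class MultiCueNet(nn.Module):
    """Illustrative sketch: one tiny CNN branch per visual cue
    (nails, eyes, tongue), fused into a single deficiency score."""
    def __init__(self):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
            for _ in range(3)
        ])
        self.head = nn.Linear(8 * 3, 1)  # fused features -> one score

    def forward(self, nails, eyes, tongue):
        feats = [b(x) for b, x in zip(self.branches, (nails, eyes, tongue))]
        # Sigmoid keeps the output in [0, 1], read as deficiency likelihood.
        return torch.sigmoid(self.head(torch.cat(feats, dim=1)))

net = MultiCueNet()
score = net(*(torch.randn(1, 3, 64, 64) for _ in range(3)))
```

A mobile app would feed the three photos into their respective branches; training such a model would of course require a labeled dataset pairing images with blood-test ground truth.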
Voice-controlled garage door: an Amazon Alexa skill
Convenience often hides real computer-science challenges—APIs, security, event-driven logic, and user-focused design. My Garage Door Skill began as a weekend test: Could I open, close, and check my family’s garage door with nothing more than “Alexa, ask Garage to…”? The answer was yes, and the journey was a full-stack workout.
At its core is a lightweight Python function running on AWS Lambda. Lambda’s serverless setup fits perfectly: it goes idle (and costs almost nothing) when no one’s speaking and scales instantly when someone does. The skill handles three request types: Launch, Intent, and SessionEnd. Launch offers a friendly greeting; Intent maps phrases like “door one open” into Python calls that hit Chamberlain’s MyQ REST API to validate users, list devices, or change door states.
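The three-way dispatch described above can be sketched as a single Lambda handler. The request type names (`LaunchRequest`, `IntentRequest`, `SessionEndedRequest`) are Alexa's standard ones; the intent name, slot name, and phrasing are illustrative stand-ins, and the real skill calls the MyQ API where this sketch merely echoes the action.

```python
def speak(text):
    """Wrap text in the minimal Alexa JSON response envelope."""
    return {"version": "1.0",
            "response": {"outputSpeech": {"type": "PlainText", "text": text},
                         "shouldEndSession": True}}

def lambda_handler(event, context=None):
    req = event["request"]["type"]
    if req == "LaunchRequest":
        return speak("Welcome to Garage. Try saying: door one open.")
    if req == "SessionEndedRequest":
        return {"version": "1.0", "response": {}}  # no speech on session end
    # IntentRequest: map the spoken phrase to a door action.
    intent = event["request"]["intent"]["name"]
    if intent == "DoorIntent":  # illustrative intent name
        action = event["request"]["intent"]["slots"]["action"]["value"]
        return speak(f"Okay, {action} garage door now.")
    return speak("Sorry, I didn't catch that.")
```

Because Lambda bills per invocation, this function really does cost almost nothing while idle: the handler only exists for the few hundred milliseconds each utterance takes.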
Security comes first. All secrets—MyQ login, Alexa skill ID, even a toggle that can disable the open command—live in encrypted environment variables. Each invocation double-checks the application ID and grabs a short-lived MyQ token before touching anything mechanical. If MyQ rotates keys, I just update an env var; no code rewrite required.
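The application-ID check and the kill-switch toggle look roughly like this. Variable names such as `ALEXA_SKILL_ID` and `NO_OPEN` mirror the description above but are assumptions about the exact configuration.

```python
import os

def verify_request(event, expected_id=None):
    """Reject any invocation whose application ID doesn't match ours.
    In the real skill the expected ID lives in an encrypted env var."""
    expected = expected_id or os.environ.get("ALEXA_SKILL_ID")
    app_id = (event.get("session", {})
                   .get("application", {})
                   .get("applicationId"))
    return expected is not None and app_id == expected

def open_allowed():
    """The kill switch: flip one env var to disable all open commands."""
    return os.environ.get("NO_OPEN", "false").lower() != "true"
```

Keeping both checks at the top of every invocation means a stolen endpoint URL or a rogue skill ID gets a polite refusal before any MyQ call is made.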
Voice UX pushed me to think about edge cases. Someone might say “door to” instead of “door two,” or try to open the door while it’s already moving. The skill translates synonyms like “shut,” “down,” and “status” into clear commands and gently corrects anything that doesn’t parse. If the NO_OPEN flag is on, it politely refuses open requests—a safety rule I added after witnessing a false trigger in another smart-home setup.
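The synonym handling can be sketched as a pair of lookup tables that collapse spoken variants into canonical commands before anything touches the door. The exact word lists here are illustrative.

```python
# Spoken variants collapse to a small set of canonical actions.
SYNONYMS = {
    "open": "open", "up": "open",
    "close": "close", "shut": "close", "down": "close",
    "status": "status", "state": "status",
}
# Speech recognition often hears homophones: "to"/"too" for "two", etc.
NUMBER_WORDS = {"one": 1, "won": 1, "two": 2, "to": 2, "too": 2}

def parse_command(utterance):
    """Return (door_number, action), or None if the phrase doesn't parse."""
    words = utterance.lower().split()
    door = next((NUMBER_WORDS[w] for w in words if w in NUMBER_WORDS), None)
    action = next((SYNONYMS[w] for w in words if w in SYNONYMS), None)
    if door is None or action is None:
        return None
    return door, action
```

A `None` result is where the skill "gently corrects" the user, re-prompting instead of guessing at a door command it only half understood.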
Defensive coding matters. MyQ sometimes times out or returns a 429. Instead of crashing, the skill logs the hiccup to CloudWatch and tells the user, “I couldn’t reach the door; please try again.” Structured JSON logs made real-device debugging far easier than hunting through random print statements.
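The retry-and-log pattern is simple to factor out. This is a minimal sketch: `fn` stands in for any MyQ request, and the JSON log lines are what CloudWatch picks up from stdout in Lambda.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("garage")

def call_with_retry(fn, attempts=3, backoff=0.5):
    """Retry a flaky call, emitting one structured JSON log line per failure.
    Returns the call's result, or None if every attempt failed."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception as exc:  # e.g. a timeout or an HTTP 429
            log.info(json.dumps({"event": "myq_error",
                                 "attempt": attempt, "error": str(exc)}))
            if attempt == attempts:
                return None  # caller says: "I couldn't reach the door"
            time.sleep(backoff * attempt)  # linear backoff between tries
```

Because every log line is a JSON object with the same keys, CloudWatch Insights queries like `filter event = "myq_error"` replace grepping through free-form print statements.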
My favorite moment is hearing an LED click in the garage while Alexa says, “Okay, closing garage door now.” A few hundred lines of Python knit together cloud infrastructure, REST calls, and everyday life into one smooth interaction—proof that thoughtful code can make small chores vanish.
Building this skill reminded me why I’m drawn to hands-on computer science and AI: it’s less about sci-fi headlines and more about removing tiny bits of friction from daily routines in ways that feel almost invisible. And in a house where I’m known for both late-night coding sessions and mid-morning snack experiments, a voice-activated garage door adds just the right dash of everyday magic.
Ranked-choice voting looks simple on paper—voters rank names, you drop the lowest scorer each round, and someone eventually passes 50 percent—but the bookkeeping underneath can get hairy. I wrote this C program to watch the entire process unfold in real time, ballot by ballot. At launch you feed it a list of candidates from the command line, and it immediately builds an in-memory roster where each record stores a name, a running vote total, and a Boolean flag that says whether that person has been knocked out of the race. Voters then type their first, second, third choices, and so on; each answer is converted into an index and slotted into a two-dimensional array that looks a lot like a spreadsheet. One row equals one voter, and the columns represent that voter’s ranked preferences.
The real action starts after the polls “close.” A tabulation pass sweeps through every row, awarding a single vote to each ballot’s top-ranked candidate who is still alive. If any name now holds more than half the total votes, the program prints the winner and exits. Otherwise it hunts for the smallest vote tally among the survivors. Anyone sitting at that minimum is either eliminated on the spot or, if every remaining contender is tied at that same number, declared part of a multi-way tie that ends the election. Whenever someone is removed, the vote counts are reset to zero and the whole tabulation cycle begins again, so second- and third-choice preferences can flow to the top. Watching the loop iterate is oddly dramatic: numbers climb, a candidate drops, ballots reshuffle, and momentum shifts until a single name finally clears the majority bar.
Writing the code was an exercise in trust but verify. Edge-case checks stop the program the moment a voter misspells a name, sparing me from obscure seg-faults later on. Bounds-guard constants—100 voters, 9 candidates—keep the arrays from running off the rails while still being easy to tweak. Most satisfying of all was seeing how a few helper functions (find_min, is_tie, eliminate) turned a dense algorithm into something I could read aloud to a friend without losing the plot.
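The tabulate–check–eliminate cycle described above can be compressed into a short sketch. The original program is in C with fixed-size arrays; this Python rendering of the same logic (with the roles of `find_min`, `is_tie`, and `eliminate` inlined) just makes the control flow easy to follow.

```python
def runoff(candidates, ballots):
    """Ranked-choice runoff: each ballot is a list of names in
    preference order. Returns the winner(s) as a list."""
    eliminated = set()
    while True:
        # Tabulation pass: one vote per ballot, to its top surviving choice.
        votes = {c: 0 for c in candidates}
        for ballot in ballots:
            for choice in ballot:
                if choice not in eliminated:
                    votes[choice] += 1
                    break
        total = sum(votes.values())
        # Majority check: any survivor past 50 percent wins outright.
        for c, v in votes.items():
            if c not in eliminated and v > total / 2:
                return [c]
        # find_min / is_tie: locate the lowest tally among survivors.
        alive = {c: v for c, v in votes.items() if c not in eliminated}
        low = min(alive.values())
        if all(v == low for v in alive.values()):
            return sorted(alive)  # multi-way tie ends the election
        # eliminate: drop everyone at the minimum, then re-tabulate.
        eliminated |= {c for c, v in alive.items() if v == low}
```

Running it on a small election shows the reshuffling the text describes: once the weakest candidate is dropped, their ballots' second choices flow to the survivors on the next pass.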
The project left me with a deeper appreciation for how algorithms shape outcomes long after the ballots are cast. Small design choices—like breaking ties by declaring co-winners rather than flipping a coin—can echo loudly in real elections. In code, at least, every assumption is laid bare and every edge case has to be confronted head-on. That mix of logic, transparency, and just a hint of suspense is exactly why I keep coming back to computer science.
To build this project, I worked step by step:
Created a personal website on weebly.com to act as the base platform.
Designed and trained an IBM Watson Assistant chatbot so it could answer questions based on my site’s content.
Integrated the chatbot onto the website, making it interactive for visitors.
Extended access through SMS (currently disabled), setting up a channel so people could text the bot directly.
Tested and improved the chatbot, retraining it on questions it could not answer at first.