STEM Research Webinar 301: Engineering Projects, Research Topics, and Science Project Success


Mentors: Dr. Karthik S (Engineering & CFD) · Dr. Clark Roberts (Science Project Success)
Published: March 15, 2026 | Audience: High School Students

What This Webinar Covers

STEM Research Webinar 301 is the most advanced session in Future Forward Labs’ Q1 2026 series. Two expert mentors cover two completely different but equally important parts of doing a great science project.

Dr. Karthik S covers how to build real engineering projects at home using Computational Fluid Dynamics (CFD) — what it is, how to set it up, run simulations, measure results, and improve your design using physics rather than guesswork.

Clark Roberts covers the science of doing science — how to pick a topic, what research approach to choose, how to work with data honestly, where machine learning fits in, and what the ideal end goal for any science project actually looks like.

Together, they give you both the technical toolkit and the thinking framework to build a project that stands out.

Watch Dr. Karthik’s track here 👇

PART ONE: Engineering Projects at Home — CFD from the Ground Up

Dr. Karthik S, Aerospace & Engineering Mentor, Future Forward Labs

What Is CFD — And Why Should You Care?

Computational Fluid Dynamics (CFD) lets you simulate how air or water flows around objects using a computer — a virtual wind tunnel on your laptop. The key number you track is Cd, the drag coefficient: lower Cd means less air resistance.

Cd = F_D / (½ · ρ · U² · A)

where F_D is the measured drag force, ρ the fluid density, U the freestream velocity, and A the frontal (reference) area.
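To make the definition concrete, here is a minimal Python sketch of the formula. The specific numbers are hypothetical, chosen for a small model shape in air at room conditions:

```python
def drag_coefficient(drag_force, rho, velocity, frontal_area):
    """Cd = F_D / (0.5 * rho * U^2 * A)."""
    return drag_force / (0.5 * rho * velocity**2 * frontal_area)

# Hypothetical example: a model car in air at 10 m/s.
# rho = 1.225 kg/m^3, frontal area = 0.01 m^2, measured drag = 0.25 N.
cd = drag_coefficient(0.25, 1.225, 10.0, 0.01)
# cd is about 0.41 -- in the right range for a boxy model shape
```

The same function works whether the drag force comes from a CFD force report or a home wind tunnel load measurement.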

For a science fair student, CFD is powerful because you can test designs impossible to build physically, generate real engineering data, and — when combined with hands-on experiments — show judges a depth of scientific thinking that is rare among student projects.

Before You Simulate: Mesh Generation

Before your simulation runs, you need a mesh — a grid of cells around your object. Mesh quality determines result accuracy more than any solver setting.

There are three mesh types: Structured (Hex) for excellent near-wall accuracy; Unstructured (Tet/Poly) for complex shapes, faster to generate; and Hybrid, which combines both and is most common in real-world CFD.

Key parameters: y⁺ (wall unit, controls surface resolution), boundary layer first cell height, growth rate (keep 1.1–1.2×), and grid independence — your results should stop changing when you refine the mesh. When ΔCd < 0.5% between refinements, your mesh is trustworthy.
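The grid-independence test above is easy to automate. This is a sketch of the ΔCd < 0.5% check, with a hypothetical Cd history from coarse to fine meshes:

```python
def grid_independent(cd_values, tol=0.005):
    """True when the last two mesh refinements change Cd by less than tol (0.5%)."""
    if len(cd_values) < 2:
        return False
    prev, last = cd_values[-2], cd_values[-1]
    return abs(last - prev) / abs(prev) < tol

# Hypothetical Cd from coarse -> fine meshes
history = [0.352, 0.334, 0.3302, 0.3298]
# The last refinement changed Cd by ~0.12%, so this mesh is trustworthy
```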

Simulation Setup: Physics Models and Boundary Conditions

The solver is only as good as the physics model and boundary conditions you give it.

For turbulence models, most student projects should use RANS SST k-ω — the industry standard for external aerodynamics. Set your inlet to velocity or pressure far-field, outlet 15–20 body lengths downstream, and a no-slip wall on the body. For ground vehicles, use a moving ground plane.

Monitor Cd and Cl residuals down to less than 10⁻³ — but always check force history separately. Residuals dropping does not mean your solution has converged if forces are still drifting.
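A convergence check that encodes both conditions might look like this sketch (the function name, window size, and tolerances are illustrative, not from the webinar):

```python
def converged(residuals, cd_history, res_tol=1e-3, window=50, drift_tol=1e-3):
    """Declare convergence only when residuals are low AND Cd has stopped drifting.

    residuals: scaled residual history (last entry = current level)
    cd_history: Cd recorded at each solver iteration
    """
    if not residuals or residuals[-1] > res_tol:
        return False
    recent = cd_history[-window:]
    return (max(recent) - min(recent)) < drift_tol
```

With a flat Cd history this returns True; with a slowly drifting Cd it returns False even when the residuals look healthy, which is exactly the failure mode to catch.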

The CFD Iteration Cycle

The core workflow is a four-step loop — each pass tightens the gap between your current design and optimal drag:

BUILD / PREP → SIMULATE → MEASURE → ITERATE
     ↑                                  │
     └──────────────────────────────────┘

Phase 1 — Build & Prepare

Use parametric geometry so you can adjust a dimension and regenerate automatically. Always run a baseline case first on a known reference shape and validate Cd within 5% of published data — skip this and every result downstream is suspect.

Organize every run in separate folders (/case_v01/, /case_v02/). Never overwrite files. Domain sizing: inlet 10× body length upstream, outlet 20× downstream, sides 5× from the body.
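Both habits can be scripted. This sketch computes the rule-of-thumb domain extents from the body length and creates versioned case folders that refuse to overwrite an existing run (the folder naming follows the /case_v01/ convention above; the function names are illustrative):

```python
import os

def domain_extents(body_length):
    """Rule-of-thumb external-aero domain sizing, in absolute units."""
    return {
        "upstream": 10 * body_length,    # inlet distance ahead of the body
        "downstream": 20 * body_length,  # outlet distance behind the body
        "side": 5 * body_length,         # lateral clearance on each side
    }

def new_case_dir(base, version):
    """Create a fresh case_vNN folder; never overwrite a previous run."""
    path = os.path.join(base, f"case_v{version:02d}")
    os.makedirs(path, exist_ok=False)  # raises FileExistsError if the run exists
    return path
```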

Phase 2 — Simulate

Monitor five quantities in real time: Cd, Cl, residuals (should drop 4–6 orders of magnitude), boundary layer resolution, and Cp.

Watch for the five common pitfalls:

  • False convergence — plot force history; residuals alone are not enough
  • Numerical diffusion — use 2nd-order schemes for final runs
  • Domain blockage (>5%) — increase domain size
  • Reversed flow at outlet — move the outlet downstream or switch boundary condition
  • Over-relaxed solver — tighten settings for production runs

Phase 3 — Measure

Track three metrics after every run: Cd (overall drag), ΔCd (change from baseline — target less than −0.01 per iteration), and Cp (where pressure is generating drag).

Post-processing checklist: split pressure drag from friction drag, visualize flow separation zones, plot wake at 1, 5, and 10 chord lengths, and compare against any available wind tunnel data. A discrepancy greater than 10% signals a mesh or model issue — fix it before changing geometry.
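The first checklist item is simple arithmetic once the solver reports the two force components. A sketch, with hypothetical numbers for a bluff body where pressure drag dominates:

```python
def decompose_drag(pressure_drag, friction_drag):
    """Split total drag into components so you attack the dominant term first."""
    total = pressure_drag + friction_drag
    return {
        "total": total,
        "pressure_pct": 100.0 * pressure_drag / total,
        "friction_pct": 100.0 * friction_drag / total,
    }

# Hypothetical force report: 0.20 N pressure drag, 0.05 N friction drag
split = decompose_drag(pressure_drag=0.20, friction_drag=0.05)
# 80% pressure / 20% friction -> boat-tailing will pay off more than riblets
```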

Phase 4 — Iterate

Intelligent iteration is guided by flow physics, not random geometry changes. The five design levers: streamlining the forebody, trailing edge modifications like boat-tailing and Kammback designs, boundary layer management with vortex generators or riblets, underbody smoothing with diffusers, and angle-of-attack sweep for airfoils.

Version control every iteration: geometry + mesh + solver settings + results summary, one commit per change. Without this, your Cd improvements are unverifiable.
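Even without git, a one-row-per-run log delivers most of the traceability. A minimal sketch using the standard csv module (the field names are illustrative):

```python
import csv
import os

FIELDS = ["version", "change", "mesh_cells", "cd", "delta_cd"]

def log_iteration(path, row):
    """Append one row per design change so every Cd claim is traceable."""
    first_write = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if first_write:
            writer.writeheader()
        writer.writerow(row)
```

Calling this after every run builds the audit trail the session asks for: which geometry change produced which ΔCd, on which mesh.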

Key Takeaways from Dr. Karthik’s CFD Session

  1. Geometry is your hypothesis — parameterize it before you simulate it
  2. A validated baseline Cd is non-negotiable before any optimization begins
  3. Mesh quality determines result accuracy more than solver choice
  4. Decompose drag: pressure vs. friction vs. induced — optimize the dominant term first
  5. Version-control geometry, mesh, and solver settings for every iteration

“CFD is a tool. Physics-guided iteration is the method.”

CFD and Hands-On Experiments: The Winning Combination

The most competitive science fair projects combine computational simulation with physical experiments. Dr. Karthik showed a home-built wind tunnel — cardboard, foam, a small fan — used to physically measure drag on model shapes, then simulate those same shapes in CFD.

Why this wins: it is rare (strong differentiator), judges actively favor it, CFD explains why your physical results came out the way they did, and together they demonstrate mastery of both theory and real-world application.

PART TWO: Science Project Success

Clark Roberts, PhD — Neuroscience, Psychiatry, and Computational Modeling; research experience at MIT, Cambridge, and NYU

Overview of Clark’s Topics

Clark Roberts covers five areas that together describe what makes a science project genuinely good — not just impressive-looking:

  1. Topic selection and getting started
  2. Confirmatory vs. exploratory research
  3. Machine learning and data science
  4. Understanding data and visualization
  5. The ideal end goal for any science project

Note: Clark Roberts’s portion of the webinar recording will be published here shortly.

What Makes a Great Science Fair Project?

Four foundations every strong project shares:

Clear Question — Specific and testable, with a defined independent variable, dependent variable, and measurable outcome.

Originality — You are asking something not fully answered, or answering a known question in a genuinely new way.

Scientific Method — Controls, repeated trials, honest data collection. Judges with research backgrounds notice immediately when methodology is sloppy.

Data Quality — Not just whether your hypothesis was confirmed, but whether the data is trustworthy and honestly interpreted.

Selecting Your Topic

Choose your domain based on genuine interest — it drives the depth of understanding that separates good projects from memorable ones.

Review the literature — read recent research papers, not textbooks. Focus on three sections: Methods (what they did and whether you could extend it), Future Directions (open questions handed to you directly by the researchers), and Limitations (every limitation is a potential research question). Discussing Future Directions from real papers is what senior scientists and college admissions readers want to hear — it shows genuine scientific literacy.

Follow your passion and what you are good at. The best topics sit at the intersection of genuine curiosity and a real gap in existing knowledge.

Confirmatory vs. Exploratory Research

Understanding this distinction will shape the design of your entire project.

Confirmatory Research

Starts with a specific hypothesis and tests it rigorously. Six steps: generate a clear testable hypothesis, design a quality experiment, establish controls, collect data carefully, repeat trials, and present evidence honestly.

On null results: Do not be afraid of disconfirming results. The large majority of professional science produces null results — they are legitimate contributions, and clearly better than fabricating or cherry-picking data.

On methods: Methods can be more impressive to judges than results if done exceptionally well. A beautifully controlled experiment with careful data collection is a strong project even with a null outcome.

Four mistakes to avoid:

  1. Choosing a project too broad or vague to be testable
  2. Not repeating trials or recording data properly
  3. Waiting until the last minute — science always takes longer than expected
  4. Not understanding or caring about your own project — judges will ask questions

Exploratory Research

Uses existing datasets to discover patterns and statistical structure without a predetermined hypothesis. More software and statistics oriented. Powerful when you have access to a rich dataset. Machine learning fits naturally here for prediction and classification tasks.

Both approaches are scientifically valid and competitive at science fairs — the right choice depends on your question and available data.

Machine Learning and Data Science

Many students want to use machine learning. Clark addresses this with real nuance.

What ML can do: prediction, classification, and finding patterns in high-dimensional data that humans cannot see by inspection.

What ML cannot do — current limitations:

Data challenges: Most ML needs large datasets to generalize. Small datasets produce models that memorize rather than learn. Data quality matters enormously — Garbage In, Garbage Out. Missing important variables means no model can compensate.

Interpretability challenges: High-performing models do not explain why they make predictions. Performance metrics can look good while the model is wrong in ways that matter.

The core message: ML is not a substitute for understanding your data and your domain. Any experienced judge will see through a student who runs an ML pipeline without understanding the results.

Understand the Data and Basic Statistics First

Clark’s most emphatic point — applies to every type of project.

You cannot evaluate or know what you have built without understanding your data. Without understanding data distributions and statistical assumptions, you cannot know whether results are real or artifacts. Without understanding your variables, you cannot preprocess them correctly. And the most complex model is not always the best — a logistic regression you understand fully often outperforms a neural network you cannot explain.

Start with the basics: plot your data, compute summary statistics, look at distributions, understand what each variable actually measures. This is the foundation, not a step to rush past.
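This first pass needs nothing beyond the standard library. A sketch using Python's statistics module, with hypothetical measurement data containing one extreme value:

```python
import statistics

def summarize(values):
    """First look at a variable before any modeling: center, spread, range."""
    return {
        "n": len(values),
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "stdev": statistics.stdev(values),
        "min": min(values),
        "max": max(values),
    }

# Hypothetical measurements with one extreme value
stats = summarize([1.2, 1.3, 1.1, 1.4, 5.0])
# A mean (2.0) far above the median (1.3) flags skew or an outlier
# worth investigating before running any statistical test
```

A gap like that between mean and median is exactly the kind of thing a plot or summary table reveals, and a model quietly absorbs.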

Understand Causal Structure

Before modeling, understand the causal structure of your system — what causes what, which variables are drivers, which are mediators, and which are noise. This does not require advanced methods. It requires thinking carefully and being honest about what your data can and cannot show. A project with genuine causal reasoning is significantly more sophisticated than one that only reports correlations.

Visualizing Data: Make It Tell a Story

Visualization is both an exploration tool and a communication tool. Use it to identify patterns, spot outliers, understand distributions before applying statistical tests, and generate hypotheses. Zoom in and zoom out — patterns invisible at one resolution often become clear at another.

A real example: Clark’s research on the ABCD Cohort — approximately 22,876 children aged 9–14, tracked over 4 years across genetics, physiology, neuroscience, psychology, and social environment. Ensemble ML regression identified which variable categories best predicted outcomes like ADHD, reading performance, and educational achievement. The lesson: large, messy real data requires careful preprocessing; ML can reveal structure, not just predict; and findings need to be presented at the right level — specific enough to be informative, clear enough to be understood.

The Ideal End Goal for Your Science Project

Let the Titles, Flow, and Results Speak for Themselves

“Understand your topic, but: Let the Titles/Flow/Results Speak for itself”

“Choose Wisely: Informative > Simplicity > Overload”

A judge scanning your poster should understand what you found without reading every word. Each section should lead naturally to the next. Your key finding should be visible immediately — not buried in text.

An overloaded presentation signals that the student has not yet achieved the understanding needed to know what matters. A clear, focused presentation signals the opposite.

On Using LLMs

Clark is unambiguous: avoid relying on or finalizing anything with large language models unless you understand the field well enough to guide them, can audit their outputs, and understand their limitations. It is completely clear to experienced judges when a student has used LLMs aimlessly versus carefully. The project, the understanding, and the interpretation must be yours.

What Judges Are Actually Looking For

  • Innovation and creativity — something genuinely new, or an old question in a new way
  • A clear and meaningful problem worth asking
  • A well-designed experiment with proper controls and reproducible methods
  • Quality data, appropriate analysis, and honest visual presentation
  • Enthusiastic understanding — do you genuinely own this project?
  • A natural, logical flow through the full presentation
  • Less is more on the poster — knowing what to leave off is a sign of understanding
  • Knowing what is not on the poster — judges ask questions; the best students speak fluently beyond what is on the board

Final Thoughts

Webinar 301 gives you two complementary skill sets.

From Dr. Karthik S: a professional-grade framework for building engineering projects using CFD — mesh, simulation setup, four iterative phases, and a home wind tunnel plus computational simulation strategy that immediately differentiates a student project.

From Clark Roberts: a clear-eyed understanding of how real science works — choosing a topic purposefully, picking the right research type, using data and ML honestly, visualizing findings effectively, and communicating in a way that respects your audience without overwhelming them.

A project designed with these principles will not just look good. It will be good — and experienced judges will know the difference.

Frequently Asked Questions

Q: Do I need professional software to do CFD? No. Several capable CFD tools are free or have free student versions. The principles in this webinar apply regardless of which software you use.

Q: What is the difference between confirmatory and exploratory research? Confirmatory tests a specific pre-stated hypothesis. Exploratory uses data to discover patterns without a predetermined hypothesis. Both are valid and competitive at science fairs.

Q: Can I use machine learning in my science fair project? Yes — but only if you understand what the model is doing, why the results make sense, and what the limitations are. A logistic regression you understand is more impressive than a neural network you cannot explain.

Q: What should go on my science fair poster? Less than you think. The key finding, core method, most important results, and your interpretation. Choosing wisely signals that you know what matters.

Q: Does it matter if my hypothesis was wrong? No. Null results are legitimate scientific contributions. A well-designed experiment that disproves a hypothesis is better science than a poorly designed one that confirms it.
