Accelerated FEA self-study across three concurrent ME courses. Every ANSYS simulation validated against an analytic hand calculation before it counted.
Three concurrent ME courses leaned on the same FEA toolchain, each from a different angle: stress and stiffness, dynamics, heat transfer. Doing the homework was straightforward. Trusting the answer meant working out the analytic solution first and treating the simulation as a check.
Goal: build the muscle to predict, to within a small factor, what a sim should report — and to know when the sim is wrong.
Each problem ran the same loop: derive the closed-form (or quasi-closed-form) result, predict the answer, then build the ANSYS model and run a mesh-convergence study until the answer settled. Disagreements got a post-mortem.
┌──────────────┐    ┌──────────────┐    ┌────────────────┐    ┌────────────┐
│ analytic     │───▶│ ansys sim    │───▶│ mesh study     │───▶│ postmortem │
│ hand-calc    │    │ workbench    │    │ conv plot      │    │ if Δ >1%   │
└──────────────┘    └──────────────┘    └────────────────┘    └────────────┘
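The loop above, for a single problem, can be sketched in a few lines. This is a minimal illustration, not the actual coursework: the cantilever numbers and the stand-in simulation result are hypothetical, and the 1% threshold is the post-mortem trigger from the diagram.

```python
# Sketch of the validation loop for one problem: closed-form hand-calc first,
# then compare the simulation's number against it. All values are hypothetical.

def cantilever_tip_deflection(P, L, E, I):
    """Closed-form tip deflection of an end-loaded cantilever: P*L^3 / (3*E*I)."""
    return P * L**3 / (3 * E * I)

# Hypothetical problem data (SI units)
P = 500.0    # end load, N
L = 1.2      # beam length, m
E = 200e9    # Young's modulus (steel), Pa
I = 8.0e-7   # second moment of area, m^4

analytic = cantilever_tip_deflection(P, L, E, I)   # 1.8e-3 m

# Stand-in for the converged ANSYS result (would come out of the mesh study)
sim = 1.79e-3  # m, hypothetical

rel_err = abs(sim - analytic) / abs(analytic)
print(f"analytic = {analytic:.3e} m, sim = {sim:.3e} m, delta = {rel_err:.2%}")
assert rel_err < 0.01, "disagreement > 1% -> post-mortem"
```

If the assertion fires, the diagram's last box kicks in: the discrepancy gets written up rather than waved off.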
No simulation result was treated as authoritative without a hand calculation that bracketed it. Slower, but the workflow stops generating colorful nonsense.
Every problem got its own convergence study, even when reusing geometry. A converged mesh for static stress is not a converged mesh for vibration modes.
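The per-quantity convergence check can be sketched as follows. The (element size, reported value) pairs here are invented for illustration; the point is that the same refinement sequence can settle for one quantity of interest while another is still drifting.

```python
# Sketch of a per-quantity mesh-convergence check: run it on whatever the
# problem actually asks for (a stress, a frequency), not on a proxy quantity.
# The data pairs (element size, reported value) are hypothetical.

def converged(results, tol=0.01):
    """True when the last refinement changed the answer by less than tol (relative)."""
    if len(results) < 2:
        return False
    (_, prev), (_, last) = results[-2], results[-1]
    return abs(last - prev) / abs(last) < tol

# Same geometry, two different quantities of interest:
static_stress = [(8.0, 152.0), (4.0, 171.0), (2.0, 176.0), (1.0, 177.2)]  # MPa
first_mode    = [(8.0, 41.9), (4.0, 40.3), (2.0, 38.8), (1.0, 38.1)]      # Hz

print("static stress converged:", converged(static_stress))  # True: ~0.7% change
print("first mode converged:   ", converged(first_mode))     # False: ~1.8% change
```

With these made-up numbers, the stress mesh would pass while the modal run still needs refinement, which is exactly why each problem got its own study.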
Notebooks captured the analytic derivation, the input file, the convergence plot, and a written post-mortem. Reproducible and skimmable months later.
The repository is archived but the workflow stuck. New simulations — at work, at school, anywhere — still get an analytic check before the result is trusted.