This T32 Spring 2026 seminar covers AI-assisted dietary assessment and intervention pipelines using food photos, continuous glucose monitoring, wristband wearables, counterfactual AI, and LLM-RAG meal recommendations.
MOIRAI-MoE argues that feature-specific specialization in time-series foundation models fails to generalize across input signals and hinders zero-shot forecasting. It instead performs specialization at the token level inside the transformer blocks, using a mixture of expert FFNs with a gating network that decides which token is routed to which expert.
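The token-level routing idea can be sketched as below. This is a minimal NumPy illustration of a mixture-of-experts layer, not MOIRAI-MoE's actual implementation: the single-linear-layer "experts", the gate, and all dimensions are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def moe_layer(tokens, gate_w, expert_ws, top_k=1):
    """Route each token through its top-k expert FFNs, weighted by the gate.

    tokens:    (n_tokens, d) token embeddings
    gate_w:    (d, n_experts) gating weights
    expert_ws: list of (d, d) expert weights (one linear layer per expert here)
    """
    scores = softmax(tokens @ gate_w)              # (n_tokens, n_experts)
    top = np.argsort(scores, axis=-1)[:, -top_k:]  # chosen expert(s) per token
    out = np.zeros_like(tokens)
    for t in range(tokens.shape[0]):
        for e in top[t]:
            out[t] += scores[t, e] * np.tanh(tokens[t] @ expert_ws[e])
    return out, top

d, n_experts, n_tokens = 8, 4, 5
tokens = rng.normal(size=(n_tokens, d))
gate_w = rng.normal(size=(d, n_experts))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
out, routing = moe_layer(tokens, gate_w, experts, top_k=1)
```

With top_k=1 each token activates only one expert FFN, which is what makes the sparse-MoE layer cheap relative to its total parameter count.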
Chronos-2 is presented as the first multivariate time-series forecasting foundation model. It combines time attention (along the time axis) with group attention (across the different signals within a group) and predicts quantiles rather than point estimates.
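Quantile forecasting is typically trained with the pinball (quantile) loss, sketched below. This is a generic illustration of the objective, not code from Chronos-2; the toy values are invented.

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss: under- and over-prediction are penalized
    asymmetrically, so the minimizer is the q-th conditional quantile."""
    diff = y_true - y_pred
    return float(np.mean(np.maximum(q * diff, (q - 1) * diff)))

y = np.array([10.0, 12.0, 9.0])
# for q = 0.9, a forecast sitting above most observations scores better
lo_forecast = np.full(3, 8.0)
hi_forecast = np.full(3, 13.0)
lo_loss = pinball_loss(y, lo_forecast, 0.9)
hi_loss = pinball_loss(y, hi_forecast, 0.9)
```

Training one head per quantile level with this loss yields calibrated prediction intervals instead of a single point forecast.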
Unicast is a time-series forecasting foundation model. It incorporates multi-modal information about a time series (visual and textual) through modality-specific embedding generators, and aligns the embedder outputs to the forecasting task via parameter-efficient soft prompting.
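The soft-prompting step can be sketched as follows: a small set of trainable prompt vectors is prepended to a frozen embedding sequence, so only the prompt parameters are updated for the downstream task. This is a generic sketch of the technique, not Unicast's code; the shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def apply_soft_prompt(embeddings, prompt):
    """Prepend trainable soft-prompt vectors to a frozen embedding sequence.
    In training, only `prompt` would receive gradients; the backbone and
    embedders stay frozen (parameter-efficient tuning)."""
    return np.concatenate([prompt, embeddings], axis=0)

d = 16
series_emb = rng.normal(size=(32, d))  # e.g. output of a frozen multi-modal embedder
soft_prompt = np.zeros((4, d))         # 4 learnable prompt tokens (init here)
model_input = apply_soft_prompt(series_emb, soft_prompt)
```

Only 4 x 16 = 64 parameters are tuned in this sketch, versus the full backbone, which is the point of the parameter-efficient setup.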
TimesFM is a decoder-only foundation model trained in a supervised fashion on time-series data. It achieves strong out-of-the-box zero-shot performance and supports variable context lengths.
This paper reports a 48-week clinical trial in patients with type 2 diabetes, assessing the effectiveness of a digital health platform.
LEAD is an explainable-AI approach that perturbs synthetic critical samples to generate consistent, sparse, and robust explanations for disease classification.
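The general perturbation-based explanation idea behind such methods can be sketched as below: perturb one feature at a time and record how much the prediction moves. This is a generic occlusion-style sketch, not LEAD's actual algorithm; the linear "model" is invented for illustration.

```python
import numpy as np

def perturbation_importance(x, predict):
    """Zero out each feature in turn and record the change in the model's
    prediction; larger change = more important feature."""
    base = predict(x)
    importance = np.zeros_like(x)
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] = 0.0                       # the perturbation: occlusion
        importance[i] = abs(base - predict(x_pert))
    return importance

w = np.array([2.0, 0.0, -1.0])
predict = lambda x: float(w @ x)              # hypothetical classifier score
x = np.ones(3)
imp = perturbation_importance(x, predict)
```

Here the feature with the largest weight correctly receives the largest importance score; methods like LEAD add constraints on top of this idea to make the resulting explanations sparse and robust.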
This paper presents a fast, search-based counterfactual generation method for textual data. It defines three edit operators and runs the search as an anytime algorithm, so the longer it runs, the higher-quality the counterfactuals it delivers.
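The anytime property can be sketched with a best-so-far search loop over token edits. This is a toy illustration, not the paper's method: the two operators, the word lists, and the scoring function are all invented (the paper's three operators are not reproduced here), and the step count stands in for a wall-clock budget.

```python
import random

POSITIVE = {"great", "excellent", "good"}

def score_positive(tokens):
    # hypothetical stand-in for a classifier's positive-class probability
    return sum(t in POSITIVE for t in tokens) / max(len(tokens), 1)

def op_replace(tokens, rng):
    tokens[rng.randrange(len(tokens))] = rng.choice(sorted(POSITIVE))
    return tokens

def op_delete(tokens, rng):
    if len(tokens) > 1:
        tokens.pop(rng.randrange(len(tokens)))
    return tokens

def anytime_counterfactual(tokens, score, operators, n_steps, seed=0):
    """Anytime search: always keep the best candidate found so far, so the
    result can be returned at any point and only improves with more steps."""
    rng = random.Random(seed)
    best, best_score = list(tokens), score(tokens)
    for _ in range(n_steps):
        cand = rng.choice(operators)(list(best), rng)
        s = score(cand)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score

orig = "the movie was terrible and boring".split()
orig_score = score_positive(orig)
ops = [op_replace, op_delete]
cf_short, s_short = anytime_counterfactual(orig, score_positive, ops, n_steps=10)
cf_long, s_long = anytime_counterfactual(orig, score_positive, ops, n_steps=200)
```

Because the loop only ever replaces the incumbent with a strictly better candidate, the score at 200 steps is guaranteed to be at least the score at 10 steps under the same seed, which is the anytime guarantee in miniature.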
This paper introduces new constraints into the optimization objective to generate realistic and feasible counterfactual explanations.
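A common way to encode such constraints is as penalty terms in the counterfactual objective, sketched below. This is a generic illustration, not the paper's formulation: the specific penalties (value bounds, an immutable feature), the toy model, and all numbers are assumptions.

```python
import numpy as np

def cf_objective(x_cf, x_orig, predict, target, immutable_idx, bounds,
                 lam=1.0, mu=10.0):
    """Counterfactual loss = prediction gap + proximity + feasibility penalties.
    `bounds` keeps features in a plausible range; `immutable_idx` marks a
    feature (e.g. age in a clinical setting) that must not change."""
    pred_term = (predict(x_cf) - target) ** 2
    proximity = lam * np.abs(x_cf - x_orig).sum()        # stay close / sparse
    lo, hi = bounds
    feasibility = mu * np.sum(np.clip(lo - x_cf, 0, None)   # below range
                              + np.clip(x_cf - hi, 0, None))  # above range
    feasibility += mu * abs(x_cf[immutable_idx] - x_orig[immutable_idx])
    return float(pred_term + proximity + feasibility)

predict = lambda x: float(x.sum())        # hypothetical model score
x_orig = np.array([1.0, 1.0])
bounds = (np.zeros(2), np.full(2, 3.0))
target = 3.0

feasible = np.array([1.0, 2.0])     # edits only the mutable feature, in range
infeasible = np.array([2.0, 1.0])   # edits the immutable feature 0
obj_feasible = cf_objective(feasible, x_orig, predict, target, 0, bounds)
obj_infeasible = cf_objective(infeasible, x_orig, predict, target, 0, bounds)
```

Both candidates reach the target prediction, but the one that edits the immutable feature pays a large penalty, steering any optimizer toward realistic, actionable counterfactuals.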
Since evaluating counterfactual explanations is difficult, this paper automates the process by fine-tuning large language models (LLMs) on human ratings of counterfactuals.