Understanding how people make decisions is crucial for everything from business strategies to public health policies, but predicting individual choices has long been a challenge. A new study introduces PsychFM, a method that merges psychological theories with machine learning to forecast personal decision-making with remarkable precision. This approach could help create more reliable recommendation systems and reduce risks in areas like finance and consumer behavior, where guessing wrong can have costly consequences.
The key finding is that PsychFM, a hybrid model using factorization machines and psychological features, outperforms existing methods in predicting human choices. It was tested on the CPC-18 dataset, which includes data from 240 participants making repeated decisions on gambles with varying rewards and probabilities. PsychFM achieved a mean squared error of 0.0763; taking the square root gives a root mean squared error of roughly 0.28, so its estimates of choice probabilities (values between 0 and 1) were typically off by about 0.28. This is a significant improvement over models like random forests and standard factorization machines, and means PsychFM can more accurately estimate whether a person will pick one option over another in risky scenarios.
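The conversion from the reported mean squared error to a typical per-prediction error is just a square root. A minimal sketch of that arithmetic (the 0.0763 figure is the paper's reported test MSE; the interpretation as a typical error magnitude is the usual RMSE reading):

```python
import math

mse = 0.0763  # reported test MSE for PsychFM on CPC-18
rmse = math.sqrt(mse)  # root mean squared error: typical size of a prediction error
print(f"RMSE = {rmse:.4f}")  # choice-probability estimates off by ~0.28 on average
```

Because choice probabilities live in [0, 1], an RMSE of about 0.28 is the natural way to read how far off a typical prediction is.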
The methodology combines two types of data: one-hot encoded vectors that represent user and gamble identities, and psychological features derived from theories like prospect theory. For example, features include differences in expected values between gambles and indicators of dominance or regret minimization. The model uses factorization machines, which handle sparse data well by capturing interactions between features in a low-dimensional latent space. Blending this with ridge regression, a linear model with an L2 penalty that shrinks coefficients to curb overfitting, further enhanced accuracy: combining the two models' predictions averages out their individual errors. The researchers split the dataset, using most for training and 10% for validation, ensuring the model's stability across different data subsets.
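To make the factorization-machine step concrete, here is a minimal sketch of a second-order FM prediction over a sparse feature vector. The function name, toy values, and feature layout are illustrative assumptions, not the authors' implementation; only the FM formula itself (bias + linear terms + pairwise latent-factor interactions) comes from the standard factorization-machine model the paper builds on:

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine prediction.

    x  : feature vector (e.g., one-hot user/gamble IDs + psychological features)
    w0 : global bias; w : linear weights; V : (n_features, k) latent factors
    Pairwise interactions use the standard O(n*k) identity:
    sum_{i<j} <v_i, v_j> x_i x_j = 0.5 * sum_f [(sum_i V_if x_i)^2 - sum_i V_if^2 x_i^2]
    """
    linear = w0 + w @ x
    interactions = 0.5 * np.sum((V.T @ x) ** 2 - (V.T ** 2) @ (x ** 2))
    return linear + interactions

# Toy example with 2 features and 2 latent dimensions (values are illustrative)
x = np.array([1.0, 1.0])           # both features "active"
w0 = 0.0
w = np.array([0.5, -0.5])          # linear terms cancel here
V = np.array([[1.0, 0.0],
              [1.0, 0.0]])         # identical latent vectors -> interaction <v_0, v_1> = 1
print(fm_predict(x, w0, w, V))     # 1.0: the pairwise interaction term alone
```

The latent-factor trick is what lets FMs estimate interactions between pairs of features (e.g., a particular user and a particular gamble) that rarely co-occur in sparse data.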
Results show that PsychFM, when blended with ridge regression, achieved the lowest error rates. As illustrated in the paper's tables, this hybrid model had test and validation errors close together (e.g., 7.63 and 7.42 for MSE×100), indicating it is stable and not overfitting. In contrast, models like SVM and lasso regression showed larger gaps between the two, making them less reliable. The blending process assigned factorization machines a weight of about 53% and ridge regression about 47% in the combined prediction, highlighting the importance of combining user history with gamble details for precise forecasts.
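The blending step itself can be sketched in a few lines: fit weights on validation-set predictions from the two models so that their weighted combination minimizes error. The prediction values below are invented for illustration, and plain least squares stands in for whatever fitting procedure the authors used:

```python
import numpy as np

# Hypothetical validation-set predictions from the two models, plus true labels
y_true = np.array([0.2, 0.8, 0.5, 0.9, 0.1])
pred_fm = np.array([0.25, 0.75, 0.55, 0.85, 0.15])    # factorization machine
pred_ridge = np.array([0.15, 0.70, 0.40, 0.95, 0.05])  # ridge regression

# Learn blend weights by least squares on the validation predictions
A = np.column_stack([pred_fm, pred_ridge])
weights, *_ = np.linalg.lstsq(A, y_true, rcond=None)
blend = A @ weights

print(weights)                          # relative contribution of each model
print(np.mean((blend - y_true) ** 2))   # blended MSE
```

Because the fitted combination includes each model alone as a special case, the blended MSE on the data it was fit to can never be worse than either model by itself, which is why the 53/47 split the paper reports is informative: both models carry independent signal.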
In practical terms, this matters because it allows for personalized predictions in real-world applications. For instance, companies could use such models to tailor product recommendations or financial advice based on individual risk preferences, rather than relying on aggregate data that might miss personal nuances. In policy-making, it could help design interventions that account for how people actually behave under uncertainty, potentially improving outcomes in healthcare or economics. The paper notes that cognitive models alone struggle with external factors like economic conditions, but machine learning adapts to data variations, making hybrids more robust.
Limitations include the model's dependence on specific datasets like CPC-18, which may not capture all decision contexts. The study also points out that adding more features to the blend did not always improve accuracy and could increase complexity. Future work should explore how these methods perform in diverse environments, such as different cultural or economic settings, to ensure broader applicability.
About the Author
Guilherme A.
Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.