# An analytical approach to the problem of inverse optimization with additive objective functions: an application to human prehension

## Authors

- Terekhov, A.V., Pesin, Y.B., Niu, X., et al.
DOI: 10.1007/s00285-009-0306-3

- Cite this article as:
- Terekhov, A.V., Pesin, Y.B., Niu, X. et al. J. Math. Biol. (2010) 61: 423. doi:10.1007/s00285-009-0306-3


## Abstract

We consider the problem of what is being optimized in human actions with respect to various aspects of human movements and different motor tasks. From the mathematical point of view, this problem consists of finding an unknown objective function given the values at which it reaches its minimum. This problem is called the inverse optimization problem. Until now the main approach to this problem has been the cut-and-try method, which consists of introducing an objective function and checking how well it reflects the experimental data. Using this approach, different objective functions have been proposed for the same motor action. In the current paper we focus on inverse optimization problems with additive objective functions and linear constraints. Such problems are typical in human movement science; the problem of muscle (or finger) force sharing is an example. For such problems we obtain sufficient conditions for uniqueness and propose a method for determining the objective function. To illustrate our method we analyze the problem of force sharing among the fingers in a grasping task. We estimate the objective function from the experimental data and show that it can predict the force-sharing pattern for a vast range of external forces and torques applied to the grasped object. The resulting objective function is quadratic with essentially non-zero linear terms.
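To make the forward (direct) problem concrete, here is a minimal sketch of minimizing an additive quadratic objective with non-zero linear terms, \(J(x)=\sum_i (a_i x_i^2 + b_i x_i)\), under linear constraints *Cx* = *d*, by solving the KKT linear system. This is an illustration only, not the authors' estimation procedure; the function name `share_forces` and all numerical values (moment arms, coefficients) are made-up assumptions.

```python
import numpy as np

def share_forces(a, b, C, d):
    """Minimize sum_i (a_i*x_i**2 + b_i*x_i) subject to C @ x = d.

    Solves the KKT system
        [2*diag(a)  C^T] [x]   [-b]
        [    C       0 ] [l] = [ d]
    (hypothetical helper; a must be positive and C full row rank).
    """
    n, m = len(a), C.shape[0]
    K = np.block([[2 * np.diag(a), C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([-b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # finger forces; sol[n:] are Lagrange multipliers

# Illustration: four fingers share a 20 N normal force with zero moment
# about the thumb. Coefficients and moment arms are invented values.
a = np.array([1.0, 1.2, 1.5, 2.0])            # quadratic coefficients
b = np.array([-1.0, -0.5, -0.5, -1.0])        # non-zero linear terms
r = np.array([0.045, 0.025, -0.010, -0.045])  # moment arms (m)
C = np.vstack([np.ones(4), r])                # force and torque constraints
d = np.array([20.0, 0.0])
x = share_forces(a, b, C, d)                  # predicted force-sharing pattern
```

The inverse problem studied in the paper goes the other way: given observed minimizers *x* for many values of *d*, recover the coefficients *a* and *b*.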

### Keywords

- Inverse optimization
- Human prehension
- Uniqueness theorem
- Principal component analysis

### List of symbols

- *x*: an independent variable
- *J*: an objective function of an optimization problem
- \({\mathcal C}\): constraints for an optimization problem
- \({\left\langle J,\mathcal C\right\rangle}\): an optimization problem with the objective function *J* and the constraints \({\mathcal{C}}\)
- *f*_{i}(·), *g*_{i}(·): scalar functions
- *C*, *b*: a matrix and a vector of the linear constraints *Cx* = *b*
- *a*_{i}: a scalar value
- \({\mathcal I}\): a set of indexes