Reinforcement learning (RL) and multi-agent reinforcement learning (MARL) are disciplines concerned with automatically defining the behaviour of an agent, or of a set of interacting agents, by means of reward signals coming from the environment. An important research issue in the context of RL and MARL is the definition of approaches that combine the knowledge of multiple learning agents to improve the overall performance of the multi-agent system (MAS). This paper illustrates how to improve RL and MARL algorithms by exploiting tools from multilinear algebra, namely tensors and tensor factorizations. In particular, the focus is on showing how to modify existing algorithms from the literature to include a tensor factorization step applied to the Q-tables learned by the individual agents, in order to generalize the knowledge about the actions performed in the environment. The modified algorithms are then evaluated in three RL and MARL scenarios against their unmodified versions to show the benefits of the tensor factorization step.
Keywords: Software Agents, Learning Systems, Algorithms, Dynamic Programming
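The factorization step summarized above can be illustrated with a minimal sketch. The abstract does not fix a particular decomposition, so the example below assumes one simple choice: the agents' Q-tables are stacked into a third-order tensor (agents x states x actions), unfolded along the agent mode, and approximated with a truncated SVD before being folded back. The shapes, the rank, and the function name `factorize_q_tensor` are all illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): 4 agents, each with a
# 5-state x 3-action Q-table, stacked into an (agents, states, actions) tensor.
n_agents, n_states, n_actions = 4, 5, 3
q_tables = rng.normal(size=(n_agents, n_states, n_actions))

def factorize_q_tensor(q, rank):
    """Low-rank approximation of stacked Q-tables.

    This uses a truncated SVD of the agent-mode unfolding as one
    possible tensor factorization; the paper may use a different
    decomposition (e.g. CP or Tucker).
    """
    n_agents = q.shape[0]
    unfolded = q.reshape(n_agents, -1)             # agents x (states*actions)
    u, s, vt = np.linalg.svd(unfolded, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]  # rank-`rank` reconstruction
    return approx.reshape(q.shape)                  # fold back into a tensor

# Each agent would continue learning from its smoothed Q-table, which now
# mixes in low-rank structure shared across all agents.
q_shared = factorize_q_tensor(q_tables, rank=2)
print(q_shared.shape)  # → (4, 5, 3)
```

The low rank forces the reconstructed Q-tables to share a small set of common factors, which is one way the individual agents' knowledge can be generalized across the MAS.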