ACT-R (pronounced /ˌækt ˈɑr/; short for "Adaptive Control of Thought—Rational") is a cognitive architecture mainly developed by John Robert Anderson and Christian Lebiere at Carnegie Mellon University. Like any cognitive architecture, ACT-R aims to define the basic and irreducible cognitive and perceptual operations that enable the human mind. In theory, each task that humans can perform should consist of a series of these discrete operations.

ACT-R
  • Original author(s): John Robert Anderson
  • Stable release: 7.21.6-<3099:2020-12-21> / December 21, 2020[1]
  • Written in: Common Lisp
  • Type: Cognitive architecture
  • License: GNU LGPL v2.1
  • Website: act-r.psy.cmu.edu

Most of ACT-R's basic assumptions are also inspired by progress in cognitive neuroscience, and ACT-R can be seen and described as a specification of how the brain itself is organized so that individual processing modules produce cognition.

Inspiration

ACT-R has been inspired by the work of Allen Newell, and especially by his lifelong championing of unified theories as the only way to truly uncover the underpinnings of cognition.[2] In fact, Anderson usually credits Newell as the major source of influence on his own theory.

What ACT-R looks like

Like other influential cognitive architectures (including Soar, CLARION, and EPIC), the ACT-R theory has a computational implementation as an interpreter of a special coding language. The interpreter itself is written in Common Lisp and can be loaded into any Common Lisp distribution.

This means that any researcher may download the ACT-R code from the ACT-R website, load it into a Common Lisp distribution, and gain full access to the theory in the form of the ACT-R interpreter.

This also enables researchers to specify models of human cognition in the form of scripts in the ACT-R language. The language's primitives and data types are designed to reflect the theoretical assumptions about human cognition, which are based on numerous results from experiments in cognitive psychology and brain imaging.

Like a programming language, ACT-R is a framework: for different tasks (e.g., the Tower of Hanoi, memory for text or for lists of words, language comprehension, communication, aircraft control), researchers create "models" (i.e., programs) in ACT-R. These models reflect the modelers' assumptions about the task within the ACT-R view of cognition. The model can then be run.

Running a model automatically produces a step-by-step simulation of human behavior that specifies each individual cognitive operation (e.g., memory encoding and retrieval, visual and auditory encoding, motor programming and execution, mental imagery manipulation). Each step is associated with quantitative predictions of latencies and accuracies. The model can be tested by comparing its results with data collected in behavioral experiments.
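For illustration, the following is a minimal sketch of how a model might be loaded and run from a Common Lisp session using the Vanilla ACT-R distribution; the file names are illustrative, and "my-model.lisp" stands for any script containing a model definition.

  ;; Minimal sketch (file names are illustrative).
  (load "load-act-r.lisp")   ; load the Vanilla ACT-R system into Common Lisp
  (load "my-model.lisp")     ; load a script containing a model definition
  (run 10.0)                 ; run the model for up to 10 simulated seconds,
                             ; printing a step-by-step trace of each operation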

In recent years, ACT-R has also been extended to make quantitative predictions of patterns of activation in the brain, as detected in experiments with fMRI. In particular, ACT-R has been augmented to predict the shape and time-course of the BOLD response of several brain areas, including the hand and mouth areas in the motor cortex, the left prefrontal cortex, the anterior cingulate cortex, and the basal ganglia.

Brief outline

ACT-R's most important assumption is that human knowledge can be divided into two irreducible kinds of representations: declarative and procedural.

Within the ACT-R code, declarative knowledge is represented in the form of chunks, i.e., vector representations of individual properties, each of them accessible through a labelled slot.
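For illustration, a pair of chunks encoding simple geographical facts might be specified as in the following minimal sketch; the chunk-type, slot, and chunk names are illustrative.

  ;; Minimal sketch of declarative chunks (all names are illustrative).
  (chunk-type country-fact country capital)   ; a chunk type with two labelled slots
  (add-dm                                     ; place chunks in declarative memory
   (fact-usa    isa country-fact country usa    capital washington-dc)
   (fact-france isa country-fact country france capital paris))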

Chunks are held and made accessible through buffers, which serve as the front ends of modules, i.e., specialized and largely independent brain structures.

There are two types of modules:

  • Perceptual-motor modules, which take care of the interface with the real world (in practice, with a simulation of the real world). The most well-developed perceptual-motor modules in ACT-R are the visual and the manual modules.
  • Memory modules. There are two kinds of memory modules in ACT-R:
    • Declarative memory, consisting of facts such as Washington, D.C. is the capital of the United States, France is a country in Europe, or 2+3=5
    • Procedural memory, made of productions. Productions represent knowledge about how we do things: for instance, knowledge about how to type the letter "Q" on a keyboard, about how to drive, or about how to perform addition.

All the modules can only be accessed through their buffers. The contents of the buffers at a given moment in time represent the state of ACT-R at that moment. The only exception to this rule is the procedural module, which stores and applies procedural knowledge. It does not have an accessible buffer and is actually used to access other modules' contents.

Procedural knowledge is represented in the form of productions. The term "production" reflects the actual implementation of ACT-R as a production system but, in fact, a production is mainly a formal notation for specifying the information flow from cortical areas (i.e., the buffers) to the basal ganglia, and back to the cortex.

At each moment, an internal pattern matcher searches for a production that matches the current state of the buffers. Only one such production can be executed at a given moment. That production, when executed, can modify the buffers and thus change the state of the system. Thus, in ACT-R, cognition unfolds as a succession of production firings.
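For illustration, a production that requests a retrieval from declarative memory might look like the following minimal sketch, which reuses the illustrative country-fact chunk type sketched above; the production and variable names are likewise illustrative.

  ;; Minimal sketch of a production (all names are illustrative).  The
  ;; left-hand side (before ==>) is matched against the current buffer
  ;; contents; the right-hand side modifies or issues requests to buffers.
  (p retrieve-capital
     =goal>
        isa      country-fact
        country  =c           ; bind the country currently in the goal buffer
        capital  nil          ; the capital slot is still empty
   ==>
     +retrieval>
        isa      country-fact
        country  =c)          ; request a matching chunk from declarative memory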

The symbolic vs. connectionist debate

In the cognitive sciences, different theories are usually ascribed to either the "symbolic" or the "connectionist" approach to cognition. ACT-R clearly belongs to the "symbolic" field and is classified as such in standard textbooks and collections.[3] Its entities (chunks and productions) are discrete and its operations are syntactical, that is, they refer not to the semantic content of the representations but only to those properties that make them appropriate to participate in the computation. This is seen clearly in the chunk slots and in the properties of buffer matching in productions, both of which function as standard symbolic variables.

Members of the ACT-R community, including its developers, prefer to think of ACT-R as a general framework that specifies how the brain is organized, and how its organization gives birth to what is perceived (and, in cognitive psychology, investigated) as mind, going beyond the traditional symbolic/connectionist debate. None of this, naturally, argues against the classification of ACT-R as a symbolic system, because all symbolic approaches to cognition aim to describe the mind, as a product of brain function, using a certain class of entities and systems to achieve that goal.

A common misunderstanding suggests that ACT-R may not be a symbolic system because it attempts to characterize brain function. This is incorrect on two counts: First, all approaches to computational modeling of cognition, symbolic or otherwise, must in some respect characterize brain function, because the mind is brain function. And second, all such approaches, including connectionist approaches, attempt to characterize the mind at a cognitive level of description and not at the neural level, because it is only at the cognitive level that important generalizations can be retained.[4]

Further misunderstandings arise because of the associative character of certain ACT-R properties, such as chunks spreading activation to each other, or chunks and productions carrying quantitative properties relevant to their selection. None of these properties counter the fundamental nature of these entities as symbolic, regardless of their role in unit selection and, ultimately, in computation.

Theory vs. implementation, and Vanilla ACT-R

The importance of distinguishing between the theory itself and its implementation is usually highlighted by ACT-R developers.

In fact, much of the implementation does not reflect the theory. For instance, the actual implementation makes use of additional 'modules' that exist only for purely computational reasons, and are not supposed to reflect anything in the brain (e.g., one computational module contains the pseudo-random number generator used to produce noisy parameters, while another holds naming routines for generating data structures accessible through variable names).

The actual implementation is also designed to enable researchers to modify the theory, e.g., by altering the standard parameters, creating new modules, or partially modifying the behavior of existing ones.

Finally, while Anderson's laboratory at CMU maintains and releases the official ACT-R code, alternative implementations of the theory have also been made available. These include jACT-R,[5] written in Java by Anthony M. Harrison at the Naval Research Laboratory, and Python ACT-R, written in Python by Terrence C. Stewart and Robert L. West at Carleton University, Canada.[6]

Similarly, ACT-RN (now discontinued) was a full-fledged neural implementation of the 1993 version of the theory.[7] All of these versions were fully functional, and models have been written and run with all of them.

Because of these implementational degrees of freedom, the ACT-R community usually refers to the "official", Lisp-based, version of the theory, when adopted in its original form and left unmodified, as "Vanilla ACT-R".

Applications

Over the years, ACT-R models have been used in more than 700 different scientific publications, and have been cited in many more.[8]

Memory, attention, and executive control

The ACT-R declarative memory system has been used to model human memory since its inception. Over the years, it has been adopted to successfully model a large number of known effects, including the fan effect of interference for associated information,[9] primacy and recency effects for list memory,[10] and serial recall.[11]

ACT-R has been used to model attentive and control processes in a number of cognitive paradigms. These include the Stroop task,[12][13] task switching,[14][15] the psychological refractory period,[16] and multi-tasking.[17]

Natural language

A number of researchers have used ACT-R to model several aspects of natural language understanding and production. These include models of syntactic parsing,[18] language understanding,[19] language acquisition,[20] and metaphor comprehension.[21]

Complex tasks

ACT-R has been used to capture how humans solve complex problems like the Tower of Hanoi,[22] or how people solve algebraic equations.[23] It has also been used to model human behavior in driving and flying.[24]

With the integration of perceptual-motor capabilities, ACT-R has become increasingly popular as a modeling tool in human factors and human-computer interaction. In this domain, it has been adopted to model driving behavior under different conditions,[25][26] menu selection and visual search in computer applications,[27][28] and web navigation.[29]

Cognitive neuroscience

More recently, ACT-R has been used to predict patterns of brain activation during imaging experiments.[30] In this field, ACT-R models have been successfully used to predict prefrontal and parietal activity in memory retrieval,[31] anterior cingulate activity for control operations,[32] and practice-related changes in brain activity.[33]

Education

ACT-R has often been adopted as the foundation for cognitive tutors.[34][35] These systems use an internal ACT-R model to mimic the behavior of a student and to personalize instruction and curriculum, trying to "guess" the difficulties that students may have and to provide focused help.

Such "Cognitive Tutors" are being used as a platform for research on learning and cognitive modeling as part of the Pittsburgh Science of Learning Center. Some of the most successful applications, like the Cognitive Tutor for Mathematics, are used in thousands of schools across the United States.

Brief history

Early years: 1973–1990

ACT-R is the latest in a series of increasingly precise models of human cognition developed by John R. Anderson.

Its roots can be traced back to the original HAM (Human Associative Memory) model of memory, described by John R. Anderson and Gordon Bower in 1973.[36] The HAM model was later expanded into the first version of the ACT theory.[37] This was the first time procedural memory was added to the original declarative memory system, introducing a computational dichotomy that was later shown to hold in the human brain.[38] The theory was then further extended into the ACT* model of human cognition.[39]

Integration with rational analysis: 1990–1998

In the late 1980s, Anderson devoted himself to exploring and outlining a mathematical approach to cognition that he named rational analysis.[40] Its basic assumption is that cognition is optimally adaptive, and that precise estimates of cognitive functions mirror statistical properties of the environment.[41] Later, he returned to the development of the ACT theory, using rational analysis as a unifying framework for the underlying calculations. To highlight the importance of the new approach in shaping the architecture, the name was changed to ACT-R, with the "R" standing for "Rational".[42]

In 1993, Anderson met with Christian Lebiere, a researcher in connectionist models best known for developing, with Scott Fahlman, the cascade correlation learning algorithm. Their joint work culminated in the release of ACT-R 4.0.[43] Thanks to Mike Byrne (now at Rice University), version 4.0 also included optional perceptual and motor capabilities, largely inspired by the EPIC architecture, which greatly expanded the possible applications of the theory.

Brain Imaging and Modular Structure: 1998–2015

After the release of ACT-R 4.0, John Anderson became increasingly interested in the neural plausibility of his theory, and began to use brain imaging techniques in pursuit of his goal of understanding the computational underpinnings of the human mind.

The necessity of accounting for brain localization pushed for a major revision of the theory. ACT-R 5.0 introduced the concept of modules, specialized sets of procedural and declarative representations that could be mapped to known brain systems.[44] In addition, the interaction between procedural and declarative knowledge was mediated by newly introduced buffers, specialized structures for holding temporarily active information (see the section above). Buffers were thought to reflect cortical activity, and a subsequent series of studies later confirmed that activations in cortical regions could be successfully related to computational operations over buffers.

A new version of the code, completely rewritten, was presented in 2005 as ACT-R 6.0. It also included significant improvements in the ACT-R coding language, among them a new mechanism for production specification called dynamic pattern matching. Unlike previous versions, in which the pattern matched by a production had to name specific slots for the information in the buffers, dynamic pattern matching allows the slots to be matched to also be specified by the buffer contents themselves. A description and motivation for ACT-R 6.0 are given in Anderson (2007).[45]
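For illustration, the following minimal sketch shows a production that uses dynamic pattern matching; the chunk type, slot, and production names are illustrative. The slot tested in the retrieval buffer is not fixed in the production text but is bound at run time from the goal buffer.

  ;; Minimal sketch of dynamic pattern matching (all names are illustrative).
  (p check-dynamic-slot
     =goal>
        isa         probe
        attribute   =slot-name   ; the goal names which slot to inspect
        value       =value
     =retrieval>
        =slot-name  =value       ; the bound variable is used in slot-name position
   ==>
     =goal>
        state       matched)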

ACT-R 7.0: 2015-Present

At the 2015 workshop, it was argued that the software changes warranted incrementing the version number to ACT-R 7.0. A major software change was the removal of the requirement that chunks be specified based on predefined chunk-types. The chunk-type mechanism was not removed, but changed from a required construct of the architecture to an optional syntactic mechanism in the software. This allowed more flexibility in knowledge representation for modeling tasks that require learning novel information, and extended the functionality provided through dynamic pattern matching by allowing models to create new "types" of chunks. It also led to a simplification of the syntax for specifying the actions of a production, because all actions now have the same syntactic form. The ACT-R software has subsequently been updated to include a remote interface based on JSON RPC 1.0. That interface was added to make it easier to build tasks for models and to work with ACT-R from languages other than Lisp, and the tutorial included with the software has been updated to provide Python implementations for all of the example tasks performed by the tutorial models.

Workshop and summer school

In 1995, Carnegie Mellon University began hosting the annual ACT-R Workshop and Summer School.[46] The ACT-R Workshop is currently held at the annual MathPsych/ICCM Conference, while the Summer School is held on campus at Carnegie Mellon University with a virtual attendance option.

Spin-offs

The long development of the ACT-R theory gave birth to a certain number of parallel and related projects.

The most important ones are the PUPS production system, an initial implementation of Anderson's theory, later abandoned; and ACT-RN,[7] a neural network implementation of the theory developed by Christian Lebiere.

Lynne M. Reder, also at Carnegie Mellon University, developed SAC in the early 1990s, a model of conceptual and perceptual aspects of memory that shares many features with the ACT-R core declarative system, although differing in some assumptions.

For his dissertation at Carnegie Mellon University, successfully defended in 2014, Christopher L. Dancy developed ACT-R/Phi,[47] an implementation of ACT-R with added physiological modules that enable ACT-R to interface with human physiological processes.

A lightweight Python-based implementation of the working memory component of ACT-R, pyACTUp,[48] was created by Don Morrison at Carnegie Mellon University, who maintains the ACT-R codebase. This library implements ACT-R as a unimodal supervised learning model for classification tasks.

Notes

  1. ^ "ACT-R » Software". ACT-R.psy.cmu.edu. Retrieved 2021-03-24.
  2. ^ Newell, Allen (1994). Unified Theories of Cognition. Cambridge, Massachusetts: Harvard University Press. ISBN 0-674-92101-1.
  3. ^ Polk, T. A.; C. M. Seifert (2002). Cognitive Modeling. Cambridge, Massachusetts: MIT Press. ISBN 0-262-66116-0.
  4. ^ Pylyshyn, Z. W. (1984). Computation and Cognition: Toward a Foundation for Cognitive Science. Cambridge, Massachusetts: MIT Press. ISBN 0-262-66058-X.
  5. ^ Harrison, A. (2002). jACT-R: Java ACT-R. Proceedings of the 8th Annual ACT-R Workshop PDF Archived September 7, 2008, at the Wayback Machine
  6. ^ Stewart, T. C. and West, R. L. (2006) Deconstructing ACT-R. Proceedings of the seventh international conference on cognitive modeling PDF
  7. ^ a b Lebiere, C., & Anderson, J. R. (1993). A connectionist Implementation of the ACT-R production system. In Proceedings of the Fifteenth Annual Conference of the Cognitive Science Society (pp. 635–640). Mahwah, NJ: Lawrence Erlbaum Associates
  8. ^ ACT-R Publications and Published Models – CMU
  9. ^ Anderson, J. R. & Reder, L. M. (1999). The fan effect: New results and new theories. Journal of Experimental Psychology: General, 128, 186–197.
  10. ^ Anderson, J. R., Bothell, D., Lebiere, C. & Matessa, M. (1998). An integrated theory of list memory. Journal of Memory and Language, 38, 341–380.
  11. ^ Anderson, J. R. & Matessa, M. P. (1997). A production system theory of serial memory. Psychological Review, 104, 728–748.
  12. ^ Lovett, M. C. (2005) A strategy-based interpretation of Stroop. Cognitive Science, 29, 493–524.
  13. ^ Juvina, I., & Taatgen, N. A. (2009). A repetition-suppression account of between-trial effects in a modified Stroop paradigm. Acta Psychologica, 131(1), 72–84.
  14. ^ Altmann, E. M., & Gray, W. D. (2008). An integrated model of cognitive control in task switching. Psychological Review, 115, 602–639.
  15. ^ Sohn, M.-H., & Anderson, J. R. (2001). Task preparation and task repetition: Two-component model of task switching. Journal of Experimental Psychology: General.
  16. ^ Byrne, M. D., & Anderson, J. R. (2001). Serial modules in parallel: The psychological refractory period and perfect time-sharing. Psychological Review, 108, 847–869.
  17. ^ Salvucci, D. D., & Taatgen, N. A. (2008). Threaded cognition: An integrated theory of concurrent multitasking. Psychological Review, 115(1), 101–130.
  18. ^ Lewis, R. L. & Vasishth, S. (2005). An activation-based model of sentence processing as skilled memory retrieval. Cognitive Science, 29, 375–419
  19. ^ Budiu, R. & Anderson, J. R. (2004). Interpretation-Based Processing: A Unified Theory of Semantic Sentence Processing. Cognitive Science, 28, 1–44.
  20. ^ Taatgen, N.A. & Anderson, J.R. (2002). Why do children learn to say "broke"? A model of learning the past tense without feedback. Cognition, 86(2), 123–155.
  21. ^ Budiu R., & Anderson J. R. (2002). Comprehending anaphoric metaphors. Memory & Cognition, 30, 158–165.
  22. ^ Altmann, E. M. & Trafton, J. G. (2002). Memory for goals: An activation-based model. Cognitive Science, 26, 39–83.
  23. ^ Anderson, J. R. (2005) Human symbol manipulation within an integrated cognitive architecture. Cognitive Science, 29(3), 313–341.
  24. ^ Byrne, M. D., & Kirlik, A. (2005). Using computational cognitive modeling to diagnose possible sources of aviation error. International Journal of Aviation Psychology, 15, 135–155. doi:10.1207/s15327108ijap1502_2
  25. ^ Salvucci, D. D. (2006). Modeling driver behavior in a cognitive architecture. Human Factors, 48, 362–380.
  26. ^ Salvucci, D. D., & Macuga, K. L. (2001). Predicting the effects of cellular-phone dialing on driver performance. In Proceedings of the Fourth International Conference on Cognitive Modeling, pp. 25–32. Mahwah, NJ: Lawrence Erlbaum Associates.
  27. ^ Byrne, M. D., (2001). ACT-R/PM and menu selection: Applying a cognitive architecture to HCI. International Journal of Human-Computer Studies, 55, 41–84.
  28. ^ Fleetwood, M. D. & Byrne, M. D. (2002) Modeling icon search in ACT-R/PM. Cognitive Systems Research, 3, 25–33.
  29. ^ Fu, Wai-Tat; Pirolli, Peter (2007). "SNIF-ACT: A cognitive model of user navigation on the World Wide Web" (PDF). Human-Computer Interaction. 22 (4): 355–412. Archived from the original (PDF) on 2010-08-02.
  30. ^ Anderson, J.R., Fincham, J. M., Qin, Y., & Stocco, A. (2008). A central circuit of the mind. Trends in Cognitive Sciences, 12(4), 136–143
  31. ^ Sohn, M.-H., Goode, A., Stenger, V. A, Carter, C. S., & Anderson, J. R. (2003). Competition and representation during memory retrieval: Roles of the prefrontal cortex and the posterior parietal cortex, Proceedings of the National Academy of Sciences, 100, 7412–7417.
  32. ^ Sohn, M.-H., Albert, M. V., Stenger, V. A, Jung, K.-J., Carter, C. S., & Anderson, J. R. (2007). Anticipation of conflict monitoring in the anterior cingulate cortex and the prefrontal cortex. Proceedings of National Academy of Science, 104, 10330–10334.
  33. ^ Qin, Y., Sohn, M-H, Anderson, J. R., Stenger, V. A., Fissell, K., Goode, A. Carter, C. S. (2003). Predicting the practice effects on the blood oxygenation level-dependent (BOLD) function of fMRI in a symbolic manipulation task. Proceedings of the National Academy of Sciences of the United States of America. 100(8): 4951–4956.
  34. ^ Lewis, M. W., Milson, R., & Anderson, J. R. (1987). The teacher's apprentice: Designing an intelligent authoring system for high school mathematics. In G. P. Kearsley (Ed.), Artificial Intelligence and Instruction. Reading, MA: Addison-Wesley. ISBN 0-201-11654-5.
  35. ^ Anderson, J. R. & Gluck, K. (2001). What role do cognitive architectures play in intelligent tutoring systems? In D. Klahr & S. M. Carver (Eds.) Cognition & Instruction: Twenty-five years of progress, 227–262. Lawrence Erlbaum Associates. ISBN 0-8058-3824-4.
  36. ^ Anderson, J. R., & Bower, G. H. (1973). Human associative memory. Washington, DC: Winston and Sons.
  37. ^ Anderson, J. R. (1976) Language, memory, and thought. Mahwah, NJ: Lawrence Erlbaum Associates. ISBN 0-89859-107-4.
  38. ^ Cohen, N. J., & Squire, L. R. (1980). Preserved learning and retention of pattern-analyzing skill in amnesia: dissociation of knowing how and knowing that. Science, 210(4466), 207–210
  39. ^ Anderson, J. R. (1983). The architecture of cognition. Cambridge, Massachusetts: Harvard University Press. ISBN 0-8058-2233-X.
  40. ^ Anderson, J. R. (1990) The adaptive character of thought. Mahwah, NJ: Lawrence Erlbaum Associates. ISBN 0-8058-0419-6.
  41. ^ Anderson, J. R., & Schooler, L. J. (1991). Reflections of the environment in memory. Psychological Science, 2, 396–408.
  42. ^ Anderson, J. R. (1993). Rules of the mind. Hillsdale, NJ: Lawrence Erlbaum Associates. ISBN 0-8058-1199-0.
  43. ^ Anderson, J. R., & Lebiere, C. (1998). The atomic components of thought. Hillsdale, NJ: Lawrence Erlbaum Associates. ISBN 0-8058-2817-6.
  44. ^ Anderson, J. R., et al. (2004) An integrated theory of the mind. Psychological Review, 111(4). 1036–1060
  45. ^ Anderson, J. R. (2007). How can the human mind occur in the physical universe? New York, NY: Oxford University Press. ISBN 0-19-532425-0.
  46. ^ "ACT-R » Workshops".
  47. ^ Dancy, C. L., Ritter, F. E., & Berry, K. (2012). Towards adding a physiological substrate to ACT-R. In 21st Annual Conference on Behavior Representation in Modeling and Simulation 2012, BRiMS 2012 (pp. 75-82). (21st Annual Conference on Behavior Representation in Modeling and Simulation 2012, BRiMS 2012).
  48. ^ Morrison, Don. "pyactup". github.com. Retrieved 15 September 2023.

References

  • Anderson, J. R. (2007). How can the human mind occur in the physical universe? New York, NY: Oxford University Press. ISBN 0-19-532425-0.
  • Anderson, J. R., Bothell, D., Byrne, M. D., Douglass, S., Lebiere, C., & Qin, Y. (2004). An integrated theory of the mind. Psychological Review, 111(4), 1036–1060.