Por una aproximación humanista no reaccionaria a la IA

Authors

J. Jurado-González

DOI:

https://doi.org/10.14422/ryf.vol287.i1463.y2023.001

Keywords:

philosophy of artificial intelligence, philosophy of mind, technological humanism, semantic grounding, hard problem of consciousness

Abstract

Many publications are trying to burst the bubble of expectations around recent developments in AI. In certain humanist circles, however, where a conservative scepticism towards novelty comes as standard, it is worth dwelling on the opposite excess: the complacency of positions that settle for weak arguments and outdated concepts long superseded in the scientific and philosophical literature. These perspectives tend to adopt a reactionary stance towards AI, clinging defensively to whatever argument might preserve human uniqueness, at the cost of giving up a measure of intellectual honesty, and categorically asserting that AI will never attain it. It is nevertheless possible to hold humanist positions that are receptive to developments in AI, open to the challenges they currently pose, and able to enter into dialogue and refine their own arguments through a few keys: dignifying human wretchedness, building interdisciplinary bridges, and maintaining prudence, courtesy and the suspension of judgement whenever necessary.

Published

2024-04-26

How to cite

Jurado-González, J. (2024). Por una aproximación humanista no reaccionaria a la IA. Razón y Fe, 287(1463), 343–382. https://doi.org/10.14422/ryf.vol287.i1463.y2023.001