MyWikiBiz, Author Your Legacy — Thursday May 02, 2024
=====4.1.3.2. Inquiry Driven Systems=====

The stages of work just described lead me to introduce the concept of an "inquiry driven system".  In rough terms, this type of system is designed to integrate the functions of a data driven adaptive system and a rule driven intelligent system.  The idea is to have a system whose adaptive transformations are determined, not by learning from observations alone, nor by reasoning from concepts alone, but by the interactions that occur between these two sources of knowledge.  A system that combines different contributions to its knowledge base, to say nothing of mixing the empirical and rational types of knowledge, will find that its next problem lies in reconciling the mismatches that arise between these sources.  Thus, one arrives at the concept of an adaptive knowledge base whose changes over time are driven by the differences that it encounters between what is observed in the data it gathers and what is predicted by the laws it knows.  This sounds, at the proper theoretical distance, like an echo of an error-controlled cybernetic system.  Moreover, it falls in line with the general description of scientific inquiry.  Finally, it raises the interesting possibility that good formulations of these "differences of opinion" might allow one to find differential laws for the time evolution of inquiry processes.

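The comparison to an error-controlled cybernetic system can be made concrete with a minimal sketch.  The following fragment is an illustration of the general idea only, not anything described in the text: a single stored "law" (here just a numeric estimate) is revised purely by the difference between what it predicts and what is observed, with the function name and gain parameter assumed for the sketch.

```python
# Illustrative sketch: an adaptive estimate updated only by the difference
# between prediction and observation, in the manner of an error-controlled
# cybernetic system.

def drive_by_difference(estimate, observations, gain=0.5):
    """Update an estimate by a fraction of each prediction error."""
    history = [estimate]
    for observed in observations:
        predicted = estimate              # what the current "law" predicts
        error = observed - predicted      # the difference that drives change
        estimate = estimate + gain * error
        history.append(estimate)
    return history

# The estimate is drawn toward the regularity present in the data.
trace = drive_by_difference(0.0, [10, 10, 10, 10, 10])
```

When the data exhibit a stable regularity, the succession of estimates converges toward it, which is the simplest image of a knowledge base driven by its own prediction errors.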
There are several implications of my approach that I need to emphasize.  Many distractions can be avoided if I continue to guide my approach by the two questions raised above, of principles and extensions, and if I guard against confusing what they ask and do not ask.  The issues that surround these points, concerning the actual nature and the possible nurture of the capacity for inquiry, can be taken up shortly.  But first I need to deal with a preliminary source of confusion.  This has to do with the two vocabularies, the language of the application domain, that talks about the higher order functions and intentions of software users, and the language of the resource domain, that describes the primitive computational elements to which software designers must try to reduce the problems that confront them.  I am forced to use, or at least to mention, both of these terminologies in my effort to bridge the gap between them, but each of them plays a different role in the work.
In studies of formal specifications the designations "reduced language" and "reducing language" are sometimes used to discuss the two roles of language that are being encountered here.  It is a characteristic of some forms of reductionism to call a language "reduced" simply because it is intended to be reduced and long before it is actually reduced, but aside from that this language of "reduced" and "reducing" can still be useful.  The reduced language, or the language that is targeted to be reduced, is the language of the application, practice, or target domain.  The reducing language, or the language that is intended to provide the sources of explanation and the resources for reduction, is the language of the resource, method, or base domain.  I will use all of these terms, with the following two qualifications.
First, I need to note a trivial caution.  One's sense of "source" and "target" will often get switched depending on one's direction of work.  Further, these terms are reserved in category theory to refer to the domain and the codomain of a function, mapping, or transformation.  This will limit their use, in the above sense, to informal contexts.
Now, I must deal with a more substantive issue.  In trying to automate even a fraction of such grand capacities as intelligence and inquiry, it is seldom that we totally succeed in reducing one domain to the other.  The reduction attempt will usually result in our saying something like this:  That we have reduced the capacity A in the application domain to the sum of the capacity B in our base domain plus some residue C of unanalyzed abilities that must be called in from outside the basic set.  In effect, the residual abilities are assigned to the human side of the interface, that is, they are attributed to the conscious observation, the common sense, or the creative ingenuity of users and programmers.
In the theory of recursive functions, this situation is expressed by saying that A is a "relatively computable" function, more specifically, that A is computable relative to the assumption of an "oracle" for C.  For this reason, I usually speak of "relating" a task A to a method B, rather than fully "reducing" it.  A measure of initial success is often achieved when a form of analysis relates or connects an application task to a basic method, and this usually happens long before a set of tasks are completely reduced to a set of methods.  The catch is whether the basic set of resources is already implemented, or is just being promised, and whether the residual ability has a lower complexity than the original task, or is actually more difficult.
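The notion of computing a capacity "relative to an oracle" can be pictured in a short sketch.  Everything named here is a hypothetical stand-in chosen for illustration: the mechanical part B is a trivial computation, and the oracle C is an ordinary callback standing in for the residue of unanalyzed ability assigned to the human side of the interface.

```python
# Hedged illustration of relative computability: the task A is carried out
# by a basic method B plus calls to an oracle C for the unanalyzed residue.

def capacity_A(problem, oracle_C):
    """Solve a task by mechanical steps (B) plus oracle consultations (C)."""
    # B: the part reducible to basic computational resources.
    mechanical_part = sorted(problem)
    # C: the residual ability, supplied from outside the basic set.
    judgment = oracle_C(mechanical_part)
    return mechanical_part, judgment

# A stand-in oracle; in practice this is the user's or programmer's judgment.
result, verdict = capacity_A([3, 1, 2], oracle_C=lambda xs: xs[0] == 1)
```

The catch mentioned above shows up plainly in such a sketch: the value of the reduction depends entirely on whether the oracle's job is easier than the original task.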
At this point I can return to the task of analyzing and extending the capacity for inquiry.  In order to enhance a human capacity it is first necessary to understand its process.
To extend a human capacity we need to know the critical functions which support that ability, and this involves us in a theory of the practice domain.  This means that most of the language describing the target functions will come from sources outside the areas of systems theory and software engineering.  The first thoughts that we take for our specs will come from the common parlance that everyone uses to talk about learning and reasoning, and the rest will come from the special fields which study these abilities, from psychology, education, logic and the philosophy of science.  This particular hybrid of work easily fits under the broad banner of artificial intelligence, yet I need to repeat that my principal aim is not to build any kind of autonomous intelligence, but simply to amplify our own capacity for inquiry.
There are many well-reasoned and well-respected paradigms for the study of learning and reasoning, any one of which I might have chosen as a blueprint for the architecture of inquiry.  The model of inquiry that works best for me is one with a solid standing in the philosophy of science and whose origins are entwined with the very beginnings of symbolic logic.  Its practical applications to education and social issues have been studied in depth, and aspects of it have received attention in the AI literature (Refs 1-8).  This is the pragmatic model of inquiry, formulated by C.S. Peirce from his lifelong investigations of classical logic and experimental reasoning.  For my purposes, all this certification means is that the model has survived many years of hard knocks testing, and is therefore a good candidate for further trial.  Since we are still near the beginning of efforts to computerize inquiry, it is not necessary to prove that this is the best of all possible models.  At this early stage, any good ideas would help.
My purpose in looking to the practical arena of inquiry and to its associated literature is to extract a body of tasks that are in real demand and to start with a stock of plausible suggestions for ways to meet their requirements.  Some of what one finds depicted in current pictures of learning and reasoning may turn out to be inconsistent or unrealizable projections, beyond the scope of any present methods or possible technology to implement.  This is the very sort of thing that one ought to be interested in finding out!  It is one of the benefits of submitting theories to trial by computer that we obtain this knowledge.  Of course, the fact that no one can presently find a way to render a concept effectively computable does not prove that it is unworkable, but it does place the idea in a different empirical class.
This should be enough to say about why it is sometimes necessary to cite the language of other fields and to critically reflect on the associated concepts in the process of doing work within the disciplines of systems theory and software engineering.  To sum it up, it is not a question of entering another field or absorbing its materials, but of finding a good standpoint on one's own grounds from which to tackle the problems that the outside world presents.
Sorting out which procedures are effective in inquiry and finding out which functions are feasible to implement is a job that can be done better in the hard light demanded by formalized programs.  But there is nothing wrong in principle with a top down approach, so long as one does come down, that is, so long as one eventually descends from a level of purely topical reasoning.  I will follow the analogy of a recursive program that progresses down discrete steps to its base, stepwise refining the topics of higher level specifications to arrive at their more concrete details.  The best reinforcement for such a program is to maintain a parallel effort that builds up competencies from fundamentals.

Once I have addressed the question of what principles enable human inquiry, I am brought to the question of how I would set out to improve the human capacity for inquiry by computational means.

Within the field of AI there are many ways of simulating and supporting learning and reasoning that would not involve me in systems theory proper, that is, in reflecting on mathematically defined systems or in considering the dynamics that automata trace out through abstract state spaces.  However, I have chosen to take the system-theoretic route for several reasons, which I will now discuss.
First, if we succeed in understanding intelligent inquiry in terms of system-theoretic properties and processes, then this knowledge gains the greatest degree of transferability between comparable systems.  In short, the knowledge becomes robust, not narrowly limited to a particular instantiation of the target capacity.

Second, if we organize our thinking in terms of a coherent system or integrated agent which carries out inquiries, it helps to manage the complexity of the design problem by splitting it into discrete stages.  This strategy is especially useful in dealing with the recursive or reflexive quality that bedevils all such inquiries into inquiry itself.  This aspect of self-application in the problem is probably unavoidable, due to the following facts.  Human beings are complex agents, and any system likely to support significant inquiry is bound to surpass the complexity of most systems we can fully analyze today.  Research into complex systems is one of the jobs that will depend on intelligent software tools to advance in the future.  For this we need programs that can follow the drift of inquiry and perhaps even scout out fruitful directions of exploration.  Programs to do this will need to acquire a heuristic model of the inquiry process they are designed to assist.  And so it goes.  Programs for inquiry will pull themselves up by their own bootstraps.
Taking as given the system-theoretic approach from now on, I can focus and rephrase my question about the technical enhancement of inquiry.  How can we put computational foundations under the theoretical models of inquiry, at least, the ones we discover to be accessible?  In more detail, what is the depth and content of the task analysis that we need to relate the higher order functions of inquiry with the primitive elements given in systems theory and software engineering?  Connecting the requirements of a formal theory of inquiry with the resources of mathematical systems theory has led me to the concept of inquiry driven systems.
The concept of an inquiry driven system is intended to capture the essential properties of a broad class of intelligent systems, and to highlight the crucial processes which support learning and reasoning in natural and cultural systems.  The defining properties of inquiry driven systems are discussed in the next few paragraphs.  I then consider what is needed to supply these abstractions with operational definitions, concentrating on the terms of mathematical systems theory as a suitable foundation.  After this, I discuss my plans to implement a software system which is designed to help analyze the qualitative behavior of complex systems, inquiry driven systems in particular.
An inquiry driven system has components of state, accessible to the system itself, which characterize the norms of its experience.  The idea of a norm has two meanings, both of which are useful here.  In one sense, we have the descriptive regularities which are observed in summaries of past experience.  These norms are assumed to govern the expectable sequences of future states, as determined by natural laws.  In another sense, we have the prescriptive policies which are selected with an eye to future experience.  These norms govern the intendable goals of processes, as controlled by deliberate choices.  Collectively, these components constitute the knowledge base or intellectual component of the system.
An inquiry driven system, in the simplest cases worth talking about, requires at least three different modalities of knowledge component, referred to as the expectations, observations, and intentions of the system.  Each of these components has the status of a theory, that is, a propositional code which the agent of the system carries along and maintains with itself through all its changes of state, possibly updating it as the need arises in experience.  However, all of these theories have reference to a common world, indicating under their varying lights more or less overlapping regions in the state space of the system, or in some derivative or extension of the basic state space.
The inquiry process is driven by the nature and extent of the differences existing at any time among its principal theories, for example, its expectations, observations, and intentions.  These discrepancies are evidenced by differences in the sets of models which satisfy the separate theories.  Normally, human beings experience a high level of disparity among these theories as a dissatisfying situation, a state of cognitive discord.  For people, the incongruity of cognitive elements is accompanied by an unsettled affective state, in Peirce's phrase, the "irritation of doubt".  A person in this situation is usually motivated to reduce the annoying disturbance by some action, all of which activities we may classify under the heading of inquiry processes.
Without insisting on strict determinism, we can say that the inquiry process is lawful if there is any kind of informative relationship connecting the state of cognitive discord at each time with the ensuing state transitions of the system.  Expressed in human terms, a difference between expectations and observations is experienced as a surprise to be explained, while a difference between observations and intentions is experienced as a problem to be solved.  We begin to understand a particular example of inquiry when we can describe the relation between the intellectual state of its agent and the subsequent action that the agent undertakes.

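The three knowledge components and the discrepancies among them admit a very simple set-theoretic sketch.  The following fragment is a minimal illustration under assumptions of my own, not the text's formalism: each theory is represented by its set of satisfying models over a common state space, and discrepancies are read off as symmetric differences between the model sets.

```python
# Minimal sketch: expectations, observations, and intentions as sets of
# satisfying models over a common state space, with discrepancies read off
# as differences between the model sets.

states = {0, 1, 2, 3}                    # common state space of the system
expectations = {0, 1}                    # states the system's laws predict
observations = {1, 2}                    # states its data report
intentions = {1, 3}                      # states its goals select

# A difference between expectations and observations is a surprise;
# a difference between observations and intentions is a problem.
surprise = expectations ^ observations   # symmetric difference of model sets
problem = observations ^ intentions

discord = len(surprise) + len(problem)   # one crude gauge of cognitive discord
```

A lawful inquiry process, in the sense just described, is then any system whose next state transition is informatively related to quantities like these.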
These simple facts, the features of inquiry outlined above, already raise a number of issues, some of which are open problems that my research will have to address.  Given the goal of constructing supports for inquiry on the grounds of systems theory, each of these difficulties is an obstacle to progress in the chosen direction, to understanding the capacity for inquiry as a systems property.  In the next few paragraphs I discuss a critical problem to be solved in this approach, indicating its character to the extent I can succeed at present, and I suggest a reasonable way of proceeding.
In human inquiry there is always a relation between cognitive and affective features of experience.  We have a sense of how much harmony or discord is present in a situation, and we rely on the intensity of this sensation as one measure of how to proceed with inquiry.  This works so automatically that we have trouble distinguishing the affective and cognitive aspects of the irritating doubt that drives the process.  In the artificial systems we build to support inquiry, what measures can we take to supply this sense or arrange a substitute for it?  If the proper measure of doubt cannot be formalized, then all responsibility for judging it will have to be assigned to the human side of the interface.  This would greatly reduce the usefulness of the projected software.
The unsettled state which instigates inquiry is characterized by a high level of uncertainty.  The settled state of knowledge at the end of inquiry is achieved by reducing this uncertainty to a minimum.  Within the framework of information theory we have a concept of uncertainty, the entropy of a probability distribution, as being something we can measure.  Certainly, how we feel about entropy does not enter the equation.  Can we form a connection between the kind of doubt that drives human inquiry and the kind of uncertainty that is measured on scales of information content?  If so, this would allow important dynamic properties of inquiry driven systems to be studied in abstraction from the affective qualities of the disagreements which drive them.  With respect to the measurable aspects of uncertainty, inquiry driven systems could be taken as special types of control systems, where the variable to be controlled is the total amount of disparity or dispersion in the knowledge base of the system.
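The information-theoretic notion of uncertainty invoked above is the Shannon entropy of a probability distribution, which is directly computable.  The following sketch illustrates the two poles of inquiry mentioned in the paragraph: maximal uncertainty at the unsettled start and minimal uncertainty in the settled state.

```python
# Uncertainty measured as the Shannon entropy of a discrete distribution.

from math import log2

def entropy(distribution):
    """Entropy in bits of a discrete probability distribution."""
    return -sum(p * log2(p) for p in distribution if p > 0)

# The unsettled state that instigates inquiry: maximal uncertainty
# over four equally likely answers.
doubtful = entropy([0.25, 0.25, 0.25, 0.25])   # 2.0 bits
# The settled state at the end of inquiry: one answer holds with certainty.
settled = entropy([1.0, 0.0, 0.0, 0.0])        # 0.0 bits
```

On this reading, an inquiry driven system can be treated as a control system whose controlled variable is the total information-theoretic dispersion in its knowledge base.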
The assumption of modularity, that the affective and intellectual aspects of inquiry can be disentangled into separate components of the system, is a natural one to make.  Whenever it holds, even approximately, it simplifies the task of understanding and permits the analyst or designer to assign responsibility for these factors to independent modules of the simulation or implementation.
However, this assumption appears to be false in general, or true only in approaching certain properties of inquiry.  Other features of inquiry are not completely understandable on this basis.  To tackle the more refractory properties, I will be forced to examine the concept of a measure which separates the affective and intellectual impacts of disorder.  To the extent that this issue can be resolved by analysis, I believe that it hinges on the characteristics that make a measure objective, that is, an impression whose value is invariant over many different perspectives and interpretations, as opposed to being the measure of a purely subjective moment, that is, an impression whose value is limited to a special interpretation or perspective.

The preceding discussion has indicated a few of the properties that are attributed to inquiry and its agents and has initiated an analysis of their underlying principles.  Now I engage the task of giving these processes operational definitions in the framework of mathematical systems theory.
Consider the inquiry driven system as described by a set of variables:
: x<sub>1</sub>, ..., x<sub>n</sub>, a<sub>1</sub>, ..., a<sub>r</sub>.

The x<sub>i</sub> are regarded as ordinary state variables and the a<sub>j</sub> are regarded as variables codifying the state of knowledge with respect to a variety of different questions.  Many of the parameters a<sub>j</sub> will simply anticipate or echo the transient features of state that are swept out in reality by the ordinary variables x<sub>i</sub>.  This makes these information variables subject to fairly direct forms of interpretation, namely, as icons and indices of the ordinary state of the system.  However, in order for the system to have a knowledge base which takes a propositional stance with respect to its own state space, other information variables among the a<sub>j</sub> will have to be used in less direct ways, in other words, made subject to more symbolic interpretations.  In particular, some of them will be required to serve as the signs of logical operators.

The most general term that I can find to describe the informational parameters a<sub>j</sub> is to call them "signs".  These are the syntactic building blocks that go into constructing the various knowledge bases of the inquiry driven system.  Although these variables can be employed in a simple analogue fashion to represent information about remembered, observed, or intended states, ultimately it is necessary for the system to have a formal syntax of expressions in which propositions about states can be represented and manipulated.  I have already implemented a fairly efficient way of doing this, using only three arbitrary symbols beyond the set that is used to echo the ordinary features of state.

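The division of labor among the sign variables can be pictured in a small sketch.  The encoding below is assumed purely for illustration and is not the representation the text says has already been implemented: some signs echo features of state directly, as icons or indices, while others serve symbolically as signs of logical operators in propositional expressions.

```python
# Illustration only: one hypothetical way the sign variables a_j might work.

x = {"x1": 1, "x2": 0}                       # ordinary state variables

a_iconic = {"a1": x["x1"], "a2": x["x2"]}    # signs that echo features of state

# A symbolic sign: a propositional expression about the state space,
# encoded here as a nested tuple whose operator signs are "not" and "and".
a_symbolic = ("and", ("x1",), ("not", ("x2",)))

def holds(expr, state):
    """Evaluate a propositional sign against an ordinary state."""
    op = expr[0]
    if op == "not":
        return not holds(expr[1], state)
    if op == "and":
        return all(holds(sub, state) for sub in expr[1:])
    return bool(state[op])                   # a bare variable sign

satisfied = holds(a_symbolic, x)             # is the proposition true here?
```

The point of the sketch is only the contrast in interpretation: the iconic signs track the state directly, while the symbolic signs assert something about it that can be true or false.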
A task that remains for future work is to operationalize a suitable measure of difference between alternative propositions about the world, that is, to sort out competing statements about the state space of the system.  A successful measure will gauge the differences in objective models and not be overly sensitive to unimportant variations in syntax.  This means that its first priority is to recognize logical equivalence classes of expressions, in other words, to discriminate between members of different equivalence classes, but to respond in equal measure to every member of the same equivalence class.  This requirement brings the project within the fold of logical inquiry.  Along with finding an adequate measure of difference between propositions, it is necessary to specify how these differences can determine, in some measure, the state transitions of an inquiry driven system.  At this juncture, a variety of suggestive analogies arise, connecting the logical or qualitative problem just stated with the kinds of questions that are commonly treated in differential geometry and in geometric representations of dynamics.
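The requirement that such a measure respond to logical equivalence classes rather than to syntactic variation can be sketched by comparing propositions through their sets of satisfying models.  The function names below are assumptions of the sketch, not an existing implementation.

```python
# Sketch: compare propositions by their model sets, so that the measure
# responds to differences in objective models, not to variations in syntax.

from itertools import product

def models(prop, variables):
    """Set of assignments (as tuples) satisfying a propositional function."""
    return {vals for vals in product([False, True], repeat=len(variables))
            if prop(dict(zip(variables, vals)))}

def difference(p, q, variables):
    """Count the models on which two propositions disagree."""
    return len(models(p, variables) ^ models(q, variables))

vs = ["x", "y"]
p = lambda s: s["x"] and s["y"]
q = lambda s: not (not s["x"] or not s["y"])   # same proposition, other syntax
r = lambda s: s["x"] or s["y"]

d_pq = difference(p, q, vs)   # 0: equivalent despite different syntax
d_pr = difference(p, r, vs)   # 2: p and r disagree on two assignments
```

A measure of this kind assigns zero difference to every pair of expressions in the same equivalence class, as required, though enumerating models in this brute-force way is of course only feasible for small numbers of variables.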
===4.2. The Context of Inquiry===