A CONSTRUCTIVIST METHOD FOR THE ANALYSIS OF NETWORKED COGNITIVE COMMUNICATION AND THE ASSESSMENT OF COLLABORATIVE LEARNING AND KNOWLEDGE BUILDING

This article presents a discourse analysis method designed to study networked cognitive communication processes in knowledge communities, such as conceptual change, higher order learning, and knowledge building. The method is grounded in genetic epistemology and integrates constructivist and socioconstructivist theoretical concepts. The sentence (understood as judgment) is chosen as the unit of analysis, and the application of the method is explained. In addition, a study of transcripts from an asynchronous networked community of nurses illustrates the method and demonstrates how conceptual change, collaborative learning, and knowledge building can be identified. Advantages and limitations of the method are also discussed.


I. INTRODUCTION
This paper presents a discourse analysis method for capturing conceptual change, higher order learning and knowledge building in networked communication processes taking place in online conferences. Since electronic asynchronous communication tools were made available to the public, educators and practitioners from different fields have recognized that the tools could be integrated for pedagogical or professional use. Although there is a general perception that what happens in electronic conferences is positive, sound conceptual tools for assessing the different processes involved, especially collaborative learning, have not yet been provided.
To support those processes, different conferencing software and online tools were developed to help asynchronous community participants explore, restructure, deepen, and transform knowledge, and to learn, resulting in a myriad of systems. Some systems provide statistical tools enabling users to obtain quantitative indicators of participation, such as the number of messages written, read, and responded to. Other systems provide tools intended to support learning activities, such as keywording, search tools, and different sorts of scaffolds. However, the question of how to assess collaborative networked learning and knowledge building remains largely unanswered, and systems have not yet achieved such a degree of development.
I have been working on this problem for the last six years and have developed a research method designed to study cognitive communication processes of networked conversations, such as learning and knowledge building. To demonstrate the method, I present a study of online asynchronous exchanges based on quantitative and qualitative discourse data collected in a community of nurses. This community was established to share expertise on heart care and to discuss practices. The software that enabled this community is Knowledge Forum.

II. THEORETICAL FOUNDATIONS
The method introduced here is grounded in genetic epistemology [1, 2, 3, 4, 5, 6, 7]. This epistemology maintains that knowledge is constructed and that logical systems precede meaning systems; even when the content of what we communicate is affective or emotional, it is logically structured in speech. For example: I love or I do not love; if I love and do something to be loved, then the person I love will love me (or not). Conversely, if somebody is emotionally out of control, meaning systems prevail and logical reasoning is affected: psychologists and psychiatrists will consider the person ill because she/he is unable to control herself/himself, unable to operate logically upon his/her feelings. The genetic model of knowledge is considered by Piaget himself [8] as an evolutionary Kantism in which pure reason should be understood as the genetic givens that provide the possibility that knowledge be constructed in the interaction between the epistemic subject (the subject of knowledge) and the environment [9]. In addition, I integrate elements of (1) Vygotsky's learning theory [10], such as the understanding that language is the natural mediating communication tool that links logic and meanings; (2) the viewpoint according to which distributed cognitions can only be considered along with individual cognition [11]; and (3) the idea that knowledge building is a creative process leading to innovation [12].
Vygotsky's epistemological origins are rooted in Hegelian post-Kantian philosophy and Marxism. Socioconstructivism considers the social environment primary over the individual. In contrast, but somewhat complementarily, constructivism (genetic epistemology), rooted in evolutionary Kantism, considers that the genetic possibilities of brain functioning are actualized in and through experience. Deep divergences between the views of Vygotsky and Piaget are illusory, as shown by Cole and Wertsch [13] and by Piaget himself [14] in the book in which he responded to Vygotsky's remarks about his theory.
Piaget's and Vygotsky's works were not focused on communication. Piaget's model of knowledge proposes that logic expresses abstractly the fundamental operations of the brain. Individual thinking, thus, is governed by procedures that progress over time in an active process of conceptual assimilation and accommodation leading to adaptation to the social environment. This process shapes understanding through recursive comprehension and interpretation. This is what Piaget refers to as knowledge construction: a formal, representational approach to explaining the possibility of knowing. Vygotsky never proposed an epistemology. Rather, his psychology stressed that cognitive development occurs fundamentally through collaborative processes that can advance individuals beyond expected genetic possibilities. This is what he means by the social construction of knowledge: a content-like (cultural) approach to explaining the possibility of knowing.
Salomon [11] quite correctly questions where the individual lies in "social" cognition. He argues that distributed cognition is a notion that could stand if the "social" integrates the "individual" because cognitions are not distributed; procedural knowledge is totally individual; representations are also

III. METHOD

A. Transcript Analysis Methods
Studies on conferencing interaction are ubiquitous, but there are major concerns about the focus of the methods developed to analyze those networked conversations. Consequently, and given the nonexperimental nature of the studies, reliability is an issue. Most theoretical and methodological qualitative research proceeds by drawing meaning from context. In addition, some authors point to this characteristic as an integral part of the definition of qualitative methods because "The aim of these methods is to discover novelties and to develop empirically grounded theories rather [than] to verify what is already known (e.g. a theory that has already been formulated)" [18, p. 635]. This affirmation is misleading, however, because it does not take into consideration qualitative epistemological approaches that are not strictly empiricist. My approach is ecological and dynamic, i.e. it takes into consideration both neurobiological genetic givens and the importance of the meaning-making collaborative process that develops over time. In other words, it is an essentially qualitative approach in which meanings grounded in the lives of those interacting asynchronously are understood within the framework of the subjacent mental operations undertaken by each individual in the collaborative effort of exchanging ideas. Mental operations (or mental procedures) are not exclusively qualitative; they are partially innate and partially learned logical operations: in the same way a person does this or that, an individual reasons if this then that.
To my knowledge, most published discourse analyses of networked conversation are grounded in educational data. Research goals, theoretical perspectives, and methods vary across studies and are not replicated. The result is a very heterogeneous corpus of scientific research that could be defined as exploratory. Curiously enough, most of the studies considered qualitative rely on the quantitative measurement of qualitative categories. Measuring qualitative categories can indeed suggest certain trends. However, such studies are very limited because summing up categories says nothing about the knowledge building process. It is only through attention to the process that collaborative conceptual change and learning can be assessed. These two critical aspects of knowledge building are embedded in a continuous progressive process. Examples of transcript analysis methods abound. The following descriptions illustrate some shortcomings in discourse analysis research.
1. Mason [19, 20] - The author proposed a typology for the study of educational online conference messages, with a view to pedagogical values, that integrates both quantitative usage data and the discourse itself. The types are conceptualized upon a number of questions: Do the participants build on previous messages, draw on their own expertise, refer to course material, refer to material outside the course, and initiate new ideas? Does the instructor control, direct, or facilitate the exchanges? No further information is given concerning the coding process and inter-coder reliability procedures. However, the author proposes this method as one able to contribute to postpositivist or interpretive paradigms.
2. Anderson and Garrison [21] - In this study, the authors wished to establish relationships between teaching intervention strategies and online interaction. They identified three categories of what they call discourse styles: questions only, statements only, and conversational (first coding level). The first two were considered formal interventions while the third was considered mainly informal. In addition, the students' responses were categorized according to a number of variables: unlinked, references, and linked (second coding level), as well as unsupported and supported (third coding level). Blind inter-coder reliability procedures were performed, achieving 90% agreement. The study proceeded by having quantitative measures, obtained using the chi-square statistical procedure, guide an in-depth qualitative analysis of the content. Both numbers and verbatim excerpts are presented to discuss results. No clear theory is advocated to sustain the method, although the authors refer to a number of studies on the subject.
3. Henri [22] - In this study, the author proposed a content analysis method to assess learning processes. The unit of analysis is "meaning," but no clear criteria are presented to identify how to circumscribe it. A five-level analytical model is proposed in which the participative, social, interactive, cognitive, and metacognitive dimensions of the learning process are studied. The participative dimension uses quantitative usage data of both groups of learners and individuals (distinguishing learners from educators). The social dimension uses transcripts to assess the occurrence of comments that indicate an intention to socialize. The interactive dimension looks at direct and indirect feedback given by a participant as a reaction to others. The cognitive dimension identifies cognitive skills related to reasoning in an attempt to distinguish surface from in-depth information processing. The metacognitive dimension is one of self-awareness of declarative and procedural knowledge. All dimensions are combined so the researcher can make sense of them and judge the quality of the learning process. This method does not follow a well-defined theory, and each dimension draws from a different source, resulting in a lack of epistemological coherence. In addition, the dimensions overlap (participation in an online conference can be undertaken as a social, interactive, cognitive, as well as a metacognitive act). No details about the coding are provided and no inter-coder reliability procedure is reported.
4. [23] - In this study, the authors assessed inquiry capabilities as well as critical thinking through a multi-methodology: a survey to evaluate students' perceptions, treated quantitatively using multivariate analysis of variance (MANOVA); a factor analysis; semi-structured interviews; a focus group; and a transcript analysis using the Atlas-ti software. No description of the method or of the use of inter-coder reliability procedures is provided. The authors state that they triangulated the data, but they do not indicate which theoretical concepts were applied for such triangulation.
5. Newman, Webb, and Cochrane [24] - In this study, the authors presented an analytical method for the study of critical thinking (considered to be content analysis). Building on contributions of other authors, they presented a list of what they consider to be indicators of critical thinking. To those indicators the authors attributed positive or negative marks to indicate that a critical or an uncritical indicator is present. Finally, they applied the formula (x+ - x-) / (x+ + x-). Results are converted to a scale from -1 (uncritical) to +1 (critical). The ratios resulting from the application of this formula are intended to reveal critical thinking. No details about the coding procedures are provided. No inter-coder reliability procedure was adopted.
6. Howell-Richardson and Mellar [25] - In this study, the authors reported the use of two levels of analysis, each one using a different unit. In the first level (message as the unit of analysis), the length and distribution of messages are quantitatively calculated. In addition, the relationships between messages are studied by considering explicit references to previous messages and explicit repetition of lexical items. In the second level, referred to as interaction analysis, the concept of the illocutionary act (from Speech Act theory) defines the unit. These units are coded according to three categories: (1) whether the focus of the act is on the group, on-task, or off-task; (2) to whom the act is addressed; and (3) the degree of explicitness of references to other messages. In addition, these broad categories are broken down into subcategories. The results are quantitatively measured by percentages. No triangulating or integrating factors between the two levels are mentioned. No inter-coder reliability procedure is reported.
7. Mowrer [26] - In this study, the author was interested in the interactive nature of participation, i.e. the feedback participants give to each other in an educational context, and the influence of those responses. Computer usage data and transcriptions were used in this quantitative study. The researcher identified the main topics discussed in the conference and created 14 categories (service learning, student satisfaction, helpful activities, student suggestions, student complaints, good communication, questions to instructor, questions to peers, class structure, helpful hints, encouraging comments, learning advancement, miscellaneous comments, grade assessment). An inter-coder reliability procedure was performed, achieving 85% agreement between two coders. In addition, the categories were further broken down into sub-categories. The analysis used both the chi-square statistical procedure and frequency analysis to study the students' postings according to each category. No theoretical foundation is provided to explain the emergence of the categories (grounded theory?) from the online discourse.
8. Gunawardena, Lowe, and Anderson [27] - In this study, the authors introduced a thoughtful model of analysis to assess the social construction of knowledge and collaborative learning. According to this theoretical model, computer-mediated communicative interaction is understood as the production of new knowledge or the understanding of meanings. In addition, the model draws on grounded theory principles to propose a five-phase evolution of negotiation leading to the co-construction of knowledge: sharing and comparing information; the discovery and exploration of dissonance or inconsistency among ideas, concepts, or statements; negotiation of meaning and construction of knowledge; testing and modification of the proposed synthesis or co-construction; and agreement statements and applications of newly constructed meaning. Each phase is further subcategorized. The unit of analysis adopted was the message. A coding sheet was prepared and then frequencies were calculated. The messages were re-read with the quantitative patterns in mind in order to understand the process more deeply. No details about the coding are provided and no inter-coder reliability procedure was used.
9. Marttunen [28] - This study focused on argumentation, defined as the process of grounding stated claims. The data were treated in a two-stage process. First, argumentation and counter-argumentation were identified and analyzed. Second, only counter-argumentation was analyzed. In addition, messages were classified according to a three-level scale: good, moderate, and poor argumentation. Blind inter-coder reliability procedures were performed, achieving .71 (p < .01) using the Bryman and Cramer method (two coders). A log-linear analysis was applied to quantify arguments and counter-arguments. The analysis applied the three-level scale presented above.
10. Bullen [29] - This case study included quantitative usage data and content analysis in which, as above, critical and uncritical thinking skills are identified in the online discourse. In addition, a three-level scale of critical thinking was used to classify results: extensive, moderate, and minimum use of critical thinking. No information is provided about the method. No inter-coder reliability procedure is reported. Results are presented in a narrative form, classical in case studies, in which quantitative percentages illustrate the quality.
11. Kanuka and Anderson [30] - In this study, the authors undertook a multi-instrument analysis using a survey to assess the perception that students had about their learning, semi-structured telephone interviews to better understand the students' experiences, and a transcript analysis using the Gunawardena, Lowe, and Anderson model [27, cf. above]. They report having used grounded theory data analysis methods to study interactions, but it is not at all clear whether this was added as an additional instrument or whether the theory was used as a triangulating factor. No procedures are reported concerning data triangulation, the quantitative methods used, or inter-coder reliability.
12. Hara, Bonk, and Angeli [31] - In this study, Henri's method [22] was applied but somewhat transformed from an essentially qualitative method into a quantitative one. The authors consider criticisms [25] that Henri's method [22] is unreliable. Hence, they adopted the paragraph, or "idea" unit, as the unit of analysis. Inter-coder reliability procedures were performed and applied separately to each dimension, achieving from 71% to 78% agreement (three coders). Conclusions were drawn from chi-square coefficients and percentages.
13. Fahy, Crawford, Ally, Cookson, Keller, and Prosser [32] - In this study, the authors seemed to be willing to assess computer-mediated communication interaction as well as knowledge construction. They reported using three methods. The first defines postings as being vertical (seeking an answer from someone who knows more about a given subject matter) or horizontal (interacting in a more egalitarian situation in which participants co-construct) in order to classify them as simple assimilation of information or knowledge construction. Theoretical grounding for this position is sought in Vygotsky's notion of the zone of proximal development. The second method (from [29]) looks at critical thinking and participation. The third classifies discourse according to the following categories: vertical questioning, horizontal questioning, statements, reflections, and scaffolding. Blind inter-coder reliability procedures were performed, achieving from 70% to 90% agreement depending on the grid used (two coders). No integrative theoretical explanations are presented and no triangulating factor was used to interpret the data resulting from the three different grids of analysis.
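As an illustration of how crude such aggregate indices can be, the Newman, Webb, and Cochrane ratio from item 5 reduces to a few lines of arithmetic. The sketch below is mine, and the indicator counts in it are invented for illustration.

```python
def critical_thinking_ratio(x_pos: int, x_neg: int) -> float:
    """Newman, Webb, and Cochrane's depth ratio (x+ - x-) / (x+ + x-).

    x_pos counts critical indicators in a transcript segment and
    x_neg counts uncritical ones; the result falls on a scale from
    -1 (wholly uncritical) to +1 (wholly critical).
    """
    if x_pos + x_neg == 0:
        raise ValueError("no indicators were counted")
    return (x_pos - x_neg) / (x_pos + x_neg)

# Hypothetical counts: 12 critical and 4 uncritical indicators.
print(critical_thinking_ratio(12, 4))  # 0.5
```

The sketch also makes the method's limitation concrete: very different discussions (say, 3 critical indicators against 1, and 300 against 100) collapse to the same ratio, which is one reason why summed categories say little about the underlying knowledge building process.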

B. Theoretical Standing of the Method
After having worked with transcript analysis for a number of years, searching for reliable tools to study knowledge building processes, I developed a qualitative method that also takes quantitative data into consideration. The method, referred to as the ecological constructivist perspective, integrates both the contextual content of the conferences (declarative knowledge woven in collaboration across time: qualitative) and the underlying logical operations (procedures constructed in collaboration: qualitative and quantitative). In addition, this method combines what Piaget and cognitive science theorists have stressed about the central role of procedural knowledge with what Vygotsky highlighted concerning the production of meaning as a social process acting upon individuals: logical structures and meanings.
The ecological constructivist perspective suggests that the social environment and the individuals are part of a symbolic ecosystem, which is the networked cognitive communication. Configurations of meanings (meanings upon logical structures) are shared and evolve in collaboration across time [15, 33]. My approach focuses on how symbolic ideas about practice (resulting in symbolic actions upon and in the world) enable logically structured argumentation (the genetic workings of the brain that circumscribe and limit the possibilities of symbolic ideas about practice) in networked environments [15]. In other words, it is a search for understanding the underlying logical structure of collaborative networked argumentation in order to capture how people make sense of meanings together [1], innovate, create, and advance ideas [34].

B. Hypothesis Behind the Method
Piaget always remarked that hypothesizing is the gist of human thinking [3], be it in the form of inferences, of naïve observations (such as those of children), or of reflective reasoning. This hypothesis has never been refuted in fifty years of cognitive science following his outstanding contributions to epistemology and to developmental and cognitive psychology. My hypothesis [15] is that, consistent with Piaget's hypothesis [3] and Grize's formulation of the written communication process [17], natural conversation reveals that gist essentially through the conditional logical operation, hence through hypothesis formulation and inferencing. In addition, when natural conversation occurs through written discourse, this process of hypothesis construction and re-construction is woven in collaboration. In other words, groups engaged in electronic conferencing advance (or not) hypothesizing and inferencing through a collaborative process whose roots lie both in the background knowledge of each interlocutor and in the knowledge created in their written action. I intentionally use the term networked conversation to mean essentially the same as networked argumentation to describe this process in the context of electronic conferencing through asynchronous learning networks (ALN). Hegenberg [35] considers every conditional structure (If-Then) in which a number of premises lead to a conclusion to be an argument. This definition is much broader than that commonly used by argumentation theorists [36], in which argumentation is seen as a very specific process of affirming, refuting by the presentation of evidence, and concluding, as did Toulmin [37] when presenting argumentation in terms of the normative procedures of juridical contexts. Hegenberg's position [35] is in line with that of Grize [17], who explains that all conversational activity should be seen as argumentation. Indeed, when people do not need to communicate, they do not need to argue anything.
I would like to note that the claim that hypothesizing is the gist of human thinking is, at least for those who adhere to genetic epistemology, self-evident. However, verifying the occurrence of conceptual change (i.e. the process through which a person re-assesses prior knowledge, leading to transformation and re-equilibration), and whether it can be attributed exclusively to an individual or to a collaborative process, is complex. Verifying that hypothesizing is the gist of human thinking and that conceptual change is, essentially, a process grounded in hypothesizing is not obvious. Understanding whether higher order learning (i.e. the process through which conceptual change is accommodated, transforming previous mental structures into new ones) occurred or not, and, if it did, whether it can be attributed to an individual process of meta-memorization or to a collaborative process, requires adequate methods. In addition, to verify the occurrence of knowledge building, an ongoing conceptual process of collaborative learning needs to be shown to be creative, innovative, and collective. The method that I developed indicates whether a networked argumentation process reveals collaborative conceptual change, learning, and knowledge building.
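Operationally, a first pass over transcripts can simply flag sentences bearing surface conditional markers as candidates for hypothesizing or inferencing. The sketch below is only an illustration of that idea: the marker list and the example sentences are my own assumptions, not the article's coding grid, and much inferencing is tacit, carrying no explicit If-Then wording at all.

```python
import re

# Assumed surface markers of conditional (If-Then) reasoning; this list
# is illustrative only, not the coding scheme used in the study.
CONDITIONAL_MARKERS = re.compile(
    r'\b(if|then|unless|provided that|assuming|suppose)\b', re.IGNORECASE)

def flags_hypothesizing(sentence: str) -> bool:
    """Return True when the sentence carries an explicit conditional marker."""
    return CONDITIONAL_MARKERS.search(sentence) is not None

# Invented examples in the register of the nurses' conference.
sentences = [
    "If we reinforce discharge teaching, then readmissions should drop.",
    "The unit admitted three patients yesterday.",
]
print([flags_hypothesizing(s) for s in sentences])  # [True, False]
```

Such a surface filter can only suggest where hypothesizing might be occurring; deciding whether a flagged sentence actually advances a collaborative hypothesis still requires the qualitative reading the method prescribes.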

C. The Critical Problem of the Unit of Analysis
In a literature review of transcript analysis methods, Rourke et al. [38] point out that the unit of analysis varies from the phrase to a whole text. Little is said about the theories behind the researchers' decisions to adopt a given unit. It is clear that the choice of a unit, with few exceptions, rarely reflects epistemological and theoretical coherence and soundness. This problem is not minor. It has serious implications for the way networked conversation is studied, and highlights that research on the analysis of knowledge building processes in electronic conferencing is in its infancy.
Two factors should be taken into account when approaching the problem of the unit of analysis. The first is related to the following question: what is the human cognitive unit of thinking? The second is related to the technology: what is the digital system used to structure the human cognitive units of thinking? Consistent with genetic epistemology, we adopt the sentence as the human cognitive unit of analysis. The rationale traces back to the process of child development.
Piaget demonstrated that meanings are logically structured well before the advent of language [39, 40]. Babies pass through a progressive process of mastering simple actions (such as assimilating the process of moving the eyes) to complex actions (such as assimilating the process of moving the eyes along with the head, the arm, the hand, etc.), actions that have logical structures subjacent to the content of experience. These actions, assimilated by the child, are structured as schemes, which are, according to Piaget, what can be generalized from a given action [39]. Modern cognitive science adopted this Piagetian concept, as can easily be checked in cognitive science manuals [41]. Schemes, thus, are structures generated in a specific context that can be applied in different contexts. These structures pass from action to concept with the advent of language. The question here is, thus, how does this happen and how are the resulting meaning systems expressed through language? There is continuity from the assimilation of actions to the assimilation of concepts. In the same way that an action can be generalized as an action scheme, a concept can be generalized as a conceptual scheme. For example, after learning that the hands can grab the mother's hands, the child generalizes this action in order to grab dolls, bottles, etc. After learning how to recognize and categorize an apple as a fruit, the child will generalize and do the same to categorize a toy truck as a vehicle.
The reasoning just presented would lead us to choose the concept as the unit of analysis of any kind of discourse. However, concepts cannot be understood in isolation because they are related to other structural levels (for example, the concept "orange" can be integrated into the concept "fruit," which can be integrated into the concept "plant kingdom"). Similarly, those levels cannot be abstracted from the context of their occurrence, i.e. logical systems are always related to meaning systems. Building on this partially innate and partially constructed logic of actions moved by contextual meanings, children start, through imitation, to use words that name the actions, actors, and things involved. Although one could hypothesize that the word would be the original meaning unit, Piagetian research shows that this is not the case. Meanings cannot be dissociated from a larger logical structure that organizes a meaning system. This inter-relatedness between logical structures and systems of meanings makes it unreasonable to understand a word as a unit. For example, when a young toddler looks at mom, raises his arm, and says "bua, bua" (the way my youngest son used to name "bottle" when he was one year old), he is not naming the bottle itself but the whole meaning system involving this central word. The logical and meaning system "I am hungry - I want the bottle - I want to drink milk - I want to have comfort" is subjacent to the word "bua, bua." Logically, this structure is a hypothetical one. From a meaning viewpoint, it reveals that communicating hypotheses is a way to fulfill basic needs: "If I am hungry, then I want to drink milk. If I cry 'bua-bua', then mom will listen. If mom listens, then she will understand my call, prepare a bottle for me, feed me, and make me happy."
The consequence of this attribution is that "individual" meanings cannot be isolated from a meaning system, which is expressed by a logical structure. Every meaning system reflects an action (physical, discursive, or both) that has a subjacent logic emerging within the appropriate context. Piaget [42] explains that action schemes are the products of accommodation processes, in which previous procedures related to sequences of movements are adjusted to new situations, while assimilation is the process in which new or old objects are incorporated into known schemes. From the language viewpoint, an analogy applies: judgments are acts that put concepts together or apply them to objects [42]. Concepts are systematic unities in which extension (logical) defines the class and in which comprehension consists of properties or relations, a predicate being itself a relation [42]. In other words, the meaning unit is not the concept but the judgment. In terms of discourse, the minimal unit reflecting a judgment (which necessarily contains a verb) is the sentence. A sentence is "a word, clause, or phrase or a group of clauses or phrases forming a syntactic unit which expresses an assertion, a question, a command, a wish, an exclamation, or the performance of an action" that in writing usually begins with a capital letter and concludes with appropriate end punctuation [43, cf. online].
We add to this definition that the presence of a verb is essential and that sentences express judgments that are not only conceptual (scientific knowledge) but also notional (popular knowledge).
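Taking the sentence as the unit of analysis implies segmenting each message into sentence units before coding. A minimal sketch of such segmentation follows; the splitting rule (end punctuation followed by whitespace and a capital letter or quote) and the sample message are my own assumptions, and real conference transcripts, with abbreviations, ellipses, and informal punctuation, would need a more careful segmenter.

```python
import re

def sentence_units(message: str) -> list[str]:
    """Naively split a conference message into candidate sentence units.

    Splits after . ! or ? when followed by whitespace and a capital
    letter or an opening quote; a rough heuristic, not a full segmenter.
    """
    parts = re.split(r'(?<=[.!?])\s+(?=[A-Z"])', message.strip())
    return [p.strip() for p in parts if p.strip()]

# An invented message in the register of the nurses' conference.
message = ("Beta blockers reduced readmissions on our unit. "
           "If we standardize discharge teaching, then compliance improves. "
           "Has anyone tried this?")
for unit in sentence_units(message):
    print(unit)
```

Each resulting unit can then be checked against the definition above (does it contain a verb, does it express a judgment?) before it enters the coding grid.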

D. Assessing Conceptual Change, Learning, and Knowledge Building
How, then, do I assess collaborative conceptual change and learning? Piaget [44] explains the process of awareness as the engine of change. There is a clear difference between being able to succeed when performing an action and understanding it. This finding emerged from research [44] verifying the difference between succeeding and understanding in the context of physical actions (in the context of children playing games, for example). As mentioned previously, Piaget did not develop an epistemology of communication science as he did with biology, psychology, sociology, physics, and mathematics. However, the analogy is applicable to studies concerning discursive action, as illustrated in the research conducted to understand how meanings operate [1]: a person can succeed in identifying a problem and structuring it through language, but to understand the problem, to lay down an argument (naïve as it might be), to identify the premises and reflect upon them, and to solve the problem by putting forward an acceptable solution to the argument requires logical reasoning (reflecting upon the terms of a proposition, formulating hypotheses, re-constructing previous logical systems). In this logical process of solving problems through language (if there is no problem, there is no need to communicate) we also find inferencing, which is normally a tacit process in which individuals go from meaning to meaning to draw a valid relationship and re-construct a meaning system. Inferences are, reduced to their gist, If-Then operations subjacent to discourse. Grize [17] calls this inferencing process natural logic.
The distinction between succeeding and understanding points to the difference between cognitive and metacognitive behavior. Metacognition is an awareness of our own cognitive processes, or of the steps taken to transform a concept, a notion or an idea. Concepts are meanings attributed to objects that operate structurally within a meaning system. Therefore, apple is a concept because it can be related to the ascending category of fruits, the descending category of types of apples, and the horizontal category of other fruits such as bananas, grapes, and oranges. Notions are meanings that operate within meaning systems but cannot be part of a structure. Feeling is a notion because, although it can be related to meanings such as "emotion" or "human value," it is difficult to structure it clearly. We could broadly distinguish the two by saying that concepts are concrete or abstract objects about whose place within a hierarchy no doubt exists, while notions are objects (normally abstract) whose meanings are not easily structured within a hierarchy. Finally, ideas are concepts or notions that clearly result from creativity and innovation.
The change from previously held concepts, notions or ideas to new (or transformed) ones might follow the scientific method (drawing conclusions from data and arguing for the validity of the concept). Alternatively, this change may follow "popular" methods, or reasoning believed to be true by the individual for reasons that are psychological (notion or idea). Conceptual change is an intentional and reflective cognitive process leading to higher order learning, as opposed to lower order learning, which is mainly automatic (such as learning instinctively, through unaware calculation, under which conditions a person can cross a street). When we exercise mental procedures to store in memory certain information that we need to recall (such as learning concepts for an exam that, even if successfully stored in long-term memory, are not likely ever to be used again), we are not necessarily engaging in a higher order learning process. Conceptual change can occur individually or in collaboration (collectively). When it is collaborative, concepts, notions or ideas are changed or transformed in a collective exchange, as is the case in network-enabled asynchronous written discourse processes.
How do I assess collaborative conceptual (or notional or idea) change and (higher order) learning in online discourse when these processes follow one another? I do this by identifying concepts, notions or ideas that are both at the centre and a result of a hypothetical collaborative process of networked argumentation. In this process, conference participants "build on" the contributions of others using "if this, then that" moves: explicit conditionals allow explicit hypothesis formulation, and implicit conditionals allow implicit inferencing. The result of this exchange is that participants re-assess and reflect on knowledge, and rebuild previously held concepts, notions or ideas. When collaborative conceptual change occurs, collaborative learning is very likely to take place too. After assessing a process of conceptual change through the identification of the subjacent conditional operations that make people change previously held knowledge (re-equilibrating, thus, the meanings that guide understanding), collaborative learning can also be assessed. However, collaborative learning can only be confirmed if there is evidence in the sequence of exchanges that conceptual change was definitely incorporated into the renewed discourse, either by affirming it or by re-transforming it in the direction of renewed concepts, notions or ideas.
How do I assess knowledge building? A change in concepts, notions and ideas through networked argumentation that becomes more or less established (stable) in the discourse (thus, collaborative learning) is not in itself evidence of knowledge building. The change has to be profound, i.e. the resulting knowledge must be unique and a truly collective result of many asynchronously interconnected minds, something that an individual could not achieve alone [34].

Presentation of the Method
My method consists of capturing different levels of logical operations that, by revealing the frequency of use (quantitative) and the progressive process (qualitative) through which they were developed, identify how meaning systems relate, in order to verify collaborative conceptual change and learning, and knowledge building. For the reasons presented above, I work with the sentence as the coding unit. However, to understand the relationships of sentences within and between messages, I draw the relationships between meanings (themes) in the messages, which are the units of analysis. The reason is that although the cognitive thinking unit of humans is an action (be it physical or discursive, or both), the online discourse is built upon messages, which are the technological units that contain the human cognitive thinking unit.
The method follows three steps, aiming to integrate (1) logical procedures that reveal the nature of the inquiry and structurally guide the following step [33,45,46]; and (2) instances of arguments (according to the definition presented above) to understand how the concepts, notions and ideas that make up thoughts are structured [45]. In addition, the method (3) establishes relationships between instances of arguments across messages.
Before coding, all messages of all conferences are organized chronologically, with references to how they relate according to topic. Sentences are then clearly identified. Portions of text that do not have a verb, explicit or implicit, are eliminated for coding purposes (say, "Hi there") but are considered for making sense of the data. The coding procedure has three steps: the first is logical, and the second is based on argumentation moves; these two steps describe the subjacent logic and how it relates to systems of meanings. Frequencies are used to provide additional information about the trends found. In the third step, the units coded within messages (first and second steps) are used as anchors for establishing relationships across messages in order to apply the meaning implication analysis [33,45,46], i.e. establishing the relationships among meanings across messages by applying the Piagetian formula of meaning implication [1,2,4]. The analysis provides results about collaborative conceptual change and learning, and knowledge building. Hereunder the reader will find a more detailed explanation of the three steps.
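The pre-coding segmentation just described can be sketched as follows. This is an illustrative approximation only: the splitting rule and the greeting list (`GREETINGS`) are invented simplifications, since in the actual method verbless portions were identified by hand.

```python
import re

# Hypothetical list of verbless interjections set aside before coding.
GREETINGS = {"hi", "hello", "hi there", "thanks"}

def segment(message: str) -> tuple[list[str], list[str]]:
    """Split a message into candidate sentences on end punctuation and
    return (coding_units, set_aside). Set-aside fragments are kept for
    sense-making but are not coded."""
    parts = [p.strip() for p in re.split(r"(?<=[.!?])\s+", message) if p.strip()]
    units, aside = [], []
    for p in parts:
        if p.rstrip(".!?").lower() in GREETINGS:
            aside.append(p)  # verbless fragment: not coded
        else:
            units.append(p)  # candidate judgment: coded in steps 1 and 2
    return units, aside
```

For example, `segment("Hi there. Patients should take charge of their own health.")` keeps the second sentence as a coding unit and sets "Hi there." aside.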
The first step (1) consists of identifying the logical operations underlying the discourse, such as affirmations, negations, conditionals ("if-then"), conjunctions ("and") and disjunctions ("either-or" and "or-or", i.e. exclusive and inclusive). My previous studies have shown that, in line with the hypothesis presented above, conjunctions and affirmations do not usually trigger a more engaging conversation and are very rarely used in common networked argumentation processes. On the other hand, negations and conditionals create friction and promote further thinking on the subject of conversation [33,45,46]. To disambiguate the analysis, an order of prevalence was established: (1) Conditional, (2) Negation, (3) Disjunction, (4) Conjunction, and (5) Affirmation. If a sentence has a conditional and a negation, it is coded conditional. If a sentence has a negation and an affirmation, it is coded negation. And so forth. This technique has resulted in blind inter-coder reliability rates higher than 95%.
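The prevalence rule can be sketched as a small classifier. The surface-marker keyword lists below are hypothetical simplifications (the actual coding was done by trained human coders reading for meaning); the sketch only shows how co-occurring operations are resolved by the prevalence order.

```python
# Prevalence order from the method: Conditional > Negation > Disjunction
# > Conjunction > Affirmation. Marker lists are invented approximations.
PREVALENCE = [
    ("conditional", ("if", "then", "provided", "unless")),
    ("negation", ("not", "no", "never")),
    ("disjunction", ("or", "either")),
    ("conjunction", ("and",)),
]

def code_sentence(sentence: str) -> str:
    """Return a single code for a sentence: the first category in the
    prevalence order whose marker appears wins."""
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    for code, markers in PREVALENCE:
        if any(w in markers for w in words):
            return code
    return "affirmation"  # default when no higher-prevalence marker occurs
```

Thus a sentence containing both a conditional and a negation, such as "If this is not mandatory, then it fails", is coded conditional, exactly as the prevalence rule prescribes.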
The second step (2) consists of identifying the main components of arguments:
a. Claim: introduction of a contextual situation that expresses concerns or difficulties regarding the practice or beliefs held by the writer, affirming something
b. Data: introduction of facts, statistics, scientific data, research results or other works that have an influence on the practice and would support the claim, or else psychological reasons for standing for an idea
c. Hypothesizing: engaging in a process of hypothesis formulation that provides possible explanations for a claim put forward, consistent with the data, or else questioning somebody. A question is an inverted hypothesis: a question such as «Can anyone explain why nobody answers my question?» has the inverted hypothesis "If nobody answers my question and I do not know why, then if I ask the others perhaps they could explain"; however, a question such as "What the hell are you writing here?" should be seen as a declaration with an exclamatory intention.
These categories were inspired by those identified by Toulmin [37], although we do not adopt his method of analyzing arguments, for reasons that are both epistemological and practical. Epistemologically, [37] adopts a strict empiricism which is inconsistent with cognitive data showing that mental procedures precede declarations. On a practical level, his analysis is fine-grained, and we have found that studies of electronic conferencing data need to be simple in order to be feasible. However, Toulmin's contribution in weaving informal logical reasoning together with practical interests, and in showing that logic should be understood as a mental tool enabling one to make sense of practical contexts, is enormous. For this reason, I incorporated some of his ideas into another epistemological framework of analysis. The goal of this step is to understand how people structure conversation in order to present and make sense of the concepts, notions, and ideas being shared in electronic conferencing.
Coding arguments is difficult, even after simplifying their instances. It is particularly difficult to distinguish a claim from data. In contrast to the first step, which is very rigid, here it is meaning that guides categorization. In spite of this difficulty, we have reached blind inter-coder reliability rates higher than 80%. When the content is "easier" for the coders, rates surpass 90%.
The third step (3) consists of defining the main meanings that are central to the networked argumentation.
To illustrate, if we re-assess the example of the meaning system surrounding the word "bua-bua" presented in the section "The critical problem of the unit of analysis" (see above), what are the meanings that are central to it? Evidently, they are "bottle", "being hungry", and "communicating". Grize explained to me, in a personal communication during the delivery of the 1996 spring course "Natural Logic and Language" at the Graduate Program of the Department of Social Psychology of the Institute of Psychology of the University of Sao Paulo, that there are two main challenges for discourse analysis researchers. The first is that all methods need a formal framework to structure the object of study. Since meanings are rather fluid, a formalizing system should be adopted in order to minimize the fluidness. Grize himself [17] developed natural logic as a structure from which to look at the discursive phenomenon. Secondly, minimizing the fluidness of meanings is not sufficient, because multiple meaning paths can be created in the configurations of meanings emerging from discourse, resulting in completely different interpretations. Grize and Pierault-Le Boniec [47] propose a conceptual tool to address this problem: the concepts, notions or ideas central to a given text are arbitrarily selected, and the analysis then proceeds on only the meaning system related to the chosen theme. For example, take the news broadcast immediately after the September 11 attacks. The broadcasts addressed terrorism, the war on America, and security (its failure and the need for it). When studying such a text, the researcher needs to choose one theme (meaning system) among terrorism, the war on America, or security in order to develop a consistent analysis.
In this step, because we take as a general principle, based on previous research, that negations and conditionals lead to more engaged networked argumentation (Step 1), and that hypothesizing is the gist of human cognitive thinking (Step 2), we identify which themes are in line with those operations of Steps 1 and 2. Next, we arbitrarily choose one of those themes to study the progression of networked argumentation. To pursue the analysis, we apply a method developed in recent years [33,45,46] aiming to understand the relationships between the meanings related to themes, i.e. how meanings are carried through (logical) implications. Implications among meanings are expressed by the formula: "if a meaning C is part of a meaning B which is part of a meaning A, then A implies C in terms of meanings" [1].
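The meaning implication formula can be sketched computationally as a transitive closure over "part-of" links. This is a minimal illustration, not the authors' software; the part-of links are assumed to have been identified by the analyst in step 3, and the example reuses the "bua-bua" meanings mentioned above.

```python
def meaning_implications(part_of: dict[str, set[str]]) -> set[tuple[str, str]]:
    """part_of maps a meaning A to the meanings that are part of it.
    Returns all pairs (A, C) such that A implies C, obtained by chaining
    part-of links transitively (if C is part of B and B is part of A,
    then A implies C)."""
    implies = {(a, b) for a, parts in part_of.items() for b in parts}
    changed = True
    while changed:  # iterate until the transitive closure is reached
        new = {(a, c) for (a, b) in implies for (b2, c) in implies if b == b2}
        changed = not new <= implies
        implies |= new
    return implies

# Hypothetical part-of links for the "bua-bua" meaning system:
links = {"communicating": {"being hungry"}, "being hungry": {"bottle"}}
print(("communicating", "bottle") in meaning_implications(links))  # True
```

Here "bottle" is part of "being hungry", which is part of "communicating", so "communicating" implies "bottle" in terms of meanings.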
For this last step, because it is analytical, only one trial of inter-coder reliability was completed. However, given the continuous process of method development, I plan to introduce an inter-analyzer reliability procedure in the future. We are developing new tools not only to refine the old ones but also to study online affectivity and ethics, under new project grants provided by the SSHRC (Social Sciences and Humanities Research Council of Canada) and the FQRSC (Fonds québécois de recherche sur la société et la culture).

A. Introduction
In order to demonstrate the method, I present an application in a networked community supported by the Knowledge Forum conferencing system. First, I present the context of the building of the networked community and the essential features of the software that had an impact on interaction. Second, I provide some basic information about the data collected. Third, I present the results emerging from my methodology.

B. Research Context
The research was carried out in collaboration with the Order of Nurses of Québec (OIIQ), whose partners were the Centre for the Informatization of Organizations (CEFRIO) and a number of Canadian hospitals from the provinces of Québec, Ontario and New Brunswick. The Order of Nurses of Québec was interested in promoting ways for nurses with expertise in heart care to share knowledge as a means to advance nursing practice. Over a period of six months, this pan-Canadian networked community of practice, comprising 34 French-speaking nurses, engaged in what we verified to be a problem-solving and knowledge-building process. Their goal was to develop ways to improve nursing practices using networked communication technology [48]. The software used for communication was Knowledge Forum.

C. Conferencing System Used
Knowledge Forum is a conferencing system developed by cognitive psychologists with the goal of enabling knowledge building [12]. The version used by the community allowed the participants to access the database either through the web or through a client program installed on the user's local computer. Most nurses accessed the database using the client, which has the added benefit of graphical displays representing the tree-like structure of the community discourse as a colourful web of nodes. In addition to features common to many conferencing systems (such as key-wording and word search), Knowledge Forum has others that make it unique. It is our belief that those features had an impact on the way the nurses communicated and worked together.
Here are examples of tools that were important for the nurses' interaction:
a) Editing: users can edit their messages even after they are posted. Different colours signal whether a message is new, read, or edited.
b) Annotation: users can annotate messages through the creation of messages within messages. The difference between a message and an annotation is that the latter does not have the other features bound to messages, such as key-wording, problem identification, and scaffolding (it is just a text box).
d) Scaffolds: users can build and insert tags within the text. Scaffolds allow users to categorize their own thinking. A number of scaffolds related to instances of argumentation were negotiated between the researcher and the nurses, in line with an approach to the participatory design of communities [49], in order to enable discourse structuring. Those tags were:
a. Problem (claim): same meaning as in "Presentation of the Method", Step 2 (presented above)
b. Data: same meaning as in "Presentation of the Method", Step 2 (presented above)
c. Envisaged solutions: hypothesizing (partial meaning of "hypothesizing" introduced above in "Presentation of the Method", Step 2)
d. Questioning: formulation of interrogations or inverted hypotheses (partial meaning of "hypothesizing" introduced above in "Presentation of the Method", Step 2)
e. Opinions: offering judgments concerning claims, data, questioning or envisaged solutions, presented to explicitly react to others
e) Rise-above: users and/or facilitators are able to synthesize a number of messages that are related by meaning, packaging them within an upper folder, i.e., they are able to create a subfolder within the main forum. The Rise-above tool allows users to organize and group messages. The icons are similar to those of messages but with a slight variation. In the figure above, the rise-above folders are at the bottom of the image.
It is worth noting that the nurses of the community not only wrote messages but also actively used other tools. For example, there are approximately as many annotations as messages.

D. Data
The entire database consists of 545 messages, approximately the same number of annotations, and nine conferences. For the purposes of this study, we chose one excerpt from each of two conferences: one in which problems related to heart care practices were identified (16 messages, corresponding to 11.51% of the 139 messages of the conference), and another in which the nurses worked together to prepare deliverables addressing the problems identified (19 messages, corresponding to 13.29% of the 143 messages of the conference). In the first excerpt the nurses explored the difficulties of engaging patients in the prevention of heart failure by encouraging them to share the responsibility for their treatment and by participating in the development of nursing strategies that could help their condition. In addition to the messages, the nurses added 19 annotations. In the second excerpt, the nurses prepared a teaching instrument to be handed to patients to help them control the symptoms and signs of heart failure and thereby enable auto-surveillance. In addition to the messages, the nurses added 36 annotations.
The criteria for choosing these two conferences were (1) that the nurses discussed what they considered to be important issues in heart care in the first conference, which was the discussion start-up, and (2) that the issues identified in the first conference were discussed more deeply in the second. It was in this second conference that a heart care kit was conceptualized and produced, to be made available through the website of the Order of Nurses at http://www.infirmiere.net/nouveau_infvir/contenu/sante_coeur/index.htm to help the public take charge of their own heart health. It is important to note that there is continuity between the selected excerpts of the first and second conferences. We limited the number of messages studied because of the length and complexity of analyzing hundreds of messages, and because studying just a portion of the database was enough to identify conceptual change, collaborative learning and knowledge building.

E. Descriptive Analysis
In the thread of the first conference, 131 sentences (judgments) were identified. In the thread of the second conference, 298 sentences were identified. Only the texts of the conference messages were coded. Although the nurses also wrote a significant number of annotations, few of them could stand as "messages". Most annotations were simply manifestations of agreement, with texts such as: "I like this scale: it is simple and it can be used by everybody". In addition, Knowledge Forum does not show when a given annotation was written (hour and date). This technicality creates a real problem when the researcher seeks to understand the progression of communication, because chronological information is needed. In order to incorporate the richness of those contributions, the content of the annotations is considered when identifying the themes in step 3.

Step 1
In the thread of the first conference, affirmations account for 71%, conditionals for 9%, negations for 3%, and ambiguous sentences for 17%. In the thread of the second conference, affirmations account for 42%, conditionals for 16%, negations for 1.5%, disjunctions for 0.5%, and ambiguous sentences for 40%. After excluding ambiguous sentences, in the first thread affirmations account for 85%, conditionals for 11%, and negations for 4%; in the second thread, affirmations account for 71%, conditionals for 26%, negations for 2%, and disjunctions for 1%.
Ambiguity was high because in the second conference the nurses were building a heart health kit (see the meaning implication analysis, step 3), and either (1) the phrases were "verbless", i.e. words or groups of words presented on separate lines (such as "insomnia", "loss of energy", etc.), or (2) the phrases were copied verbatim from a message written in another conference and pasted into a new message using the Knowledge Forum "quoting" tool.
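The renormalization used above (percentages over all sentences, then recomputed after excluding ambiguous sentences) can be sketched as follows. The first-thread counts below are approximate reconstructions from the reported percentages and the 131-sentence total, not the original tallies.

```python
def percentages(counts: dict[str, int], exclude: tuple[str, ...] = ()) -> dict[str, float]:
    """Percentages of each code over the total, optionally excluding
    some codes (e.g. ambiguous sentences) before renormalizing."""
    kept = {k: v for k, v in counts.items() if k not in exclude}
    total = sum(kept.values())
    return {k: round(100 * v / total, 1) for k, v in kept.items()}

# Reconstructed (approximate) first-thread counts: 93 + 12 + 4 + 22 = 131
first_thread = {"affirmation": 93, "conditional": 12, "negation": 4, "ambiguous": 22}
print(percentages(first_thread))
# → {'affirmation': 71.0, 'conditional': 9.2, 'negation': 3.1, 'ambiguous': 16.8}
print(percentages(first_thread, exclude=("ambiguous",)))
# → {'affirmation': 85.3, 'conditional': 11.0, 'negation': 3.7}
```

The renormalized figures match the reported 85%/11%/4% after rounding.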

Step 2
In the thread of the first conference, claims account for 18%, data for 69%, hypothesizing for 12%, and eliminated sentences for 1%. In the thread of the second conference, claims account for 4%, data for 50%, hypothesizing for 11%, and eliminated sentences for 35%, for the same ambiguity reasons presented above. Excluding the eliminated sentences from the calculation, in the thread of the first conference claims account for 18%, data for 70%, and hypothesizing for 12%; in the thread of the second conference claims account for 7%, data for 77%, and hypothesizing for 16%.
Inter-coder reliability based on the Miles and Huberman [50] procedure (two coders) was achieved with the following results: (1) for step 1, 89.31% in the first conference and 95.97% in the second conference, a mean of 92.64%; (2) for step 2, 88.55% in the first conference and 95.30% in the second conference, a mean of 91.92%. Coding criteria, as explained above, were established by applying strict definitions for the categories and a scale of category prevalence based on operational logic [3] for use in ambiguous cases. I use the word "operational" with the meaning of the French word "opératoire" as employed by Piaget [3].
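As an illustration of the reliability computation, here is a minimal percent-agreement sketch in the spirit of the Miles and Huberman [50] procedure; the sample codings below are invented for demonstration.

```python
def intercoder_reliability(coder_a: list[str], coder_b: list[str]) -> float:
    """Percent agreement between two coders over the same sentences:
    number of agreements divided by total number of codings, times 100."""
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must rate the same sentences")
    agreements = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * agreements / len(coder_a)

# Invented example: two coders disagree on one of five sentences.
a = ["claim", "data", "data", "hypothesizing", "data"]
b = ["claim", "data", "claim", "hypothesizing", "data"]
print(round(intercoder_reliability(a, b), 2))  # 80.0
```

Disagreements are typically discussed and resolved against the strict category definitions and the prevalence scale before reporting the final rate.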

F. Meaning Implication Analysis: Step 3

1. Thread of the first conference (At the heart of our exchanges)
The first forum discussions were triggered by a message presenting the statement
A. Patients should take charge of their own health (under scaffold "problem")
and the hypothesis
B. If patients have to take charge, then nursing strategies should be adopted (under scaffold "questioning").
Around this pair of statements, five main themes emerge in the threaded messages, either from the claim ("Best models in heart re-adaptation", "Convincing", and "Rewards") or from the hypothesis ("Contracts" and "The pair counseling/teaching instrument on heart care"):

a. "Contracts" theme
In the 7 threaded messages of this theme, most discussions were triggered by sentences coded hypothesizing (formulating hypotheses related to envisaged solutions). The theme was explored in order to find ways to encourage patients to agree to take care of their health through contracts: Should a contract be mandatory? How could a contract engage the ill? Is a contract a real solution? Is a contract motivating? Is a contract an instrument of awareness? What would be the role of doctors in such a contract? Would that role be controlling or engaging? How critical are the implications for doctors if such an idea were adopted? The responsibility of doctors was further explored through a "thread" of annotations built around those messages.
C1 by JL - If strategies are needed, then why not request patients to sign a contract? (under no scaffold)
C2 by HB - At Hospital X patients sign a contract (under no scaffold)
C3 by YJ - If there is a contract, then there is a solution for engagement (under scaffold "envisaged solutions")
C4 by FB - In my hospital there are agreements that do not always work: provided this, then a contract will have the effect that patients take charge (under no scaffold)
C5 by GB - I am ambivalent because if changes should be made, then awareness would be as beneficial (under scaffold "questioning")
C6 by LB - In our clinic, we require registration to oblige awareness. However, this does not change life habits (under scaffold "envisaged solutions"). If this is so, then would a contract help, or be a source of culpability for a patient incapable of adjusting? (under scaffold "questioning")
C7 by FB - Our contract is done on the basis of what habits the person is prepared to change within 12 weeks. At the end, we assess with the patient what has been achieved (under no scaffold). If a person changes because of feelings of culpability, then this person will not be able to sustain the changes for 12 weeks (under scaffold "opinion")
Application of the meaning implication formula: MI_1 = B (if patients have to take charge, then nursing strategies should be adopted) → C7 (the contract that is done on the basis of what habits the person is prepared to change within 12 weeks, with an assessment of what has been achieved with the patient, would be a solution because a person does not change because of feelings of culpability but because of what this person is able to sustain (awareness) within 12 weeks).

b. "Best models in heart re-adaptation" theme
In the 2 threaded messages representing this theme, most discussions were triggered by statements made in sentences coded hypothesizing. Many models of heart re-adaptation were presented, as well as proposals to integrate several models or to adopt one of them. This theme was addressed extensively in the discussions surrounding the themes "Convincing" and "Rewards."
BM1 by CB - (referring to different models of heart rehabilitation, particularly one conceptualized by Pender) My experience is that we have to take the best of each model (of patient responsibility) (under no scaffold)
BM2 by HB - I agree that we should take the best of each model, and the Pender model puts together all components of the previous ones (under scaffold "opinion")

Application of the meaning implication formula: IF [A] THEN [BM1 → BM2] THEN IF [A] THEN [BM2]
MI_2 = A (patients should take charge of their own health) → BM2 (we should take the best of each model and the Pender model put together all components of the previous ones).

c. "Convincing" theme
One single message presented this theme as data, through the presentation of a case. A configuration of annotations was built around this message to hypothesize which element "x" would motivate and convince a patient of the need to take care of heart health, and how to understand resistance to care.
CT by FB - Patient C had an infarction and participated for the third time in the "agreement" program to learn how to take charge of his/her health because he/she was not convinced (under scaffold "data"). Why do some people not understand? (under scaffold "questioning"). If the person perseveres, then there is always hope (under scaffold "opinion").

Application of the meaning implication formula: IF [A] THEN [CT]
MI_3 = A (patients should take charge of their own health) → CT (through the presentation of a case we come to the conclusion that it is always possible to succeed).

d. "Rewards" theme
This theme, represented in 4 messages and a "universe" of 11 annotations, was built around the discussion of cases and the need to provide "rewards" that would serve as a motivating element for the ill to take charge of their own health. Annotations were used to illustrate the use of rewards and to comment on it. Here, ideas were triggered equally by statements coded as hypothesizing and by statements coded as data. The theme originated discussions about types of rewards, the implications of rewards, how to motivate patients, and the psychosocial factors involved, as well as strategies to enable this "technique" and models of behavioral change that should be adopted.
R1 by YJ - A case of diabetes is presented in which a hopeless patient ends up believing that after losing weight his/her life could change (under scaffold "problem"), and if motivation helped him/her to lose weight, then people need a reward (under scaffold "envisaged solutions")
R2 by PL - How, then, could we use rewards? If we use them with another patient, will we prove their pertinence? (under scaffold "questioning")
R3 by JH - Putting together different models would be an interesting solution (under scaffold "opinion"), and I propose the following data on motivation factors in behavior change (a copy of course notes follows) (under scaffold "data")
R4 by JH - Here are a number of models to help us think (under scaffolds "envisaged solutions" and "data")
Application of the meaning implication formula: MI_4 = A and B (patients should take charge of their own health and, if so, then nursing strategies should be adopted) → R4 (a number of models to help us think).

e. "The pair counseling/teaching instrument on heart care" theme
One message with two annotations is built around an envisaged solution according to which teaching instruments, such as a guide of auto-surveillance, might be considered as a type of counseling.
PCT by GB - Changing one's behavior is complex (under scaffold "problem"). Data show that most patients who go back home are unable to follow instructions (under scaffold "data"). If we consider this, then I think that a heart health kit would be an innovative solution (under scaffolds "opinion" and "envisaged solutions").

Application of the meaning implication formula: IF [B] THEN [PCT]
MI_5 = B (if patients have to take charge, then nursing strategies should be adopted) → PCT (a heart health kit would be an innovative solution). In other words, the problems discussed and the hypotheses formulated led to a concrete proposal: that of building a heart health kit. This nursing tool is then built in the second thread, whose analysis follows.

2. Thread of the second conference (Heart health kit)
Discussions in the second conference were triggered by a message from the facilitator presenting the following hypothesis, subjacent to two questions: C - If an instrument should be built to enable patients' auto-surveillance, then what signs and symptoms should be addressed, and which actions should be taken when patients identify them? (under scaffold "questioning"). Linked to this message, we find four themes ("Building the instrument: calendar", "The pair priority information and scale", "Learning auto-surveillance", and "Symptoms"):

a. Building the instrument: calendar
In the 4 messages of this thread, an extremely careful attempt to build the instrument enabled a discussion about the adequacy of creating a calendar that would serve as an organizer of the auto-surveillance process for patients with heart insufficiency. Most messages' hypotheses (hypothesizing) were followed by claims and data.
BI 1 by MJ - If we have the content, as we do, then it needs to be included somewhere: a calendar given to the patient could structure his/her action and make him/her take notes of weight, amount of liquid drunk, pressure, medication taken, and telephone help numbers, as well as be useful for medical personnel (under no scaffold)
BI 2 by CV - (A full list of symptoms to be watched on a monthly basis is produced): if we focus on self-management and the importance of the patient taking charge of his/her own health, then this tool will be very useful (under no scaffold)
BI 3 by SH - This idea is excellent, but if it is prepared on a monthly basis, then old people will not be able to read this whole list written on a single page (under no scaffold)
BI 4 by MJ - Information should be limited to the essential (a full list is presented along with a calendar format proposal), and if this is the case, then the calendar should be monthly but with a page for each week to help old people read, as well as images to guide those who do not read and write (under scaffold "opinion"). (In sum, information should be limited to the essential, and the calendar should be prepared so as to help old people read as well as to guide those who do not read and write.)

b. The pair priority information and scale
In these two messages, surrounded by six annotations, the nurses listed priority information for the auto-surveillance instrument and hypothesized which scales, and how, would enable patients to evaluate their own health. Claims, data, and hypothesizing were equally balanced in the messages and annotations.
PPI 1 by MJ - The problem is to help the patient with heart insufficiency to take charge (under scaffold "problem"); if we include the information in a kit, then the patient will be able to recognize signs and symptoms of a possible heart failure (under scaffold "envisaged solutions"); here is the list (the facilitator lists signs and symptoms) (under scaffold "data"); and, if so, then it would be interesting to provide an evaluation tool along with indications and recommendations for the patient (under scaffold "opinion"). PPI 2 by YJ - (A full form is proposed, including all elements discussed beforehand by the others) (under no scaffold).

d. Symptoms
A synthesis message combining 8 new messages and a number of annotations gathered messages whose main themes were signs and symptoms: their definitions, types, occurrence, etc. A few messages from the three themes presented above ("Building the instrument", "The pair priority information and scale", and "Learning auto-surveillance") that discussed signs and symptoms are also present in this Rise Above. Most messages present data to be included in the tool kit.

A. Collaborative Learning and Knowledge Building
The quantitative data of the first conference reveal that most sentences were affirmations, while the number of conditionals was modest and that of negations negligible (step 1). Crossing them with instances of argument (step 2), it is not surprising to find that claims, and data to support claims, were prevalent. The level of hypothesis formulation was modest (shown by the dyad conditionals-hypothesizing). Although hypothesizing is an indicator that higher order thinking processes are under way, the analysis of the transcripts shows that the nurses were careful in formulating hypotheses. Because of their strict scientific training, they did not rely on guesses, even informed ones, to formulate hypotheses. Their work follows strict scientific rules, and they kept their focus on how practical experience could add to the scientific knowledge they apply regularly in their activities. Therefore, they searched for reliable information (shown by the dyad affirmations/negations/disjunctions-data) to make their point. However, the meaning implication analysis (qualitative), which is anchored in explicit and implicit conditionals/hypotheses, demonstrates that the nurses did formulate alternative hypotheses regarding the inconsistency of many methods in accounting for the care of people with heart insufficiency and which model would be the most appropriate. In addition, they discussed practical problems related to method application, developing many concrete proposals such as the contracts, the need to convince the ill to take charge of their own health, the role of rewards in this process, and, most important of all, the need to build a heart health kit to be made available to the ill and to medical personnel.
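The two quantitative steps can be sketched programmatically. The sentence codes below are invented for illustration and do not reproduce the actual transcripts or coding scheme:

```python
from collections import Counter

# Illustrative sketch of coding steps 1 and 2. Each sentence is tagged
# with a logical operation (step 1) and an argumentation instance (step 2);
# the tags below are hypothetical examples.
coded_sentences = [
    {"logic": "affirmation", "argument": "claim"},
    {"logic": "affirmation", "argument": "data"},
    {"logic": "conditional", "argument": "hypothesizing"},
    {"logic": "negation",    "argument": "claim"},
    {"logic": "affirmation", "argument": "data"},
]

# Step 1: tally the logical operations underlying the sentences.
logic_counts = Counter(s["logic"] for s in coded_sentences)

# Step 2: cross logical operations with instances of argumentation,
# e.g. the dyad conditionals-hypothesizing.
dyads = Counter((s["logic"], s["argument"]) for s in coded_sentences)

print(logic_counts["affirmation"])              # 3
print(dyads[("conditional", "hypothesizing")])  # 1
```

The third, qualitative step (meaning implication analysis) remains interpretive and is not captured by such tallies.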
The thread of the second conference is a continuation of the first. This continuum is a problem solving process: the nurses moved from a phase of problem identification to a phase in which they worked on a deliverable that resulted from the discussion (the heart health kit). The quantitative data of the second conference reveal that, although most sentences were affirmations, the amount of conditional reasoning was high, followed by a negligible number of negations and disjunctions. The argumentation process was built upon a smaller number of claims but upon an increased number of data and hypotheses. Here, I found that although the thread of the first conference was created to explore and identify problems, the level of suggested inquiry (shown by the dyad conditionals-hypothesizing) was lower than that of the second conference thread, in which the nurses worked collaboratively on building a teaching instrument for patients (the heart health kit). The meaning implication analysis (qualitative) demonstrates why hypothesizing was higher in the second conference, in which the nurses worked on and agreed upon a deliverable related to patients' self-surveillance. Building a heart health kit for the public is a very serious activity: the nurses were extremely careful in the discussion and reflected on each item that should be covered. Items were checked and re-checked to verify their scientific soundness, pertinence, appropriateness, and usefulness. In addition, at the end of the discussions, a validation process was carried out. Hypotheses were frequently formulated to question whether an item, or aspects of it, was scientifically sound, pertinent for inclusion in a heart health kit, appropriate for the ill, and useful, i.e. practical either for the patient or for the medical personnel responsible for him/her.
It is possible to understand the knowledge building process, and whether or not collaborative learning took place, by looking at the application of the meaning implication formula to check meaning continuity between the chosen threads of the first and second conferences.

Meaning implication analysis of both threads of conferences 1 and 2:

IF [IF MI_1 THEN MI_5] =
[A & B (patients should take charge of their own health and, if so, then nursing strategies should be adopted)] → [PCT (a heart health kit would be an innovative solution)],

THEN [IF MI_6 THEN [MI_7 & SYMP]] =
[C (if an instrument should be built to enable patients' auto-surveillance, then what signs and symptoms should be addressed, and which actions should be taken when they identify them?)] → [LA 4 (to avoid crisis the patient must learn to self-assess his/her health; if so, then the elements to be assessed are listed by the nurse through a new form proposal for the kit tool, annotated extensively by the other nurses)] & [SYMP (symptoms are listed and thoroughly discussed to be included in the kit)].

IF SO, THEN IF MI_1 THEN [MI_7 & SYMP].
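Stripped of its content, the skeleton of this composite implication can be written in propositional shorthand (my notation):

```latex
\[
\bigl(MI_1 \rightarrow MI_5\bigr)
\;\Longrightarrow\;
\bigl(MI_6 \rightarrow (MI_7 \wedge \mathit{SYMP})\bigr),
\qquad \text{hence} \qquad
MI_1 \rightarrow (MI_7 \wedge \mathit{SYMP}).
\]
```

The final step is an implication of meaning, not of strict logic: part of the meaning of MI_7 and SYMP can be traced back through the second conference to MI_1.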
How do I assess collaborative conceptual (or notional, or idea) change and (higher order) learning using this method? In the sequence of exchanges, the nurses reflected on the identified themes, resulting in the integration of a new set of meanings into their practices. This renewed discourse allowed them to affirm knowledge already known and to transform concepts (such as models), notions (such as self-surveillance), as well as ideas (the heart health kit). The method provides information on, and operational relationships among, meanings expressed through concepts, notions or ideas as clear evidence that a process of collaborative learning took place in the networked community of nurses.
How do I assess knowledge building? It is evident from the data presented that, through argumentation, the change in the ensemble of concepts, notions and ideas (meaning configurations [33]) became more or less stable (a process of equilibration). There were opportunities for the nurses to construct and re-construct their understanding. Moreover, this progressive co-constructive process led to the co-elaboration of the heart health kit, a concrete object of knowledge. Although deliverables are not, per se, evidence of knowledge building, the nurses were able to co-construct a unique product resulting from truly collective minds. In addition, the reasonably strong hypothesizing process apparent in the data supports the high level of conceptual, notional and ideational constructions and reconstructions found in the online discussions.

B. Value and Limits of the Method
As we saw in the section "Transcript analysis methods", studies on progressive networked discourse are recent, exploratory, and largely unreliable. This reliability issue touches on a number of different aspects, such as epistemological, theoretical and methodological coherence, the adoption of adequate units of analysis, and inter-coder reliability procedures to enable replication of at least the formal dimension of discourse (in the case of my method, a guide was developed for the coders explaining the way form is revealed through logical operations and providing definitions of the instances of argumentation). My method has a number of advantages and limitations. Among its advantages, I list the following: a) it is epistemologically and theoretically sound, because all levels of analysis result from a coherent system; b) it adopts a unit of analysis that is consistent with the epistemology and theory used; c) it allows a high level of inter-coder reliability on steps one and two, respectively the identification of underlying logical operations and of basic instances of networked conversation (argumentation); d) it integrates quantitative data into the qualitative analysis of the progression of networked communication; e) it results in reliable analyses of networked collaborative learning and knowledge building processes. However, there are shortcomings that should be recognized and addressed. First of all, it should be noted that there are innumerable ways of doing discourse analysis, and that each method addresses different questions. Goals vary enormously, and obviously a method designed to address one specific set of problems might not be suitable for addressing another set. I would like to clarify some aspects of this method that might emerge from the reading of this article.
The first one relates to its use. It is understandable that professors and teachers are (still) expecting conferencing tools that will enable them to assess learning for evaluation purposes. It seems that some researchers developed methods to assess learning in search of practical evaluation tools. My method has no intention whatsoever, at least at this point, of providing evaluation tools for educators, although it assesses processes that are important for pedagogy.
The second aspect relates to its goals. The method provides a tool to assess the level of cognitive communication in networked processes (related to the construction of scientific as well as popular knowledge). The outcome indicates which knowledge level a given online community has reached or could reach (broadcasting or informational, collegial or cooperative, interpretive or collaborative; cf. [15]). It is, thus, a research methodology based on a scientific (genetic) epistemology developed within the interests of communication sciences. Because of the contemporary extended use of networked cognitive communication, there is a natural interdisciplinary overlap with cognitive sciences, education, and managerial sciences. Anything that goes beyond the designed goals cannot be assessed by this method.
The third aspect relates to its application. This method is being used in the study of networked communities built by both educational and professional organizations. Its goal is to understand networked knowledge building processes in order to identify the role of the conferencing systems used, as well as the participation strategies of facilitators and users. The objective is to come to some understanding of the cognitive procedures around networked communication processes and to draw recommendations for networked community building and development. However, its application is difficult. I do not rely on regular discourse analysis software or on computerized text analysis, but I am considering some kind of partial automatic procedure in the future.
The fourth aspect relates to inter-coder reliability. The method achieved a very consistent level of inter-coder reliability in the first two coding steps. Our inter-coder reliability procedure is the following: two coders blindly code the conference transcripts (i.e. without knowledge of each other's coding).
Concerning the second step (argumentation), it should be noted that the way Knowledge Forum was used was problematic: the scaffolds used by the nurses might have had an influence on the way the coders coded the sentences. Independently of this shortcoming, results were normally above 80%. When results did not reach that level, a third coder was requested to code the transcripts, and I then took the highest inter-coder agreement. However, no inter-coder reliability procedure has been developed with regard to the third step, the meaning implication analysis. Given that this step is based on meanings, and meanings cannot be fully formalized, it is very difficult to achieve inter-coder reliability. Because the Piagetian formula is based on the partiality of meanings (theme) that can be perceived in a sentence but, at the same time, formally structured, I have not abandoned the idea of structuring an analytical procedure in such a way that at least something between 60% and 80% inter-coder reliability could be achieved.
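As an illustration, the agreement check can be sketched as follows. The codes and the use of simple percent agreement are assumptions made for the example, since the exact agreement statistic is not formalized here:

```python
# Illustrative sketch of the inter-coder agreement procedure described
# above. The code labels below are hypothetical examples.

def percent_agreement(codes_a, codes_b):
    """Share of units to which two coders assigned the same code."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

coder_1 = ["claim", "data", "hypothesizing", "claim", "data"]
coder_2 = ["claim", "data", "hypothesizing", "data", "data"]

agreement = percent_agreement(coder_1, coder_2)
print(agreement)  # 0.8

# If agreement falls below the 80% threshold, a third coder codes the
# transcripts and the highest pairwise agreement is retained.
if agreement < 0.8:
    coder_3 = ["claim", "data", "hypothesizing", "claim", "data"]
    agreement = max(percent_agreement(coder_1, coder_3),
                    percent_agreement(coder_2, coder_3))
```

Such a check applies only to the first two, formalizable coding steps; the third step resists this kind of mechanical comparison.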
In spite of these limitations, I believe that this method is a step forward in the search for reliable research tools for the analysis of networked communication processes.

VI. CONCLUSION
The method presented in this paper enables the assessment of conceptual change, collaborative learning and knowledge building through the study of networked cognitive communication. It enables researchers to verify logical instances of knowledge building with a view to capturing progressive communication through argumentation processes in electronic conferencing. When people use language on a daily basis, they naturally structure their thoughts. I explain this natural structuring as follows. According to Piaget [44], logical operations abstractly express the fundamentals of reasoning. Simply put, an idea is or is not (A - affirmation, as opposed to ¬A - negation), an idea can be connected to another one (A ∧ B - conjunction), an idea can be put in an either-or situation in exclusive or inclusive ways (A ∨ B - disjunction), and an idea can implicate another one (A → B - implication). On top of the logical structures that underlie thinking processes, of which most people are unaware, meanings are carried. The interconnection between logical structures and progressive written discourse allows collaborative conversation (argumentation) to emerge. Therefore, instances of argumentation such as claims, data, and hypothesizing were naturally integrated as formal procedures into the nurses' online conversation.
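The four operations can be illustrated as truth-functional procedures. This is a minimal sketch of the abstract operations only; the function names are mine, not Piaget's:

```python
# Truth-functional sketch of the logical operations named above.

def negation(a):        return not a            # an idea is not (¬A)
def conjunction(a, b):  return a and b          # A ∧ B
def disjunction(a, b):  return a or b           # A ∨ B, inclusive either-or
def exclusive_or(a, b): return a != b           # exclusive either-or
def implication(a, b):  return (not a) or b     # A → B

print(implication(True, False))  # False: the only case where A → B fails
print(implication(False, True))  # True: a false antecedent implies anything
```

In the method, of course, these operations are identified as underlying structures of natural-language sentences rather than computed over truth values.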
The nurses engaged in an ill-defined problem solving process. A problem implies a hypothetical structure that needs human inference (consistent use of conditional reasoning through hypothesis formulation or inferences) to be solved. The nurses participated, engaged in reading and writing, formulated hypotheses, and made inferences about what others meant in their messages. The conditional reasoning underlying hypothesis formulation or inferencing consists of If-Then operations. That is, if a given A is written by X, then the reaction of Y to X will be B, and this reaction will be intentionally expressed through a response. If W reacts to Y by intentionally responding C, then this process of networked hypothesis formulation or inferencing (or both) can be expressed by the notion of meaning implication. If B leads to C, and given that A previously led to B, then A implies C in terms of meaning, because part of the meaning of C can be found in B, and part of the meaning of B can be found in A.
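A minimal sketch of this transitive chain of meaning, assuming a hypothetical reply structure in which each message reacts to a previous one:

```python
# Sketch of meaning implication: if message A led to B and B led to C,
# part of the meaning of A is carried into C. The labels and reply
# structure below are hypothetical.

replies = {"A": "B", "B": "C"}  # each key provoked the response in its value

def meaning_implies(source, target, replies):
    """True if a chain of responses leads from source to target."""
    current = source
    while current in replies:
        current = replies[current]
        if current == target:
            return True
    return False

print(meaning_implies("A", "C", replies))  # True: A implies C through B
```

The actual analysis, of course, also requires judging how much of each message's meaning the response takes up, which no reply graph alone can decide.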
The method introduced in this paper is a contribution for researchers who are working with stacks of conference transcripts and want to understand how communication between participants progressed. It has its limitations, but I consider that the advantages outweigh them. By anchoring conceptual change, learning and knowledge building in the concrete elements of hypothesis formulation/inferencing, the method offers a sound basis for analysis. Logical forms can be clearly identified, as well as the logical relationships underlying meanings, which means that this method enables the researcher to go beyond content analysis. It is a new proposal of discourse analysis, based on cognitive theory as applied to networked communication.

Figure 1 - The editing tool provides visual cues for the reader when a message is modified.

Figure 2 - The annotation tool allows users to insert comments related to the messages. Small yellow icons open annotation windows when clicked.

Figure 3 - The quoting tool allows users to grab and drag text from the original message into another one. After being dragged, the text is inserted between quotation marks and in italics.

Figure 4 - The scaffolding tool allows the creation of tags indicating an action. The text is placed between yellow brackets.

Figure 5 - The Rise-above tool.

IF [A & B (patients should take charge of their own health and, if so, then nursing strategies should be adopted)] → [C7 (the contract, done on the basis of which habits the person is prepared to change within 12 weeks, with an assessment of what has been achieved with the patient, would be a solution, because a person does not change because of feelings of culpability but because of what he/she is able to sustain (awareness) over 12 weeks)] → [CT (through the presentation of a case we come to the conclusion that it is always possible to succeed)] → [R4 (a number of models to help us think)] → [BM2 (we should take the best of each model; the Pender model puts together all components of the previous ones)] → [PCT (a heart health kit would be an innovative solution)], THEN IF MI_1, THEN MI_5.
c. Learning auto-surveillance
Four messages surrounded by 23 annotations (19 related to the fourth message) discussed extensively different aspects of the instrument, such as different kinds of scales, their advantages and disadvantages, useful suggestions, how to address the vulgarization of scientific information, and symptoms such as fatigue and increased weight, among others. The discussion was mainly constructed around data presentation, but claims and hypothesizing were equally important in messages as well as in annotations. LA 1 by YJ - (The nurse summarizes the functioning of the heart pumping system and related symptoms) (under scaffold "data"). LA 2 by LJ - According to the gravity of the patient's condition, symptoms can have different levels of intensity; if so, then a scaling tool should be provided to the patient to enable him/her to evaluate the gravity (under no scaffold). LA 3 by CG - (The nurse presents technical parameters that make the others recognize that he/she did not read the messages) (under scaffold "data"). LA 4 by LL - To avoid crisis, the patient must learn to self-assess his/her health (under scaffold "problem"). If so, then the elements to be assessed are listed (the nurse lists them through a new form proposal for the kit tool, annotated extensively by the other nurses) (under scaffold "envisaged solutions").

IF MI_6, THEN [MI_7 & SYMP].

IF [C] THEN [SYMP]:
MI_8 = [C (if an instrument should be built to enable patients' auto-surveillance, then what signs and symptoms should be addressed, and which actions should be taken when they identify them?)] → [SYMP (symptoms are listed and thoroughly discussed to be included in the kit)].

IF [C (if an instrument should be built to enable patients' auto-surveillance, then what signs and symptoms should be addressed, and which actions should be taken when they identify them?)] → [BI 4 (information should be limited to the essential (a full list is presented along with a calendar format proposal) and, if this is the case, then the calendar should be monthly but with a page for each week to help old people read, as well as images to guide those who do not read and write)] → [PPI 2 (a full form is proposed, including all elements discussed beforehand by the others)] → [LA 4 (to avoid crisis the patient must learn to self-assess his/her health; if so, then the elements to be assessed are listed by the nurse through a new form proposal for the kit tool, annotated extensively by the other nurses)] & [SYMP (symptoms are listed and thoroughly discussed to be included in the kit)], THEN