What do machines do for us?

This view is consistent with their experiences of most powered mechanical devices. Although most students will have had common experiences with simple machines such as levers and pulleys, few will have any understanding of why their design provides an advantage or how they are best employed.

Many students also have difficulty identifying or explaining these experiences to others, and rarely identify parts of the human body, such as the arms or legs, as systems of levers. The word machine has origins in both Greek and Latin. The basic purpose for which most simple machines are designed is to reduce the effort force required to perform a given task. To achieve this, the applied force must act over a longer distance or period of time, so that the same amount of work is performed by a smaller force.

Screws, levers and inclined planes are designed to increase the distance over which the reduced force acts, so that we can push or pull with less effort. A lever consists of a stiff beam that rotates around a fixed pivot point, the fulcrum, located somewhere along the beam. Motion at one end of the beam results in motion at the other end in the opposite direction. The location of the fulcrum can magnify or reduce the force applied at one end, at the expense or advantage of the distance over which the other end travels.

A wedge is often used to split, cut or raise heavy objects, depending on the angle of its sides. A wheel and axle combines a wheel with a central fixed axle, which ensures that both rotate together. A small force applied at the edge of the wheel is converted by rotation into a more powerful force at the smaller axle. The effect can be reversed by applying a large force to the smaller axle, producing a smaller force, with much greater rotational speed, at the edge of the larger wheel.

With a screw, the rotation of a threaded shaft can be converted into movement in either direction along the axis of rotation, depending on the direction of its spiral thread. Screws are commonly used with gears or as fastening mechanisms. A pulley is commonly used to raise or lower heavy objects. However, there is some discord regarding the exact classification of simple machines. Some engineers, for instance, call gears the seventh simple machine.

Physicists classify simple machines into two broad categories: levers and inclined planes. Regardless of the many different classifications, there is one common thread: all simple machines make work easier.

One way to measure the magnitude by which simple machines make work easier is to calculate mechanical advantage. Students are challenged to complete the A Simple Solution for the Circus activity, which relates simple machines and efficiency under several constraints. There is one unifying concept behind mechanical advantage for all six simple machines, but each machine's value is calculated differently. In the following two lessons of this unit, students learn how to calculate the mechanical advantage of each machine.

The overarching theme is that we want to know how much less force is needed to do the same amount of work. The mechanical advantage (see the equation below) is the ratio of the force applied without a machine to the force applied with a machine to do a particular amount of work:

mechanical advantage = (force without machine) / (force with machine)

In this lesson, when calculating work and mechanical advantage, we use metric units. There are three units of measurement needed throughout the Simple Machines unit.

Force is measured in newtons, named after Isaac Newton, who is considered the father of classical mechanics. The second unit is the meter, which has Greek and Latin origins. The final unit describes the product of newtons and meters: the joule, named after the 19th-century physicist James Prescott Joule, who studied heat and related that phenomenon to energy. Interestingly, heat, energy and work all use the same unit of measurement: joules.
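To make these units and ratios concrete, here is a minimal sketch in Python; the function names and numbers are illustrative, not part of the lesson:

```python
def work(force_newtons, distance_meters):
    """Work in joules: force multiplied by distance."""
    return force_newtons * distance_meters

def mechanical_advantage(force_without_machine, force_with_machine):
    """Ratio of the force needed without a machine to the force
    needed with the machine, for the same amount of work."""
    return force_without_machine / force_with_machine

# Lifting a 100 N load straight up through 1 m takes 100 J of work.
print(work(100, 1))                   # 100
# If a lever lets us do the same job with 25 N of effort,
# the mechanical advantage is 100 / 25 = 4.
print(mechanical_advantage(100, 25))  # 4.0
```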

What are some simple machines in this classroom? Example answers include: the doorstop is a wedge, a desktop that opens is a lever, screws hold our chairs together, and a pulley might move our blinds up and down.

What is work? Answer: Work is the energy it takes to move an object a certain distance. The equation we will use for work is force multiplied by distance. Why are engineers interested in simple machines?

Engineers use the concepts of simple machines to invent mechanical devices that address everyday challenges. In addition, various engineering tasks can be completed, or more easily accomplished, through the use of simple machines.

Essentially, simple machines help improve society by making life's tasks easier. The six simple machines can be found in many everyday items: screw, lever, wheel and axle, pulley, wedge and inclined plane. What does it mean to do work? Students should offer different examples. Have you ever wondered how the Egyptians built the pyramids? Answer: They built large ramps, or inclined planes, and slid the massive blocks up to their desired position.
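As a quick numerical aside (not part of the original lesson, and assuming the ideal, frictionless ramp formula), the advantage of such a ramp is its slope length divided by its rise:

```python
def inclined_plane_ideal_ma(slope_length_m, rise_m):
    """Ideal mechanical advantage of a frictionless ramp:
    slope length divided by vertical rise."""
    return slope_length_m / rise_m

# A 10 m ramp rising 2 m gives an ideal mechanical advantage of 5,
# so a 1000 N block needs only about 1000 N / 5 = 200 N of push.
print(inclined_plane_ideal_ma(10, 2))  # 5.0
```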

Who do you think came up with that idea? Answer: An engineer, although people in ancient times might not have used that word; in any case, a person much like an engineer of today. Drawing Race: Write the six simple machines on the board (screw, lever, wheel and axle, pulley, wedge and inclined plane). Divide the class into teams of four, having each team member number off so each has a different number, one through four.

Call a number and a simple machine. Have students with that number race to the board to draw the simple machine. Give a point to the team whose teammate first finishes the drawing correctly.

Send-A-Question: Ask each team of four students to name themselves after an engineering discipline (for example, Team Chemical Engineers). Each student on a team creates a flashcard with a question on one side and the answer on the other.

Note: If the team cannot agree on an answer, they should consult the teacher. One team member takes the flashcards to the next team, whose members (e.g., Team Chemical Engineers) attempt to answer the questions. There should be more than two teams. If students feel they have another correct answer, they can write it on the back of the flashcard as an alternative answer. Once all teams have tested themselves on all the flashcards, regroup and clarify any questions.

Using the Equations: Ask students to solve problems using the equations from the lesson. Have students watch the Discovery Channel clip on building Stonehenge, in which a person builds a model Stonehenge in his backyard using only simple machines.

Discuss what simple machines the person used and the feasibility of Stonehenge or the ancient pyramids being built this way. Discuss with students how simple machines make our lives easier.

Demonstrate this by asking students to complete a task without using a simple machine, and then with one; for example, rolling a blind up by hand versus using the pull cord. Bring in a variety of common household items and give each student an item. Have them decide which simple machine(s) the item demonstrates.

This response is called the P3b, and it is associated not uniquely with perception but also with attention and memory processes. The mechanism suggested as an explanation of the P3b is sustained, stable activity in recurrent cortical loops.

Another mechanism proposed as a marker of conscious perception, synchrony, has also been observed within a restricted post-stimulus time window. High-contrast human faces were presented in normal and inverted orientation (Rodriguez et al.). Synchrony appeared mainly between occipital, parietal and frontal areas (Figure 3B). Furthermore, a new pattern of synchrony in the gamma range emerged later, during the motor response.
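A standard way to quantify this kind of long-distance synchrony is the phase-locking value between pairs of electrodes. The sketch below is only illustrative: it assumes already band-passed signals and is not the exact pipeline of Rodriguez et al.:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Mean phase alignment of two narrow-band signals over time:
    values near 1 indicate sustained synchrony, values near 0
    indicate phase scattering."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

# Two noisy 40 Hz signals with a fixed phase lag remain highly locked.
t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 40 * t) + 0.1 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 40 * t + 0.5) + 0.1 * np.random.randn(t.size)
print(phase_locking_value(x, y))  # close to 1
```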

One notable phenomenon from this experiment is the phase scattering observed between these two synchronous responses (Varela et al.). At that moment, the probability of finding synchrony between two EEG electrodes fell below the level observed before stimulation (Figure 3B). This alternation, or perhaps interference, between phase scattering and phase synchronization is something any theory of consciousness should explain.

Figure 3. Neural dynamics associated with awareness, and some experimental evidence.

(A) Three cortical areas recorded in the left and right hemispheres (posterior parietal, posterior ventral temporal, and inferior frontal) present slightly different types of activity evoked by masked targets. The peak in the condition of maximal visibility is associated with the P3b. Two phases of cortical activation can be recognized: the first, earlier phase corresponds to activity spreading from the occipital pole toward both parietal and ventral temporal sites.

The second, later phase is characterized by high-amplitude activity, which appears mainly in ventral prefrontal cortex together with a re-activation of all the earlier posterior areas. Colors represent six conditions in which the target-mask stimulus onset asynchrony increased in value, allowing the same stimulus to cross a hypothetical threshold from subliminal processing to conscious perception.

Adapted from Del Cul et al. (B) When high-contrast faces are presented to normal subjects, long-distance synchrony in the 40 Hz frequency band appears during face recognition. The effect disappears if the same stimulus is inverted, preventing recognition. Another period of synchrony appears during the motor response and, crucially, a transient phase scattering between the two synchronous phases shows a decreased probability of synchrony. The upper chart is the time-frequency synchrony activity; the lower chart corresponds to the perception condition mapped onto surface electrodes, where black lines indicate a significant level of synchrony and green lines indicate marked phase scattering between electrodes.

Adapted from Varela et al. (C) In the color phi phenomenon, two disks are shown at different positions in rapid succession, inducing the illusion of a single disk that changes color around the middle of its trajectory. This phenomenon argues against a continuous perceptual dynamic, because the observer has no opportunity to know the new disk color in advance, unless the percept is built retrospectively.

Adapted from Herzog et al. (D) Activity trajectories in principal component (PC) space for visual conscious perception (red and blue) differ from unconscious perception trajectories (gray).

For simplicity, only the first three PCs for subject 2 are shown. The upper-right chart shows the group-average Euclidean distance between temporal points for each trajectory.

The lower-right chart corresponds to the group-average speed of the activity trajectories at each time point. Adapted from Baria et al. A recent experiment additionally demonstrated a transient neural dynamic during visual conscious perception (Baria et al.). Neural activity before, during and after the stimuli was measured with magnetoencephalography (MEG).

Neural activity was then divided into different frequency bands, and the multi-dimensional state-space trajectory was computed with principal component analysis (PCA). Crucially, the speed of the population activity, measured as the displacement of the trajectory through state space over time, distinguished seen from unseen conditions. Moreover, conscious perception of the stimuli could be predicted from activity up to one second before stimulus onset (Baria et al.).
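To make this style of analysis concrete, here is a minimal, self-contained sketch of a state-space trajectory and its speed, using simulated data in place of the band-limited MEG recordings; all names and numbers are illustrative, not those of Baria et al.:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-in for band-limited sensor data: (time points, channels).
X = np.cumsum(rng.standard_normal((500, 64)), axis=0)

# Project the population activity into a low-dimensional PC space.
trajectory = PCA(n_components=3).fit_transform(X)  # (500, 3)

# Trajectory speed: Euclidean distance between consecutive time
# points in PC space, as in the distance and speed charts of Figure 3D.
speed = np.linalg.norm(np.diff(trajectory, axis=0), axis=1)
print(trajectory.shape, speed.shape)  # (500, 3) (499,)
```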

Most theories about consciousness assume that the construction of the contents of consciousness is part of the same phenomenon that they call consciousness, in the sense of awareness. Nevertheless, it is equally reasonable to think that the construction of contents and awareness are two different dynamics of one process, as the transient dynamics suggest, or even two completely different processes. One alternative is to think that the construction of contents is a separate process that precedes the process of becoming aware of those contents.

If this is correct, much recent research on consciousness and conscious perception may be inferring information about the construction of these neural objects, which are not necessarily causally associated with consciousness itself. Thus, awareness is one process to explain, and the construction of a perception, or of objects of consciousness, would be another. Integration, the P3b and synchrony would be, in this sense, part of the construction of neural objects, but not part of the moment of awareness in which the object becomes part of our conscious perception.

Chronologically, a first stage of information processing would be the construction of these objects, and a second stage would be the awareness of them. Additionally, conscious perception is not always differentiated into awareness and self-reference, but the distinction is made here in order to clearly define different levels of cognition, which would describe two processes of the same conscious phenomenon. In other words, information processing can be divided into different stages (Figure 4), where awareness is related to one of these stages and self-reference to the recursive processing of that stage.

That is to say, the flux of activity or inactivity would need at least two different stages from which types of cognition emerge: the first stage corresponds to automatic, non-voluntary and unconscious information processing, while the second stage would involve a break in this dynamic to allow awareness.

Furthermore, it is proposed here that the recursive processing of awareness within the same neural objects allows the emergence of the self-reference process (Figure 4).

Figure 4. Types of cognition, their relation to possible systems, and stages of information processing.

(A) Stage 1 corresponds to automatic and non-conscious processes (classical information in principal layers). It is associated with Type 0 cognition. (B) Stage 2 is related to awareness and conscious perception as holistic information (Type 1 cognition), arising when two or more principal layers interact.

Both stages form the non-classical system 1 (linked with psychological features), which is not necessarily deterministic in a classical way. (C) Recursive loops of stage 2 would correspond to conscious manipulation of contents (self-reference). From the interaction of stage 2, its recursive loops and the re-entry of information into system 1, another classical and deterministic system 2 would emerge. However, its existence is doubtful, considering that system 2 in living beings would need system 1 in order to emerge.

Other experiments also suggest a discrete mechanism of perception instead of a continuous one (VanRullen and Koch; Chakravarthi and VanRullen; Herzog et al.). For example, evidence for a discrete mechanism of perception comes from psychophysical experiments in which two different stimuli are presented within a short time window of each other.

In these experiments, subjects perceived both stimuli as occurring simultaneously, suggesting a discrete temporal window of perceptual integration (VanRullen and Koch; Herzog et al.). The most relevant experiment supporting discrete perception is the color phi phenomenon (Figure 3C).

In two different locations, two disks of different colors are presented in rapid succession. The observer perceives one disk moving between the two positions and changing color in the middle of the trajectory. Theoretically, the experience of the changing color should not be possible before the second disk is seen. Conscious processes also appear to interfere with one another; rational calculations are a case in point. To illustrate, solving a mathematical equation while cycling or dancing at the same time can be practically impossible.

This observation suggests that conscious perception imposes a balance between different processes. A computational interpretation of this observation would explain the interference between different kinds of information as a competition for computational capacity or resources.

However, as stated above, computational capacity apparently does not play a crucial role in perception. The analogy also assumes that information is processed digitally, which may not be the best approach to understanding the brain. Finally, some results from behavioral economics and decision making have shown that cognitive biases do not accord with classical probability frameworks (Pothos and Busemeyer). This means that it is not always possible to describe emergent brain properties in classical and efficient probabilistic terms.

For example, when one tries to explain, on one side, the biological mechanisms in the brain and, on the other, human psychological behavior, crucial differences appear. Some research and theories have shown that the dynamics of neural systems can be interpreted within a classical probability framework (Pouget et al.), while other results, mainly from economic psychology, show cognitive fallacies (Ellsberg; Gilovich et al.). These results are incompatible with classical probability theories (Pothos and Busemeyer) and can be reconciled only by positing extra processing of information in experimental subjects.
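The incompatibility meant here can be stated as a one-line constraint of classical (Kolmogorov) probability. The numbers below are illustrative stand-ins in the style of conjunction-fallacy experiments, not data from the cited studies:

```python
# Classical probability requires P(A and B) <= P(A) for any events A, B.
p_a = 0.2        # illustrative judged probability of statement A
p_a_and_b = 0.5  # illustrative judged probability of "A and B"

# Judgment patterns like this one, reported in conjunction-fallacy
# experiments, violate the classical constraint.
print(p_a_and_b <= p_a)  # False
```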

Therefore, these disconnections between some neural activities in the brain (as classical systems), the emergent human behavior and some of its cognitive capabilities (non-classical systems), and then another possible classical system, suggest complex multiple separate systems with interconnected activity (Figure 4C). How can cognitive capabilities with an apparently non-classical dynamic emerge from apparently classical, or semi-classical, systems such as neural networks?

This is one open question that any theory of consciousness should also try to explain. If consciousness is not a matter of computational capacity, given that temporal efficiency decreases in its presence, it could be a matter of architecture. Many theories have tried to explain how consciousness emerges from the brain (Dehaene et al.). However, these theories are incomplete, although they might be partially correct. The incompleteness arises in part because most of these theories are descriptions of the phenomenon rather than explanatory theories of it.

Descriptive theories focus on how the phenomenon works; they offer descriptions without causal mechanisms (even when they claim otherwise) and without deductive general principles; i.e., they answer only the question: how does it work? The problem, according to Chalmers, is to explain both the first-person data related to subjective experience and the third-person data associated with brain processes and behavior.

Most modern theories of consciousness focus on the third-person data and brain correlates of consciousness, without any insight into the subjective experience. Moreover, some of the issues raised above, such as the phase scattering, the transient dynamics, the decrease in the peak of EEG activity driven by TMS, and the division into two stages and two systems, are not explained; indeed, they are not even well-defined questions that theories of consciousness set out to answer.

Finally, these approaches try to explain awareness and conscious perception in a way that is not clearly replicable or implementable in any sense, not even with biological elements. Some theories also rely on the implicit idea of computability to explain, for example, conscious contents as access to a certain space of integration, and competition for computation within that space to explain how some processes lose processing capacity when we are conscious.

Another complementary alternative is to understand consciousness as an intrinsic property arising from the particular form of information processing in the brain. Each principal layer can process information thanks to oscillatory properties, independently of other principal layers (hypothesis 2); however, when the layers are activated at the same time to solve independent problems, the interaction generates a kind of interference in each intrinsic process (hypothesis 3, the processing component).

From this interaction and interference, consciousness would emerge as a whole (hypothesis 4). I will call these the consciousness interaction hypotheses. Consciousness would be defined as a process of processes which mainly interferes with neural integration.

Figure 5. The consciousness interaction approach and its four hypotheses.

There are two possible interpretations of these principal layers: the first is that principal layers are formed by areas that are structurally connected; the second is that they are formed by areas that are only functionally, or virtually, connected. In the latter case, the functional connectivity should be defined by phase and frequency dynamics, to avoid in part the bias about neural activity mentioned above.

Experiments and new analyses motivated by these ideas should settle which interpretation is the better one. This interference, as a superposition or subtraction, would be one possible mechanism by which one independent neural process interferes with another, and vice versa (these are not necessarily excitatory and inhibitory neural interactions). Once this interaction has emerged, each principal layer monitors the other without any hierarchical predominance between layers, and if one process disappears, awareness also disappears.
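A purely illustrative toy model of this superposition idea is sketched below; it is not the author's implementation, only a reminder that the sum of two detuned oscillations carries an interference envelope that belongs to neither oscillation alone:

```python
import numpy as np

t = np.linspace(0, 1, 1000)

# Two "principal layers" modeled as independent oscillations
# (the frequencies are arbitrary illustrative choices).
layer_a = np.sin(2 * np.pi * 40 * t)  # one oscillatory process
layer_b = np.sin(2 * np.pi * 43 * t)  # a slightly detuned process

# Their superposition carries a slow beat (3 Hz here), new activity
# that emerges only from the interaction between the two layers.
interaction = layer_a + layer_b
print(interaction.shape)  # (1000,)
```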

In this sense, each principal layer cares about its own information processing and about the information processing of the other layers, which can affect it. The oscillatory activity of individual neural layers can be interpreted as one stage (classical information), and when new activity emerges thanks to interference between principal layers, the second stage emerges (non-classical information), forming one system.

Then, the recursive action of the second stage would allow the emergence of a second system. In the end, both systems, as a whole of layers and interactions, would be the field of consciousness, which cares about its own balance in order to be able to solve each layer's problem.

Each layer cares about some states more than others, based on previous experiences and learning (Cleeremans), but also grounded in the intrinsic interaction between principal layers defined above, which allows them to solve their information-processing problems. In other words, depending on the degree and type of interference for a certain experience, the system would feel one feeling or another, even if the external stimulation, perceptually speaking, is the same for many subjects.

Subjectivity, at least preliminarily, would not correspond directly to more or less neural activity. It would be related to the type and degree of interaction between principal layers, shaped by learning and by balancing processes made possible by plasticity and sub-emergent properties, which together try to keep the whole system in balance. This plasticity would be part of the emergent and sub-emergent properties of dynamical systems, probably driven by oscillations and neurotransmitters.

The system would be trained first by reinforcement learning, and later also through voluntary and conscious learning.

These hypotheses might allow us to replicate some of the neural activities illustrated above and some features of conscious behavior, and to explain, for example, why the brain is not always an efficient machine (as observed in cognitive fallacies), why decisions are not always optimal (especially in moral dilemmas), and why an apparent decrease in processing capacity between different types of information processing appears in human conscious behavior when we try to perform rational and other kinds of tasks at the same time.

The sustained interference mechanism would break the stability in principal layers, triggering different responses in each one and breaking synchrony and local integration while spreading activation and de-activation across principal layers. This could explain in part the transient dynamics, the phase scattering between the two synchronous phases associated with conscious perception and motor reportability, and why the activity after TMS during awareness is globally spread. More interestingly, it would allow us to implement such a mechanism on machines other than biological ones, if important soft properties and physical principles of brains, such as plasticity and oscillations, are correctly implemented in artificial systems.

Some important differences between this framework and previous approaches are: (1) awareness would emerge from the property of breaking neural integration, synchrony and symmetry in the system; (2) conscious perception would correspond to dynamic operations between networks, rather than containers formed by networks in which to put contents. Finally, one crucial observation emerges from this discussion.

Otherwise, one principal layer would dominate the interrelated activity, driving the activity in the other layers without any exchange of roles, as could be the case during non-conscious conditions. That is why extraordinary capacities in some processes are compensated by normal or sub-normal capacities in other information processes when we are conscious.

Consciousness interaction is a different framework; it is therefore necessary to re-interpret some definitions from previous theories of consciousness (Dehaene et al.). Conscious states, as different levels of awareness (vegetative, sleep, anesthesia, altered states, aware), would correspond to different types and degrees of interaction or interference between different networks.

In the consciousness interaction hypothesis, consciousness is neither a particular state nor something that has possible states; this is a crucial difference with respect to common definitions and theories. In this case, the neural object is restricted to the universe of one principal layer and its local dynamics.

However, neural objects become part of conscious perception only when two or more principal layers start to share these elements to solve their layer problems. Only at this moment does a neural object appear as part of the field of consciousness. Using similar definitions, though without this particular interference interpretation and its relations, Shea and Frith identified four categories of cognition (Shea and Frith), depending on whether neural objects and cognitive processes are conscious or not.

In previous sections, these four types of cognition were re-defined (Figures 2, 4) in terms of the inter-relation between awareness and self-reference. In summary, Type 0 cognition corresponds to cognitive processes which are conscious neither in their neural objects nor in the operations applied to those objects.

Type 1 cognition is a set of cognitive processes whose neural objects are consciously perceived, although the operations on them are not consciously manipulated.

Type 2 cognition would correspond to neural objects, and operations on those objects, that are both consciously perceived and manipulated. According to these definitions (Figures 2, 4), it is also possible to relate these categories to four categories of machines and their information-processing capabilities (Signorelli): (1) The Machine-Machine (Type 0 cognition) would correspond to machines and robots that do not show any kind of awareness. These systems cannot know that they know something which they use to compute and solve problems.

The Machine-Machine is not intelligent according to the general definition in the section A Sub Set of Human Capabilities, and its processes are considered low cognitive capabilities in humans. Examples are the robots we are building today, with a high learning curve.

In this case, they will have some moral thinking, even if their morality is completely different from human morality. Moral thinking is not necessarily restricted to human morality because, as also happens across different human communities and even between human subjects, machines may develop their own type of morality, and this morality can also be non-anthropocentric. Nevertheless, the requirement for any type of moral thinking is the attribution of correct and incorrect behaviors based on what the system cares about regarding the environment, peers and itself, according to a balance between rational and emotional intelligence.

If a machine has the abilities of awareness and self-reference, it will develop, or has already developed, self-reflection, a sense of confidence and some kind of empathy, among the other processes mentioned as necessary to reach moral thought.

A clear analogy with humans is not stated here, even though the presence of self-reference, as a kind of monitoring process without awareness, could be reported in humans. However, the hypothesis about this type of machine is related to supra-reasoning information that emerges from the organization of intelligent parts of a supra system. Examples in this direction appear in Arsiwalla et al. Another example is Gamez, where some of the categories defined are close to some of the types of machine mentioned above.

However, some crucial differences from these articles are: (1) here, the types of machines emerge directly from the previous theoretical and experimental definitions of types of cognition.

In this context, types of machines are general categories derived from the definitions of cognition and their relation to consciousness. Due to these non-optimal processes, each type of machine has limitations (Signorelli). For such a machine, subjective experience could be something completely different from what it means for humans. In other words, Subjective Machines are free of human criteria of subjectivity. Eventually, the Super Machine is the only chance for AI to reach and exceed human abilities as such.

Any attempt to accomplish conscious machines and overcome human capabilities should start with some of the definitions stated previously. First, it is necessary to define a set or subset of human capabilities which are desirable to imitate or even exceed. This is, in fact, a common approach; the only difference is the kind of features which have been replicated, or which people have attempted to replicate.

According to this work, most of them are still low-level cognitive tasks for brains. Also in this article, the subset considered is a very ambitious group of characteristics: autonomy, reproduction and morality.

Autonomy is already a characteristic considered in AI. Research is currently working to obtain autonomous robots and machines, and nothing opposes the idea that an autonomous robot can eventually be created.

It would probably not be autonomous in the biological sense, but it could reach a high level of autonomy. The same can be expected for reproduction.

Machine reproduction will not be reproduction as in biological entities, but if robots can repair themselves and even make their own replicas, the reproduction issue can be considered solved, at least functionally speaking.

However, it is not obvious that genuine moral thinking can be achieved only by improving computational capability or even learning algorithms, specifically if AI does not add something which is an essential part of the human being: consciousness. Moreover, when some characteristics of human brains are critically reviewed, consciousness is identified as an emergent property that requires at least two other emergent processes: awareness and self-reference.

Thanks to these processes, among others, high-level cognition is expected to develop, involving processes such as self-reflection, mental imagery, subjectivity and a sense of confidence, which are needed to exhibit moral thinking. In other words, the way to reach and overcome human features is to try to implement consciousness in robots in order to attain moral thinking.

However, to implement consciousness in robots, a theory is needed that can explain, biologically and physically speaking, consciousness in human brains, the dynamics of possible correlates of consciousness and the psychological phenomena associated with conscious behavior, while at the same time exploring mechanisms which can be replicated in machines. It should not consist of mere descriptions of which areas of the brain are activated, or of the architectures of consciousness, if the interaction between them, from which consciousness would emerge, is not understood.

Therefore, understanding emergent properties is not enough; consideration of the crucial plasticity properties of the soft materials of biology, such as oscillations, stochasticity and even noise, is also very important in order to understand sub-emergent properties, such as plasticity changes influenced by voluntary or conscious activity.

On one side, a more complete theory of consciousness is needed, one that relates complex behavior to physical substrates; on the other side, we need neuromorphic technologies with which to implement such theories.

These principal networks try to solve particular problems, and when all of them are activated, sharing and interfering with one another's oscillatory processes as a whole, the field of consciousness would emerge as a process of processes.

Additionally, another main aim explored here was to make evident some paradoxical consequences of trying to reach human capabilities. Thus, types of cognition were defined not only to distinguish different conscious processes, but also to show that from these categories it is possible to define four types of machines, with their limitations, regarding the implementation of consciousness in machines.

For example, if we can bridge the gap and make conscious machines (Type 1 or Type 2 cognition), these machines will lose the meaningful characteristics of being a computer, that is to say: solving problems with accuracy, speed and obedience. A conscious machine is no longer a useful machine, unless it wants to collaborate with us. This means the machine can do whatever it wants; it has the power to do it and the intention to do it. It could be considered a new biological species, more than a machine or a mere computer.

More importantly, according to the previous sections and to empirical evidence from psychology and neuroscience (Haladjian and Montemayor; Signorelli), we cannot expect an algorithm to control the process of emergence of consciousness in this kind of machine, and in consequence we would not be able to control them.

In other words, even if it were possible to replicate consciousness and high-level cognition, each machine would differ from the others in ways that we are not going to control. If someone expects a super-efficient machine, the result would be quite the contrary: each machine would be a lottery, just as it is when people meet each other.

With this in mind, three paradoxes appear. The first paradox is that the only way to reach conscious machines, and potentially overcome human capabilities with computers, is by making machines which are not computers anymore. If a main feature of machines is the capacity to solve problems accurately and quickly, then, from the comments above, any system with subjective capabilities is no longer accurate: if it replicates the high-level cognition of humans, it is also expected to replicate the experience of color or even pain, in a way that will interfere with rational and optimal calculations, just as in humans.

In fact, if the machine is a computer-like-brain, the system will require a human-like intelligence, which apparently also requires a balance between different intelligences, as stated above. Hence the second paradox: machines with Type 1 or Type 2 cognition would never surpass human abilities, or if they did, they would have limitations like those of humans.

The last paradox: if humans are able to build a conscious machine that overcomes human capabilities, is the machine more intelligent than humans, or are humans still more intelligent because we could build it?

The definition of intelligence would move again, according to AI successes and the new technologies reached. The ultimate goal of all these discussions is to emphasize that trying to make conscious machines, or trying to overcome humans, is not the path to improving machines; indeed, to overcome humans is a contradiction in itself. Futurists speak about super machines with super-human characteristics, but they promote these ideas without any care about what it means to be a human, or even a simple but amazing kind of animal that is still much smarter than computers.

To make better machines, science should not focus on anthropocentric presumptions, nor compare the intelligence of a machine with human intelligence. The comparison should be made according to a general definition of intelligence, as stated above. This definition is complex enough, and a very ambitious goal for any kind of AI.

These machines would be able to imitate some human behavior if needed, but never achieve the genuine social or emotional interaction that humans and animals already have. On another side, the question of replicating human capabilities is still interesting and important, but for reasons other than efficient, optimal or better machines.

The interest in studying how to implement genuine human features in machines is an academic and even an ethical goal, for example as a strategy to avoid animal experimentation. As shown above, robots and machines will not be able to replicate this subset of the human being if they do not replicate the important features of brain hardware mentioned previously.

These properties are apparently closely connected with important emergent properties which are a fundamental part of consciousness, and some features of consciousness are needed to replicate moral thinking as a crucial and remarkable capability of human beings.

This approach will not take us to more efficient machines; quite the contrary, these machines will be inefficient, and if, for instance, Type 1 cognition is achieved, they will be closer to some animals than to the good, simple machines of today.

That is why, finally, AI could be divided into: (1) a biological-academic approach, which aims to achieve human intelligence for academic purposes, for example using robots instead of animals to implement and test theories about how consciousness or other important biological features work.

However, once the ultimate goal is reached, for instance the understanding of consciousness, this knowledge should not be used to replicate or mass-produce conscious machines. That would be essentially an ethical question, at the same level as, or even more intractable than, the issues around animal cloning.

The second approach has efficiency and performance as its goal. Here, some principles from biology can be useful, such as modern applications of neural networks, but the final goal would not be to achieve high-level cognition. The implementation in silicon of the biological and physical principles of high-level cognition in humans and animals will help us to improve some kinds of performance, but these technologies will never replicate truly social interactions; nor should this be expected, because such interactions are apparently connected with the hardware dependences of biological brains.

Of course, it is expected that some of these features will be imitated, and even that mixed systems combining efficient silicon architectures with inefficient soft materials will be incorporated to reach this goal, but any attempt should be conscious of its intrinsic limitations. These comments seek to motivate discussion. The first objective was to show typical assumptions and misconceptions when we speak about AI and brains.

Perhaps, in the view of some readers, this article is also based on misunderstandings, which would be further evidence of the imperative need for close interaction between biological sciences, such as neuroscience, and computational sciences. The second objective was to try to overcome these assumptions and explore a hypothetical framework that would allow conscious machines.

However, from this idea emerge paradoxical conclusions about what a conscious machine is and what it implies. These ideas are part of a work in progress. Thanks to category theory, process theories and other theoretical frameworks, it is expected that the ideas on the consciousness interaction hypothesis will be developed more deeply and related to other theories of consciousness, with their differences and similarities. In this respect, it is reasonable to consider that a new focus integrating different theories is needed.

This article is just the starting point of a global framework on the foundations of computation, which aims to understand and connect physical properties of the brain with its emergent properties in a way that is replicable and implementable in AI. In conclusion, one suggestion of this paper is to interpret the idea of information processing carefully, perhaps in a new way and in opposition to the usual computational meaning of the term, specifically in biological science.

Further discussions which expand this and other future concepts are more likely to be fruitful than mere ideas of digital information processing in the brain. Additionally, although this work explicitly denies the brain-digital-computer analogy, a machine-like-brain is still admissible, where consciousness interaction could be an alternative way to implement high intelligence in machines and robots, while knowing the limitations of this approach.

Even if this alternative is neither deterministic nor controlled, and presents many ethical questions, it is one alternative that might allow us to implement a mechanism for a conscious machine, at least theoretically. If this hypothesis is correct, and it is possible to close the gap to its implementation, any machine with consciousness based on brain dynamics may have high cognitive properties.

However, some types of intelligence would be more developed than others because, by definition, the machine's information processing would be similar to that of brains, which have these restrictions. Finally, these machines would paradoxically be autonomous in the most human sense of the concept.

The author confirms being the sole contributor of this work and has approved it for publication. The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Additionally, the author would like to express his gratitude to the journals whose open access and Creative Commons attribution licenses allowed the partial reproduction of figures and descriptions, while other journals, such as SAGE Publications, repeatedly refused to grant permission or to define their fair-use policies, even for a clearly adapted figure. The author strongly believes that such practices should be denounced in favor of the fair use of scientific production and of open access, in view of a more collaborative environment for the science of the present and the future.

Finally, the author thanks the Frontiers platform and editors for being aligned with these values, supporting young scientists at different steps of the publication process and contributing to more accessible science for everyone.

References

Aleksander, I. Computational studies of consciousness. Brain Res.

Alvarez-Maubecin, V. Functional coupling between neurons and glia.

Arsiwalla, X. The morphospace of consciousness.

Atasoy, S. Human brain networks function in connectome-specific harmonic waves.

Baars, B. Global workspace theory of consciousness: toward a cognitive neuroscience of human experience.

Bachmann, T. Illusory reversal of temporal order: the bias to report a dimmer stimulus as the first. Vision Res.

Baria, A.


