Inductive Reasoning

The two major traditions of research

Kirsty Williamson, ... Sue McKemmish, in Research Methods for Students, Academics and Professionals (Second Edition), 2002

Inductive reasoning

Inductive reasoning begins with particular instances and concludes with general statements or principles. An example is: Tom Carter, Jim Brown and Pam Eliot, who are all aged sixty or over, are not users of the Internet. If there were many other instances which were identical and only a few that were not, it could be concluded: People aged sixty or over are unlikely to be users of the Internet.

Inductive reasoning is associated with the hypothesis-generating approach to research. Field work and observations occur initially, and hypotheses are generated from the analysis of the data collected. Thus, if data were collected which showed that a large majority of people aged sixty or over were not using the Internet (in comparison with those aged under sixty), it could be hypothesised that: Older people (aged sixty or over) are less likely than younger people (aged under sixty) to be users of the Internet.
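
To make the hypothesis-generating step concrete, here is a minimal sketch in Python; the records, field names, and figures are invented for illustration and are not data from the chapter.

```python
# Minimal sketch of generating a hypothesis from observations (invented data).
records = [
    {"name": "Tom Carter", "age": 67, "uses_internet": False},
    {"name": "Jim Brown",  "age": 72, "uses_internet": False},
    {"name": "Pam Eliot",  "age": 64, "uses_internet": False},
    {"name": "Ann Lee",    "age": 35, "uses_internet": True},
    {"name": "Raj Patel",  "age": 52, "uses_internet": True},
    {"name": "Joy Chen",   "age": 61, "uses_internet": True},
]

def usage_rate(group):
    """Proportion of a group who use the Internet."""
    return sum(r["uses_internet"] for r in group) / len(group)

older   = [r for r in records if r["age"] >= 60]
younger = [r for r in records if r["age"] < 60]

print(f"Aged sixty or over: {usage_rate(older):.0%} use the Internet")
print(f"Aged under sixty:   {usage_rate(younger):.0%} use the Internet")
# If the older group's rate were clearly lower across a large sample, the
# induced hypothesis would be: older people (aged sixty or over) are less
# likely than younger people to be users of the Internet.
```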

URL: https://www.sciencedirect.com/science/article/pii/B9781876938420500095

Problem Solving: Deduction, Induction, and Analogical Reasoning

F. Klix, in International Encyclopedia of the Social & Behavioral Sciences, 2001

3 Drawing Conclusions from Insufficient Information: Induction

Inductive reasoning occurs when a conclusion does not follow necessarily from the available information. As such, the truth of the conclusion cannot be guaranteed. Rather, a particular outcome is inferred from data about an observed sample B′, {B′}⊂{B}, where {B} means the entire population. That is, on the basis of observations about the sample B′, a prediction is made about the entire population B. For example: Smoking increases the risk of cancer. Mr X smokes. How probable is it that Mr X will develop cancer? As this example illustrates, inferential statistics (t-tests, analyses of variance, and other derived forms) are based on this kind of inductive reasoning.
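
As a minimal illustration of such an inference (the counts below are invented, not from the article), one can estimate a population rate from an observed sample B′ and attach a rough confidence interval to the estimate:

```python
import math

# Invented counts for illustration: of n observed smokers (the sample B'),
# k developed cancer.  We project an estimate onto the whole population B.
n, k = 400, 68
p_hat = k / n                                   # sample estimate of P(cancer | smoker)
se = math.sqrt(p_hat * (1 - p_hat) / n)         # standard error of the proportion
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se   # ~95% normal-approximation interval

print(f"Estimated risk for a smoker such as Mr X: {p_hat:.1%}")
print(f"Approximate 95% interval: [{lo:.1%}, {hi:.1%}]")
# The conclusion remains inductive: a different sample B' would give a
# different estimate, so its truth is probable rather than guaranteed.
```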

The problems associated with the use of induction in scientific reasoning have been addressed from both the philosophical and the mathematical perspective. From the philosophical point of view, the question arises as to what this process—the drawing of universal conclusions about a given phenomenon on the basis of the probability in a sample—actually entails. Logicians, as we now discuss, have responded in a variety of ways (v. Mises, Reichenbach, Keynes, Jeffrey, and Carnap).

Carnap's influential approach to this issue represents a form of compromise. Carnap calculates probability on the basis of accumulated experiential data, where confirmations of one hypothesis as opposed to another are entered into the equation as weights, basically resulting in a kind of weighted mean.
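
A common textbook way of writing this kind of weighted compromise is Carnap's λ-continuum of inductive methods; the formulation below is a standard presentation and not a quotation from the article.

```latex
% Carnap's lambda-continuum (standard textbook form): after n observations,
% s of which confirm the hypothesis (one of k possible outcome types),
\[ c(h \mid e) = \frac{s + \lambda/k}{n + \lambda}, \]
% which is a weighted mean of the observed relative frequency s/n and the
% a priori "logical" value 1/k:
\[ \frac{s + \lambda/k}{n + \lambda}
     = \frac{n}{n + \lambda}\cdot\frac{s}{n}
     + \frac{\lambda}{n + \lambda}\cdot\frac{1}{k}. \]
```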

URL: https://www.sciencedirect.com/science/article/pii/B0080430767005441

Expert Systems

Jay E. Aronson, in Encyclopedia of Information Systems, 2003

V.B. Reasoning with Logic

For performing either deductive or inductive reasoning, several basic reasoning procedures allow the manipulation of the logical expressions to create new expressions. The most important procedure is called modus ponens. In this procedure, given a rule "if A, then B," and a fact that A is true, then it is valid to conclude that B is also true. In logic terminology, we express this as [A AND (A → B)] → B.

A and (A → B) are propositions in a knowledge base. Given this expression, we can replace both propositions with proposition B; i.e., we use modus ponens to draw the conclusion that B is true if the first two expressions are true.
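
A minimal sketch of how an inference engine can apply modus ponens repeatedly over a small rule base (the rules and facts here are invented placeholders, not from the article):

```python
# Forward chaining with modus ponens: from A and (A -> B), conclude B,
# and keep going until no new propositions can be derived.
rules = [("A", "B"), ("B", "C")]   # each pair encodes "if antecedent, then consequent"
facts = {"A"}                      # propositions asserted to be true

derived_new_fact = True
while derived_new_fact:
    derived_new_fact = False
    for antecedent, consequent in rules:
        if antecedent in facts and consequent not in facts:
            facts.add(consequent)          # modus ponens step
            derived_new_fact = True

print(facts)   # {'A', 'B', 'C'}
```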

A different situation is inferring that A is false when B is known to be false. This is called modus tollens. Resolution (which combines substitution, modus ponens, and other logical syllogisms) is another approach.

URL: https://www.sciencedirect.com/science/article/pii/B0122272404000678

Inductive Logic

Frederick Eberhardt, Clark Glymour, in Handbook of the History of Logic, 2011

1 Introduction

Any attempt to characterize Reichenbach's approach to inductive reasoning must take into account some of the core influences that set his work apart from more traditional or standard approaches to inductive reasoning. In the case of Reichenbach, these influences are particularly important, as otherwise Reichenbach's views may be confused with others that are closely related but different in important ways. The particular influences on Reichenbach also shift the strengths and weaknesses of his views to areas different from the strengths and weaknesses of other approaches, and from the point of view of some other approaches Reichenbach's views would seem quite unintelligible if not for the particular perspective he has.

Reichenbach's account of inductive reasoning is fiercely empirical. More than perhaps any other account it takes its lessons from the empirical sciences. In Reichenbach's view, an inductive logic cannot be built up entirely from logical principles independent of experience, but must develop out of the reasoning practiced in, and useful to, the natural sciences. This might already seem like turning the whole project of an inductive logic on its head: We want an inductive inference system built on some solid principles (whatever they may be) to guide our scientific methodology. How could an inference procedure that draws on the methodologies of science supply in any way a normative foundation for an epistemology in the sciences?

For Reichenbach there are two reasons for this "inverse" approach. We will briefly sketch them here, but return with more detail later in the text: First, Reichenbach was deeply influenced by Werner Heisenberg's results, including the uncertainty principle, that called into question whether there is a fact of the matter about – and consequently whether there can be certain knowledge of – the truth of propositions specifying a particular location and velocity for an object in space and time. If there necessarily always remains residual uncertainty for such propositions (which prior to Heisenberg seemed completely innocuous or at worst subject to epistemic limitations), then – according to Reichenbach – this is reason for more general caution about the goals of induction. Maybe the conclusions any inductive logic can aim for when applied to the sciences are significantly limited. Requiring soundness of an inference – preservation of truth with certainty – may not only be unattainable, but impossible, if truth is not a viable concept for empirical propositions. Once uncertainty is built into the inference, deductive standards are inappropriate for inductive inference not only because the inference is ampliative (which is the standard view), but also because binary truth values no longer apply.

Second, the evidence supporting Albert Einstein's theory of relativity, and its impact on the understanding of the nature of space and time, revealed to Reichenbach the power of empirical evidence to overthrow truths that were taken to be (even by Reichenbach himself in his early years) necessarily true. The fact that Euclidean space had been discovered not only to be not necessary, but quite possibly not true — despite Immanuel Kant's transcendental proofs for its synthetic a priori status — called the status of a priori truths into question more generally. The foundations of any inference system could no longer be taken to be a priori, but had to be established independently as true of the world. Reichenbach refers to this confirmation of the correspondence between formal structures and the real world as "coordination" (although "alignment" might have been the more intuitive description of what he meant).

Einstein's and Heisenberg's findings had their greatest impact on Reichenbach's views on causality. Influenced by the Kantian tradition, Reichenbach took causal knowledge to be so profound that in his doctoral thesis in 1915 he regarded it as synthetic a priori knowledge [Reichenbach, 1915]. But with the collapse (in Reichenbach's view) of a synthetic a priori view of space, due to Einstein, Reichenbach also abandoned the synthetic a priori foundation of causality. Consequently, Reichenbach believed that causal knowledge had to be established empirically, and so an inductive procedure was needed to give an account of how causal knowledge is acquired and taken for granted to such an extent that it is mistaken for a priori knowledge. But empirical knowledge, in Reichenbach's view, is fraught with uncertainty (due to e.g. measurement error, illusions etc.), and so this uncertainty had to be taken into account in an inductive logic that formalizes inferences from singular (uncertain) empirical propositions to general (and reasonably certain) empirical claims. Heisenberg's results implied further problems for any general account of causal knowledge: While the results indicated that the uncertainty found in the micro-processes of quantum physics is there to stay, macro-physics clearly uses stable causal relations. The question was how this gap could be bridged. It is therefore unsurprising that throughout Reichenbach's life, causal knowledge formed the paradigm example for considerations with regard to inductive reasoning, and that probability was placed at its foundation.

The crumbling support for such central notions as space, time and causality, also led Reichenbach to change his view on the foundations of deductive inference. Though he does not discuss the foundations of logic and mathematics in any detail, there are several points in Reichenbach's work in which he indicates a switch away from an a prioristic view. The a prioristic view takes logic to represent necessary truths of the world, truths that are in some sense ontologic. Reichenbach rejects this view by saying that there is no truth "inherent in things", that necessity is a result of syntactic rules in a language and that reality need not conform to the syntactic rules of a language [Reichenbach, 1948]. Instead, Reichenbach endorsed a formalist view of logic in the tradition of David Hilbert. Inference systems should be represented axiomatically. Theorems of the inference system are the conclusions of valid deductions from the axioms. Whether the theorems are true of the world, depends on how well the axioms can be "coordinated" with the real world. This coordination is an empirical process. Thus, the underlying view holds that the axioms of deductive logic can only be regarded as true (of the world) and the inference principles truth preserving, if the coordination is successful – and that means in Reichenbach's case, empirically successful, or useful. In the light of quantum theory, Reichenbach rejected classical logic altogether [Reichenbach, 1944].

Instead of an a priori foundation of inductive logic, Reichenbach's approach to induction is axiomatic. His approach, exemplified schematically for the case of causal induction, works something like this: We have causal knowledge. In many cases we do not doubt the existence of a causal relation. In order to give an account of such knowledge we must look at how this knowledge is acquired, and so we have to look closely at the methodologies used in the natural sciences. According to Reichenbach, unless we deny the significance of the inductive gap David Hume dug (in the hole created by Plato and Sextus Empiricus), the only way we will be able to make any progress towards an inductive logic is to look at those areas of empirical knowledge where we feel reasonably confident that we have made some progress in bridging that gap, and then try to make explicit (in the form of axioms) the underlying assumptions and the justifications (or stories) we tell ourselves about why such assumptions are reliable.

There are, of course, several other influences that left their marks on Reichenbach's views. Perhaps most important (in this second tier) are the positivists. Their influence is particularly tricky, since Reichenbach was closely associated with many members of the Vienna Circle, but his views are in many important ways distinctly "negativist": Reichenbach denies that there can be any certainty even about primitive perception, but he does believe — contrary to Karl Popper — that once uncertainty is taken into account, we can make progress towards a positive probability for a scientific hypothesis. We return to the debate with Popper below.

Second, it is probably fair to say that Richard von Mises, Reichenbach's colleague during his time in Berlin and Istanbul, was the largest influence with regard to the concept of probability. Since probabilistic inferences play such a crucial role in scientific induction, Reichenbach attempted to develop a non-circular foundation and a precise account of the meaning and assertability conditions of probability claims. Reichenbach's account of probability in terms of the limits of relative frequency, and his inductive rule, the so-called "straight rule", for the assertability of probability claims — both to be discussed in detail below — are perhaps his best known and most controversial legacy with regard to inductive inferences.
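
Stated in the usual notation (a standard formulation, elaborated later in the chapter): the probability of an attribute is identified with its limiting relative frequency, and the straight rule posits the frequency observed so far as the best estimate of that limit.

```latex
% Frequency interpretation: if attribute A has occurred m_n times in the first
% n observed cases, its probability is the limiting relative frequency
\[ P(A) = \lim_{n \to \infty} \frac{m_n}{n}. \]
% Straight rule: after n observations, posit the frequency observed so far,
\[ P(A) \approx \frac{m_n}{n}, \]
% and revise the posit as further observations arrive.
```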

As with any attempt to describe a framework developed over a lifetime, we would inevitably run into some difficulty of piecing together what exactly Reichenbach meant even if he had at all times written with crystal clarity and piercing precision — which he did not. On certain aspects, Reichenbach changed or revised his view, and it did not always become more intelligible. However, areas of change in Reichenbach's account are also of particular interest, since they give us a glimpse into those aspects that Reichenbach presumably deemed the most difficult to pin down. They will give us an idea of which features he considered particularly important, and which ones still needed work. Since Reichenbach in many senses sat between the thrones of high church philosophy of his time and (therefore?) anticipated many later and current ideas, his views are of particular interest.

URL: https://www.sciencedirect.com/science/article/pii/B9780444529367500100

Inductive Logic

J.R. Milton, in Handbook of the History of Logic, 2011

Summary

It is striking that no sustained discussion of inductive reasoning has survived from the ancient world. Of course the vast majority of Greek and Roman philosophical works have perished, and are accessible only from fragments quoted by other writers, or often not at all. If more had been preserved, then the patchy and episodic account given above could unquestionably have been made considerably longer and more detailed. There is nevertheless no sign that a major and systematic account of inductive reasoning has been lost: among the many lists of works given by Diogenes Laertius there is no sign of any treatise with the title Peri Epagoges or something similar.

It would appear, therefore, that induction was not something that any of the ancients regarded as one of the central problems of philosophy. Several reasons for this state of affairs can be discerned.

One is that the general drift of philosophy, especially in late antiquity, was also away from the kind of systematic empirical enquiry practised by Aristotle and his immediate successors. Plotinus, for example, used the word epagoge only twice, once for an argument to show that there is nothing contrary to substance, and once for an argument that whatever is destroyed is composite (Enneads, I. 8. 6; II. 4. 6). The kind of understanding gained through empirical generalisation was too meagre and unimportant for the modes of argument leading to it to merit sustained analysis.

Another reason is that interest in the systematic investigation of the natural world was intermittent and localised. It did not help that the scientific discipline in which the greatest advances were made had been mathematical astronomy, and this was not a field where the problems posed by inductive reasoning would have surfaced, still less become pressing. Constructing a model for the motions of a planet was a highly complex business, but it did not involve generalisation from data in the form 'this A is B' and 'this A is B' to 'every A is B'. Ptolemy, indeed, seems to have felt so little urge to generalise that his models for the individual planets are all given separately, and (in the Almagest at least) not integrated into a single coherent system.

Finally, the centrality of rhetoric in ancient education meant that when inductive arguments were discussed, they tended to be evaluated for their persuasiveness, not for their logical merits. Inductive arguments became almost lost in a mass of miscellaneous un-formalised arguments that were not investigated for their validity, or any inductive analogue thereof, but for their plausibility in the context of a speech.

URL: https://www.sciencedirect.com/science/article/pii/B978044452936750001X

Artificial Intelligence: Uncertainty

K.B. Laskey, T.S. Levitt, in International Encyclopedia of the Social & Behavioral Sciences, 2001

6 Conclusion

Since 1980 there have been major advances in automating inductive reasoning in AI systems. Modular, tractable, and cognitively plausible representations for uncertainty in AI systems have been developed and implemented in widely available software tools. Uncertainty management is widely recognized as a science, and a corresponding engineering technology is required for effective knowledge representation and reasoning architectures in all AI systems. The capabilities achieved so far point the way to a much broader set of issues and objectives, many of which concern the possibilities of automated acquisition of inductive knowledge relationships. Graphical network representations combined with data mining, automated learning, and other recent developments are a natural computational environment in which to develop scientific hypotheses and model experimental and other evidential reasoning in support of complex hypotheses. Uncertainty management-driven advances in automated inductive reasoning raise the hope that AI systems might soon accelerate the rate of scientific discovery.
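
As a minimal illustration of the kind of uncertainty computation such graphical representations support (the variable names and probabilities are invented for this sketch), the following evaluates a two-node network Hypothesis → Evidence and updates belief in the hypothesis from an observation:

```python
# Tiny two-node "Bayesian network": Hypothesis -> Evidence (invented numbers).
p_h = 0.01                              # prior P(H = true)
p_e_given_h = {True: 0.90, False: 0.05} # P(E = true | H)

def posterior_h(evidence_observed: bool) -> float:
    """P(H = true | E) by enumerating the two states of H."""
    like_h     = p_e_given_h[True]  if evidence_observed else 1 - p_e_given_h[True]
    like_not_h = p_e_given_h[False] if evidence_observed else 1 - p_e_given_h[False]
    joint_h, joint_not_h = p_h * like_h, (1 - p_h) * like_not_h
    return joint_h / (joint_h + joint_not_h)

print(f"P(H | E observed)     = {posterior_h(True):.3f}")
print(f"P(H | E not observed) = {posterior_h(False):.4f}")
```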

The web site of the Association for Uncertainty in Artificial Intelligence is a good source of up-to-date information on topics related to uncertainty in artificial intelligence. This site contains links to the online Proceedings of the Conference on Uncertainty in Artificial Intelligence, which contains abstracts for all papers since the conference started in 1985, as well as soft-copy versions of many papers. The web site also contains links to a number of tutorials on subjects related to uncertainty in AI. Good introductions to Bayesian networks can be found in the seminal work by Pearl (1988), the more recent books by Jensen (1996) and Cowell (1999), and a good nontechnical introduction by Charniak (1991). A review of recent work in decision theoretic methods in artificial intelligence is provided by Haddawy (1999). A recent overview of fuzzy set methodology is presented in Dubois and Prade (2000). Smets and Gabbay (1998) describe different approaches to representing uncertainty. The anthology by Pearl and Shafer (1990) compiles a number of seminal articles in the field of uncertainty in artificial intelligence.

URL: https://www.sciencedirect.com/science/article/pii/B0080430767003958

Inductive Logic

Nick Chater, ... Evan Heit, in Handbook of the History of Logic, 2011

Theoretical Accounts of Inductive Reasoning

So far, we have described several empirical regularities in inductive reasoning, including similarity effects, typicality effects, diversity effects, and effects based on knowledge about the property being inferred. Together, these results pose a challenge for psychological accounts of induction. Although there have been a number of proposals (see, in particular, [Osherson et al., 1990; Sloman, 1993]), we will focus on a model of inductive reasoning by Heit [1998] (see also [Tenenbaum and Griffiths, 2001; Kemp and Tenenbaum, 2009]) that has been applied to all of these results. This is a model derived from Bayesian statistics, and we will show that people's inductive reasoning behaviour does indeed seem to follow the dictates of inductive logic.

According to the Bayesian model, evaluating an inductive argument is conceived of as learning about a property, in particular learning for which categories the property is true or false. For example, in argument (1) above, the goal would be to learn which animals have sesamoid bones and which animals do not. The model assumes that for a novel property such as in this example, people would rely on prior knowledge about familiar properties, to derive a set of hypotheses about what the novel property may be like. For example, people know some facts that are true of all mammals (including cows), but they also know some facts that are true of cows but not some other mammals. The question is which of these known kinds of properties does the novel property, "has sesamoid bones," resemble most. Is it an all-mammal property, or a cow-only property? What is crucial is that people assume that novel properties follow the same distribution as known properties. Because many known properties of cows are also true of other mammals, argument (1) regarding a novel property seems fairly strong.
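
The core computation can be sketched schematically as follows; this is an illustrative toy version of the Bayesian approach, not Heit's implementation, and the hypothesis space and prior weights are invented.

```python
# Toy Bayesian property induction: hypotheses are possible extensions of the
# novel property; prior weights stand in for how common each kind of known
# property is (invented values).
hypotheses = {
    frozenset({"cow", "dog", "bear", "rabbit"}): 0.5,  # "true of all mammals"
    frozenset({"cow"}):                          0.3,  # "true of cows only"
    frozenset({"cow", "dog"}):                   0.2,  # some other known pattern
}

def p_conclusion(premise_categories, conclusion_category):
    """P(conclusion category has the property | premise categories have it)."""
    consistent = {h: w for h, w in hypotheses.items()
                  if all(c in h for c in premise_categories)}
    total = sum(consistent.values())
    return sum(w for h, w in consistent.items() if conclusion_category in h) / total

# Premise: cows have sesamoid bones.  How strongly does the property generalise?
print(p_conclusion({"cow"}, "dog"))    # 0.7 -- most consistent prior mass covers all mammals
print(p_conclusion({"cow"}, "bear"))   # 0.5
```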

The Bayesian model addresses many of the key results in inductive reasoning. For example, the model can predict similarity effects as in [Rips, 1975]. Given that rabbits have some kind of disease, it seems more plausible to infer that dogs have the same disease rather than bears, because rabbits and dogs are more alike in terms of known properties than are rabbits and bears. The Bayesian model also addresses typicality effects, under the assumption that according to prior beliefs, atypical categories, such as geese, would have a number of idiosyncratic features. Hence a premise asserting a novel property about geese would suggest that this property is likewise idiosyncratic and not to be widely projected. In contrast, prior beliefs about typical categories, such as bluejays, would indicate that they have many properties in common with other categories, hence a novel property of a typical category should generalize well to other categories.

The Bayesian model also addresses diversity effects, with a rationale similar to that for typicality effects. An argument with two similar premise categories, such as cows and horses in (5), could bring to mind a lot of idiosyncratic properties that are true just of large farm animals. Therefore a novel property of cows and horses might seem idiosyncratic to farm animals, and not applicable to other mammals. In contrast, an argument with two diverse premise categories, such as cows and ferrets in (6), could not bring to mind familiar idiosyncratic properties that are true of just these two animals. Instead, the prior hypotheses would be derived from known properties that are true of all mammals or all animals. Hence a novel property of cows and ferrets should generalize fairly broadly.

To give a final illustration of the Bayesian approach, when reasoning about the anatomical and behavioral properties in [Heit and Rubinstein, 1994], people could draw on prior knowledge about different known properties for the two kinds of properties. Reasoning about anatomical properties could cause people to rely on prior knowledge about familiar anatomical properties. In contrast, when reasoning about a behavioural property such as "prefers to feed at night," the prior hypotheses could be drawn from knowledge about familiar behavioural properties. These two different sources of prior knowledge would lead to different patterns of inductive inferences for the two kinds of properties.

URL: https://www.sciencedirect.com/science/article/pii/B9780444529367500148

Investigation

Christopher A. Hertig, in The Professional Protection Officer, 2010

Investigative Logic

Investigation is a logical, systematic process. Investigators use two types of logic: inductive reasoning and deductive reasoning. With inductive and deductive reasoning, a hypothesis is constructed about what has occurred. Facts are then gathered that either support or refute it. There may be only a few pieces of the puzzle available, so the investigator must search for other pieces. The investigator must look at what is most likely to have occurred so that investigative efforts are not wasted. Note that investigation often seeks to narrow the focus of the inquiry: instead of pursuing a vast array of possibilities, the investigator seeks to reduce them logically to a more manageable number. Inductive and deductive reasoning may aid in doing this. Inductive and deductive reasoning are also likely to be used in intelligence analysis. When assessing intelligence information, it may be necessary to construct a theory of the case.

In inductive and deductive reasoning, facts are collected. Next, a theory about what occurred is formulated. The pieces of the puzzle are obtained and put into place. The fictionalized Sherlock Holmes and the real-life Allan Pinkerton used deduction. Investigation of crimes, accidents, or work rule violations requires the use of inductive and deductive reasoning.

Each form of reasoning has its place. An investigative inquiry may begin with inductive reasoning and then become deductive. Investigators must always make sure that they are logical and objective. Investigators must never let their prejudices or preconceived notions interfere with their work. If exculpatory evidence—evidence that tends to disprove that the suspect committed the offense—is discovered, it cannot be ignored!

URL: https://www.sciencedirect.com/science/article/pii/B9781856177467000316

Principles and Methods for Data Science

Kalidas Yeturu, in Handbook of Statistics, 2020

1 Introduction

Artificial intelligence (AI) is a formal framework of system representation and reasoning that encompasses inductive and deductive reasoning methodologies for problem formulation and solution design (Fig. 1) (Russell and Norvig, 2003). The newly emerged stream of study, data science, refers to the application of techniques and tools from statistics, optimization, and logic theories to ordered or unordered collections of data. The field has acquired a number of key terms and the terminology is slowly growing, based on which technique is prominently used and what type of data it is applied to. Coming to the basic definitions, data is a time-stamped fact, albeit noisy, recorded by a sensor in the context of a process of a system under study. Each datum is effectively represented as a finite number sequence corresponding to a semantic in the physical world. The science aspect of data science is all about symbolic representation of data, mathematical and logical operations over the same, and relating the findings back to the physical world scenarios. On the implementation front, the engineering systems that store and retrieve data at scale, both in terms of size and frequency, are referred to as Big Data systems. The nature of the input data is closely tied to its context, field of study, use case scenario, and discipline. The word data is highly analogous to the word signal in the field of signal processing, thereby carrying a majority of techniques from the field of signal processing over to the discipline of data science, with one popular example being the machine learning methodology.

Fig. 1. Artificial intelligence framework—A real world problem or task is represented as a symbolic world problem or computer program composed of entities (e.g., data structures) and operations (e.g., algorithms). Encoding is the first step, identifying and formulating a problem statement. Experiments, inferences, and metrics are iterated at much lower costs in the symbolic world. The inferences are related back to the real world through decoding. A real world problem is itself recursively defined: it is decomposed into a conglomeration of multiple subproblems (which may themselves be simulations), and their results are synthesized for a holistic inference.

Some of the characteristics of the data and the noise thereof pertain to the set theoretic notions, the nature of the data sources, and popular domain categories. The characteristics include missing values, ordered or unordered lists of variable lengths, and homogeneous or heterogeneous groups of data elements. The sources that generate data include raw streams or carefully engineered feature transformations. Some of the popular domain categories of data include numeric, text, image, audio, and video. The word numeric is an umbrella term and it includes any type of communication between state-of-the-art computer systems. The techniques that operate on data include statistical treatment, optimization formulation, and automatic logical or symbolic manipulation. For the numeric type of data, the techniques essentially deduce a mapping function that optimally maps a given input to the output, each represented as number sequences, typically of different sizes. The statistical approaches used here mainly fall into probabilistic generative and discriminative methodologies. The optimization techniques used mainly involve discrete and continuous state space representation and error minimization. The logical manipulation techniques involve determining rules and deducing steps to prove or disprove assertions.
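
As a small, self-contained illustration of deducing such a mapping function with a discriminative statistical approach (the data are synthetic and scikit-learn is assumed to be available):

```python
# Fit a discriminative model that maps numeric input sequences to outputs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                  # input number sequences
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # target labels to be predicted

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)   # parameters found by error minimization
print("held-out accuracy:", model.score(X_te, y_te))
```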

This chapter, as it belongs in a broader theme of practices and principles for data science, elucidates the mapping process from a given problem statement to a quantified assertion driven by data. The chapter focuses on aspects of machine learning algorithms, applications, and practices. The mapping process first identifies characteristics of the data and the noise, followed by defining and applying mathematical operations that operate on the data, and finally relating the findings back to the given problem scenario and its context. Most of the popular and much-needed categories of techniques today are covered in good depth from a data science perspective within the scope of this chapter, while domain-specific engineering aspects are referred to appropriate external content. Machine learning approaches covered here include discriminative and generative modeling methodologies such as supervised, unsupervised, and deep learning algorithms. The data characterization topics include practices for handling missing values, resolving class imbalance, vector encoding, and data transformations. The supervised learning algorithms covered include decision trees (DT), logistic regression, ensembles of classifiers including random forests and gradient boosted trees, neural networks, support vector machines (SVM), the naive Bayes classifier, and Bayesian logistic regression. The chapter includes standard model accuracy metrics, the ROC curve, the bias-variance trade-off, cross validation (CV), and regularization aspects. Deep learning algorithms covered include autoencoders, CNN, RNN, and LSTM methods. Unsupervised mechanisms such as different types of clustering, the special category of reinforcement learning methodologies, and learning using EM in probabilistic generative models, including GMM, HMM, and LDA, are also discussed. As industry places heavy emphasis on model maintenance, techniques involving setting alarms on data distribution differences and retraining, transfer learning, and active learning methodologies are described. A cursory description of symbolic representation and reasoning in AI, including state space representation and search, and first-order logic—unification and deduction—is also presented, with references to external texts handling the topic in full depth. The workings of the algorithms are also explained through case studies on popular data sets of today, with references to code implementations, libraries, and frameworks. A brief introduction to currently researched topics, including automatic machine learning model building, model explainability, and visualization, is also covered in the chapter. Finally, the chapter concludes with an overview of the Big Data frameworks for distributed data warehousing systems available today, with examples of how data science algorithms use the frameworks to work at scale.

URL: https://www.sciencedirect.com/science/article/pii/S0169716120300225

Number Theory, Elementary

Robert L. Page, in Encyclopedia of Physical Science and Technology (Third Edition), 2003

IV.A Conjectures

When statements concerning number patterns are proved to be true, they are called theorems. Before that, they are conjectures, nothing more than educated guesses based on inductive reasoning applied to a finite number of particular cases. Conjectures, then, fall in a kind of no-man's-land between the list of facts that have been shown to be true and the host of statements that have been shown to be false. They remain there until they are proven to be true theorems or until a counterexample shows them to be false.

Some conjectures enjoy long lives before they are disproved. For example, Fermat's conjecture that numbers of the form $2^{2^t} + 1$ are always prime survived for a hundred years before it died at the hands of Euler. Many other patterns are disposed of almost as quickly as they appear. As an example, consider some patterns which seem to generate primes. For $p_1^2 + (p_1 + 1)p_2$, where $p_1$ and $p_2$ are consecutive primes, we have

$2^2 + 3 \cdot 3 = 13$
$3^2 + 4 \cdot 5 = 29$
$5^2 + 6 \cdot 7 = 67$
$7^2 + 8 \cdot 11 = 137$
$11^2 + 12 \cdot 13 = 277,$

all of which are prime. However, $13^2 + 14(17) = 407 = 11(37)$.
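
Such patterns are easy to test mechanically; a short sketch that checks the formula above over consecutive primes (and stops at the first composite value) is:

```python
# Check p^2 + (p + 1)*q over consecutive primes p, q.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

primes = [n for n in range(2, 100) if is_prime(n)]
for p, q in zip(primes, primes[1:]):
    value = p * p + (p + 1) * q
    print(p, q, value, "prime" if is_prime(value) else "composite")
    if not is_prime(value):
        break   # first failure: p = 13, q = 17 gives 407 = 11 * 37
```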

Again, consider $p_1^{p_2} + p_1 + p_2$:

$2^3 + 2 + 3 = 13$
$3^5 + 3 + 5 = 251$
$5^7 + 5 + 7 = 78{,}137,$

all of which are prime. Numbers in this sequence grow in size rapidly. Thus, $7^{11} + 7 + 11 = 1{,}977{,}326{,}761$, which is harder to check for primality by use of a pocket calculator than its predecessors. Of course, it is easily handled by a modern computer. Even this is not necessary, however, since the next term $11^{13} + 11 + 13 = 11^{13} + 24$ is clearly divisible by 5.

The numbers 31, 331, 3331, and 33,331 are all primes. In fact, the pattern continues to yield primes until we reach 333,333,331, which is divisible by 17.

Another interesting pattern is

$3! - 2! + 1! = 5$
$4! - 3! + 2! - 1! = 19$
$5! - 4! + 3! - 2! + 1! = 101$
$6! - 5! + 4! - 3! + 2! - 1! = 619$
$7! - 6! + 5! - 4! + 3! - 2! + 1! = 4421$
$8! - 7! + 6! - 5! + 4! - 3! + 2! - 1! = 35{,}899,$

which yields the primes listed. Unfortunately, the next step gives 326,981, which is divisible by 79.
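
The same sort of mechanical check applies to the alternating factorial sums; a brief sketch is below.

```python
# Check the alternating factorial sums k! - (k-1)! + (k-2)! - ... +/- 1!.
from math import factorial

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def alternating_factorial_sum(k):
    return sum((-1) ** i * factorial(k - i) for i in range(k))

for k in range(3, 10):
    value = alternating_factorial_sum(k)
    print(k, value, "prime" if is_prime(value) else "composite")
# The run of primes 5, 19, 101, 619, 4421, 35899 ends at 326,981 = 79 * 4139.
```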

The following pattern gives primes for many steps:

$41 + 2 = 43$
$43 + 4 = 47$
$47 + 6 = 53$
$53 + 8 = 61$
$61 + 10 = 71$
$71 + 12 = 83.$

In fact, it will give primes for another 33 steps before the composite number $1681 = 41^2$ appears. The pattern is made up of numbers obtained from the formula $x^2 + x + 41$, which we have previously discussed.
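
The link between the additive pattern and the formula is worth making explicit; the short derivation below is a standard observation rather than part of the original text.

```latex
% With f(x) = x^2 + x + 41, consecutive values differ by successive even
% numbers, which is exactly the +2, +4, +6, ... pattern shown above:
\[ f(x+1) - f(x) = \bigl[(x+1)^2 + (x+1) + 41\bigr] - \bigl[x^2 + x + 41\bigr] = 2(x+1). \]
% Starting from f(0) = 41, the first composite value is
\[ f(40) = 40^2 + 40 + 41 = 1681 = 41^2. \]
```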

Finally, consider the pattern of integers in which each succeeding row is obtained by inserting the number of the row, n, between each pair of integers from row n − 1 whose sum is n:

n    Row              Number of terms
1    11               2
2    121              3
3    13231            5
4    1432341          7
5    15435253451      11
6    1654352534561    13

This pattern of a prime number of terms in each row fails for row 10, which contains 33 terms. If we count the number of digits in each row, the tenth row contains 37 digits, a prime number. However, the 11th row contains 57 digits, a composite number.

A recent conjecture is due to Reo F. Fortune, who examined the pattern

$2 + 1 = 3$                                              $5 - 2 = 3$
$2 \cdot 3 + 1 = 7$                                      $11 - 6 = 5$
$2 \cdot 3 \cdot 5 + 1 = 31$                             $37 - 30 = 7$
$2 \cdot 3 \cdot 5 \cdot 7 + 1 = 211$                    $223 - 210 = 13$
$2 \cdot 3 \cdot 5 \cdot 7 \cdot 11 + 1 = 2311$          $2333 - 2310 = 23$
$2 \cdot 3 \cdot 5 \cdot 7 \cdot 11 \cdot 13 + 1 = 30{,}031$    $30{,}047 - 30{,}030 = 17$

The first five sums in the left-hand column are primes, but $30{,}031 = 59(509)$. However, for each sum if we find the next larger prime and subtract from it the product of consecutive primes given in that row, the result is prime. Fortune's conjecture is that this pattern always gives primes. Many feel that the conjecture is true but proving it appears to be a difficult task.
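
The pattern extends mechanically to further rows; a small sketch (assuming SymPy is available for the prime utilities) is:

```python
# Check Fortune's pattern for the first few products of consecutive primes.
from sympy import isprime, nextprime, prime  # SymPy assumed available

for k in range(1, 9):
    primorial = 1
    for i in range(1, k + 1):
        primorial *= prime(i)                # product of the first k primes
    next_larger = nextprime(primorial + 1)   # next prime above the sum
    fortunate = next_larger - primorial
    print(k, primorial + 1, next_larger, fortunate, isprime(fortunate))
```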

The question of whether there exist an infinite number of Mersenne primes has been unsolved for approximately 300 years, as has the companion question of the existence of an infinite number of even perfect numbers. To date, only 28   Mersenne primes are known, so the question is far from resolved.

Catalan's Conjecture, due to Eugène Charles Catalan (1814–1894), states that 8 and 9 are the only positive consecutive integral powers of integers.

In general, this suggests that $x^m - y^n = 1$, where x and y are integers greater than 0 and m and n are integers greater than 1, has as its only solution $x = 3$, $y = 2$, $m = 2$, and $n = 3$. Since m and n vary, as well as x and y, the equation above is a Diophantine equation that is not in polynomial form.

In 1974, Robert Tijdeman proved that there exists a constant k with the property that all powers of integers which equal consecutive integers are less than k. Thus, we know that there can be at most a finite number of such pairs of consecutive powers, although the work of calculating k seems too formidable to allow a definite value to be determined at present.

In 1876, Catalan also examined the sequence $P_0 = 2$ and $P_{n+1} = 2^{P_n} - 1$. Thus,

$P_1 = 2^{P_0} - 1 = 2^2 - 1 = 3$
$P_2 = 2^{P_1} - 1 = 2^3 - 1 = 7$
$P_3 = 2^{P_2} - 1 = 2^7 - 1 = 127$
$P_4 = 2^{P_3} - 1 = 2^{127} - 1.$

He speculated that $P_n$ is prime for n = 1, 2, 3, and 4, all of which were subsequently verified. $P_5$ seems to be undecidable since it has approximately $10^{38}$ digits.
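
With a modern primality test the first four terms are quick to confirm (SymPy assumed available); $P_5$ itself is far too large even to write down, let alone test.

```python
# Catalan's sequence P0 = 2, P(n+1) = 2**Pn - 1: confirm P1..P4 are prime.
from sympy import isprime  # SymPy assumed available

p = 2
for n in range(1, 5):
    p = 2 ** p - 1
    label = str(p) if p < 10 ** 6 else "2**127 - 1"
    print(f"P{n} = {label}: prime -> {isprime(p)}")
# P5 = 2**(2**127 - 1) - 1 has roughly 10**38 digits, beyond any known test.
```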

URL: https://www.sciencedirect.com/science/article/pii/B0122274105005032