Not to be confused with ``the graph of a function'' or with ``graph'' as in
``graphic,'' *combinatorial graphs* consist of a set of *vertices*
(non-empty, and frequently finite), a set of *edges*, and an incidence
relation between edges and unordered pairs of vertices. Two vertices incident
with the same edge are called *adjacent*. If the two vertices coincide,
the edge is a ``loop,'' and it is also possible for more than one edge to be
incident with the same pair of vertices (such edges are called ``parallel'').
Frequently, loops and parallel edges are excluded, and such graphs are called
``Michigan'' graphs in honor of the many fine graph theorists who have worked
there. In particular, the distinction is largely due to Frank Harary, who is one
of the founders of the field and whose book, *Graph Theory*,
Addison-Wesley, Reading, MA, 1969, helped to standardize the terminology.
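As a minimal sketch of these definitions (in Python, with names of my own choosing), the incidence relation can be stored explicitly, which leaves room for both loops and parallel edges:

```python
# A multigraph as an explicit incidence relation: each edge is assigned an
# unordered pair of vertices (a one-element set in the case of a loop).
vertices = {1, 2, 3}
edges = {"e1": frozenset({1, 2}),
         "e2": frozenset({1, 2}),   # parallel to e1
         "e3": frozenset({3})}      # a loop at vertex 3

def adjacent(u, v):
    """Two vertices are adjacent if some edge is incident with both."""
    return any(pair == frozenset({u, v}) for pair in edges.values())

print(adjacent(1, 2), adjacent(2, 3))  # True False
```

A ``Michigan'' graph would forbid e2 and e3, at which point the edge names become redundant and one may simply keep a set of vertex pairs.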

I like graphs since they are (1) easy to draw (at least, when they are finite ;-) and (2) already sufficient to model many very interesting theoretical and practical phenomena. From electrical networks to hydrocarbon molecules, from sociology to quantum electrodynamics, graphs have been popping up with increasing frequency during the 19th and 20th centuries, and they are now viewed as a fundamental concept. Interested readers will find the Journal of Graph Theory, the Journal of Combinatorial Theory, Combinatorica, the Bulletin of the Institute of Combinatorics and its Applications, and a host of other mathematically oriented publications. Moreover, graphs with additional structure constitute a large portion of Operations Research, Computer Science and Neural Networks.

As an example of the ubiquitousness of graphs, let us consider the problem of modeling interpersonal relationships. One might represent the people as vertices, with an edge joining two distinct vertices if the two individuals are acquainted with each other. Otherwise, we shall assume they do not know each other. (This, as usual, simplifies the ambiguities of life, but that is typical of all modeling.) The simplest example of ``Ramsey theory'' is the fact that if the number of people is at least 6, there must either exist 3 people, each of whom knows the other two, or else 3 people, no two of whom know each other. Another result in this area, called Turán's theorem, basically says that once the number of acquaintanceships is sufficiently large compared to the total number of people, there must exist a subset of individuals of some particular cardinality for which each pair are acquainted.
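The 6-person fact is small enough to check exhaustively: there are only 2^15 ways to assign acquaintance among the 15 pairs, and every one of them contains such a trio. A brute-force sketch (all names are mine):

```python
from itertools import combinations, product

# Among any 6 people there are 3 mutual acquaintances or 3 mutual strangers.
edges = list(combinations(range(6), 2))    # the 15 pairs of people
triples = list(combinations(range(6), 3))  # the 20 possible trios

def has_mono_trio(knows):
    """Does some trio consist of mutual acquaintances or mutual strangers?"""
    for a, b, c in triples:
        e = [knows[(a, b)], knows[(a, c)], knows[(b, c)]]
        if all(e) or not any(e):
            return True
    return False

# Check every possible acquaintance pattern on 6 people.
print(all(has_mono_trio(dict(zip(edges, bits)))
          for bits in product([False, True], repeat=len(edges))))  # True
```

The same search on 5 people finds a counterexample (a 5-cycle of acquaintance), which is why 6 is the exact threshold.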

One interesting area of research involves decompositions of graphs into
subgraphs of various types and in various ways. For instance, one can decompose
the edges of any graph G into a family of subgraphs so that each subgraph
consists of a vertex-disjoint union of paths. The smallest cardinality, la(G),
of such a family is called the *linear arboricity* of G. Akiyama, Exoo
and Harary conjectured that if Delta(G) denotes the maximum degree of G, then
la(G) is not larger than {(1+Delta(G))/2}, where {a} is the least integer greater than or equal to a;
so if Delta were 20, one would have 11 as an upper bound on the linear
arboricity. Plainly la(G) can't be less than {Delta(G)/2} since each
disjoint-paths-component uses up at most two of the edges at any fixed vertex.
Thus, the AEH conjecture says that the worst case differs by at most one from
the best. Many of the natural problems of this type are still unsolved though
there are many beautiful theorems as well.
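The two bounds bracketing la(G) are simple to compute; a quick sketch (the function name is mine):

```python
import math

def linear_arboricity_bounds(delta):
    """Lower bound and conjectured (Akiyama-Exoo-Harary) upper bound on la(G)
    for a graph of maximum degree delta."""
    lower = math.ceil(delta / 2)        # each linear forest uses at most 2
                                        # edges at any fixed vertex
    upper = math.ceil((1 + delta) / 2)  # the AEH conjectured bound
    return lower, upper

# Delta = 20 gives the example from the text: lower bound 10, upper bound 11.
print(linear_arboricity_bounds(20))  # (10, 11)
```

Note that for odd delta the two bounds coincide, so the conjectured gap of one arises only for even maximum degree.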

The good news about graph theory arguments is that they are frequently elementary (i.e., they stand alone, without dependence on elaborate theories from other disciplines); the bad news is that these arguments can therefore be extremely complex and subtle. As a result, it is not unknown for errors to be found in established ``proofs,'' sometimes many years after publication. This was the case in my own first step in the discipline, when I found (independently, as did Gerhard Ringel and Richard Guy) an error in a crossing number argument which had appeared more than ten years earlier in Fundamenta Mathematicae. Even today, three decades later, we do not know whether Zarankiewicz's claim regarding the crossing number of the complete bipartite graph is true. For a bit more on this topic, see remarks on crossing numbers.

The *chromatic number* of a graph is the smallest number of distinct
labels (usually called "colors") with which the vertices of the graph can be
colored so that no two adjacent vertices receive the same color. Of course, if
the graph is finite and has no loops, then the chromatic number exists but in
general it is a very difficult problem to determine it for a large class of
graphs, or even for a specific graph which has a fairly large number of
vertices. The famous Four Color Theorem (see, e.g., my book with T. L. Saaty)
asserts that the chromatic number of any planar graph is at most 4 but the only
known method of proof requires the use of a computer to enumerate a large number
of special cases. However, a rather elegant formula due to Heawood provides an
upper bound for all surfaces other than the plane, and the work of Ringel and
Youngs (et al.) has shown that the Heawood bound is achieved on every such
surface, with the single exception of the Klein bottle. For instance, the
chromatic number of any toroidal graph is at most 7, and the bound is attained
by the complete graph on 7 vertices, which embeds on the torus.
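For the orientable surface of genus g >= 1, Heawood's formula gives the bound floor((7 + sqrt(1 + 48g))/2), and a brute-force check on the complete graph K_7 (which embeds on the torus) confirms that the bound 7 is met there. A sketch, with names of my own choosing:

```python
import math
from itertools import combinations, product

def heawood(genus):
    """Heawood bound for the orientable surface of the given genus >= 1."""
    return (7 + math.isqrt(1 + 48 * genus)) // 2

def chromatic_number(n, edges):
    """Smallest k admitting a proper k-coloring (brute force; tiny graphs only)."""
    for k in range(1, n + 1):
        for coloring in product(range(k), repeat=n):
            if all(coloring[u] != coloring[v] for u, v in edges):
                return k

# K_7 is toroidal and needs exactly 7 colors, matching heawood(1).
k7 = list(combinations(range(7), 2))
print(heawood(1), chromatic_number(7, k7))  # 7 7
```

Using integer square root is safe here: taking the floor inside the formula does not change the value of the outer floor.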

It is sometimes interesting to consider graphs which have infinitely many vertices. For example, suppose we form a graph whose vertices are the points of the plane in which two vertices are adjacent if and only if the corresponding points have distance exactly 1. It is known that the resulting graph has chromatic number at least 4 and at most 7, but for decades no one has been able to determine the value. See, e.g., Croft, Falconer and Guy, _Unsolved Problems in Geometry_, Springer-Verlag, NY.
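The lower bound of 4 is witnessed by a small unit-distance graph, the Moser spindle (7 vertices, 11 edges, all of length 1 in the plane). Treating it as an abstract graph, a brute-force search confirms it cannot be properly 3-colored; the labeling below is one standard choice, and the search code is mine:

```python
from itertools import product

# Moser spindle: two rhombi (each a pair of equilateral triangles) sharing
# vertex 0, with their far tips 3 and 6 joined by one more unit edge.
spindle = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3),
           (0, 4), (0, 5), (4, 5), (4, 6), (5, 6),
           (3, 6)]

def colorable(n, edges, k):
    """Does the n-vertex graph admit a proper k-coloring? (Brute force.)"""
    return any(all(c[u] != c[v] for u, v in edges)
               for c in product(range(k), repeat=n))

print(colorable(7, spindle, 3), colorable(7, spindle, 4))  # False True
```

The reason 3 colors fail is easy to see by hand: in each rhombus the two tips are forced to share a color, so vertices 3 and 6 would both have to match vertex 0, yet 3 and 6 are adjacent.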