My first discovery: *Winnipeg is not that bad!* In fact I loved it. Example: here is the view from the window of my room in the university residence:

Not bad, right? A very beautiful sight to wake up to in the morning. (From Canadians, I had gotten the impression that Winnipeg is nothing to look forward to. People of the world: don't listen to Canadians when they say something bad about any place that just doesn't quite live up to the standard of Montreal, Vancouver, or Banff.) Here is what you see if you look from the other side of the building:

The conference was very broad and diverse in subjects, as it brings together people working in Operator Theory as well as in Operator Algebras (and neither of these fields is very well defined or compact). I have mixed feelings about mixed conferences. But since I haven’t really decided what I myself want to be working on when I grow up, I think they work for me.

I was invited to give a series of three talks that I devoted to noncommutative function theory and noncommutative convexity. My second talk was about my joint work with Guy Salomon and Eli Shamovich on the isomorphism problem for algebras of bounded nc functions on nc varieties, which, incidentally, we posted on the arxiv on the day that the conference began. May I invite you to read the introduction to that paper? (If you like it, also take a look at the previous post.)

On this page you can find the schedule, abstracts, and slides of most of the talks, including mine. Some of the best talks were (as happens so often) whiteboard talks, so you won't find them there. For example, the beautiful series by Aaron Tikuisis was given like that and now it is gone (George Elliott remarked that a survey of the advances Tikuisis described would be very desirable, and I agree).

Aaron Tikuisis gave a beautiful series of talks on the rather recent developments in the classification theory of separable-unital-nuclear-simple C*-algebras (henceforth SUNS C*-algebras; the algebras are also assumed infinite dimensional, but let's make that a standing hypothesis instead of complicating the acronym). I think it is fair to regard his series as the most important talk(s) of this conference. In my opinion the work (due to many mathematicians, including himself) that Tikuisis presented can be described as the resolution of the Elliott conjecture; I am sure that some people will disagree with the last statement, including George Elliott himself.

Given a SUNS C*-algebra $A$, one defines its *Elliott invariant*, $\mathrm{Ell}(A)$, to be the K-theory of $A$, together with some additional data: the image of the unit of $A$ in $K_0(A)$, the space $T(A)$ of traces of $A$, and the pairing between the traces and K-theory. It is clear, once one knows a little K-theory, that if $A$ and $B$ are isomorphic C*-algebras, then their Elliott invariants are isomorphic, in the sense that $K_i(A)$ is isomorphic to $K_i(B)$ for $i = 0, 1$ (in a unit preserving way), and that $T(A)$ is affinely homeomorphic with $T(B)$ in a way that preserves the pairing with the K-groups. Thus, if two C*-algebras are known to have different K-groups, or different Elliott invariants, then these C*-algebras are not isomorphic. This observation was used to classify AF algebras and irrational rotation algebras (speaking of which, I cannot help but recommend my friend Claude Schochet's recent "Notices" article on the irrational rotation algebras).

In the 1990s, George Elliott made the conjecture that two SUNS C*-algebras $A$ and $B$ are $*$-isomorphic if and only if $\mathrm{Ell}(A) \cong \mathrm{Ell}(B)$. This conjecture became one of the most important open problems in the theory of operator algebras, and arguably **THE** most important open problem in C*-algebras. Dozens of people worked on it. Many classes of C*-algebras were shown to be *classifiable* – meaning that they satisfy the Elliott conjecture – but eventually the conjecture was shown to be false in 2002 by Rordam, who built on earlier work by Villadsen.

Now, what does the community do when a conjecture turns out to be false? There are basically four things to do:

- Work on something else.
- Start classifying “clouds” of C*-algebras, for example, show that crossed products of a certain type are classifiable within this family (i.e. two algebras within a specified class are isomorphic iff their Elliott invariants are), etc.
- Make the class of algebras you are trying to classify smaller, i.e., add assumptions.
- Make the invariant bigger. For example, $K_0$ was not enough, so people added $K_1$. When that turned out to be not enough, people started looking at traces. So if the current invariant is not enough, maybe add more things, the natural candidate (I am told) being the *Cuntz semigroup*.

The choice of what to do is a matter of personal taste, point of view, and also ability. George Elliott has made the point that choosing 4 requires one to develop new techniques, whereas choosing 3 is focused around the existing techniques: one makes the class of C*-algebras smaller until the currently known techniques can tackle it.

Elliott's objections notwithstanding, the impression that I got from the lecture series was that most main forces in the field agreed that the third adaptation above was the way to go. That is, they tried to prove the conjecture for a slightly more restricted class of algebras than SUNS. Over the past 15 years or so (or a bit more), they identified an additional condition – let's call it Condition Z – that, once added to the standard SUNS assumptions, allows classification. And it's not that adding the additional assumption made things really easy, it only made the proof *possible* – it still took first class work to even identify what assumption needs to be added, and more work to prove that with this additional assumption the conjecture holds. They proved:

**Theorem (lots of people):** *If $A$ and $B$ are infinite dimensional SUNS C*-algebras which satisfy the Universal Coefficient Theorem and the additional Condition Z, then $A \cong B$ if and only if $\mathrm{Ell}(A) \cong \mathrm{Ell}(B)$.*

I consider this the best resolution of the Elliott conjecture possible, given that the conjecture is false!

A major part of Aaron's talks was devoted to explaining what this additional Condition Z is. (What the Universal Coefficient Theorem is, though, was not explained, and, if I understand correctly, it is in fact not known whether it holds automatically for such algebras.) In fact, there are two conditions that one can take for "Condition Z": (i) finite nuclear dimension, and (ii) $\mathcal{Z}$-stability. The notion of nuclear dimension corresponds to the usual notion of dimension (of the spectrum) in the commutative case. $\mathcal{Z}$-stability means that the algebra in question absorbs the *Jiang-Su algebra* $\mathcal{Z}$ under tensor products in a very strong sense. Following a very long tradition in talks about the Jiang-Su algebra, Aaron did not define the Jiang-Su algebra. This is not so bad, since he did explain in detail what finite nuclear dimension means, and said that $\mathcal{Z}$-stability and finite nuclear dimension are equivalent for infinite dimensional SUNS C*-algebras (this is the *Toms-Winter conjecture*).

What was very nice about Aaron's series of talks was that he gave von Neumann algebraic analogues of the theorems, conditions, and results, and explained how the C*-algebra people got concrete inspiration from the corresponding results *and proofs* in von Neumann algebras. In particular he showed the parallels to Connes's theorem that every injective type $II_1$ factor with separable predual is isomorphic to the hyperfinite $II_1$ factor. He made the point that separable predual in the von Neumann algebra world corresponds to separability for C*-algebras, hyperfiniteness corresponds to finite nuclear dimension, and being a factor corresponds to being a simple C*-algebra. He then sketched the lines of the proof of the part of Connes's theorem that says that injectivity of a $II_1$ factor $M$ implies hyperfiniteness of $M$ (which by Murray and von Neumann's work implies that $M$ is the hyperfinite $II_1$ factor). After that he gave a similar sketch for the proof that $\mathcal{Z}$-stability implies finite nuclear dimension.

This lecture series was very inspiring, and I think that the organizers made an excellent choice in inviting Tikuisis to give it.

Christopher Ramsey gave a short talk on "residually finite dimensional (RFD) operator algebras". The talk was based on the paper that Chris and Raphael Clouatre recently posted on the arxiv. The authors take the notion of residual finite dimensionality, which is quite well studied and understood in the case of C*-algebras, and develop it in the setting of nonselfadjoint operator algebras. It is worth noting that even a finite dimensional nonselfadjoint operator algebra might fail to be representable as a subalgebra of a matrix algebra. So it is worth specifying that an operator algebra is said to be RFD if it can be completely isometrically embedded in a direct sum of matrix algebras (and so it is not immediate that a finite dimensional algebra is RFD, though they prove that it is).

What I want to share here is a neat and simple observation that Chris and Raphael made, which seemed to have been overlooked by the community.

When we study operator algebras, there are several natural relations by which to classify them: completely isometric isomorphism, unitary equivalence, completely bounded isomorphism, and similarity. Clearly, unitary equivalence implies completely isometric isomorphism, and similarity implies completely bounded isomorphism. The converses do not hold. However, in practice, operator algebras are often shown to be completely boundedly isomorphic by exhibiting a similarity between them (this is what happens, for example, in my recent paper with Guy and Eli). That happens because we are often interested in the "multiplicity free case".

[**Added in June 11, following Yemon's comment:** We say that $A \subseteq B(H)$ is *similar* to $B \subseteq B(H)$ if there is an invertible $T \in B(H)$ such that $B = T A T^{-1}$. Likewise, two maps $\varphi$ and $\psi$ are said to be *similar* if $\psi = T \varphi(\cdot) T^{-1}$ for some invertible $T$.]

Raphael and Chris observed that, in fact, completely bounded isomorphism is the same as similarity, modulo completely isometric isomorphisms. To be precise, they proved:

**Theorem (the Clouatre-Ramsey trick):** *If $A$ and $B$ are completely boundedly isomorphic, then $A$ and $B$ are both completely isometrically isomorphic to algebras that are similar.*

**Proof:** Suppose that $A \subseteq B(H)$ and $B \subseteq B(K)$, and let $\varphi \colon A \to B$ be a c.b. isomorphism. By Paulsen's theorem, $\varphi$ is similar to a completely contractive isomorphism $\varphi_1$, so we get that the map $\varphi$ decomposes as a product of a complete isometry and a similarity. Likewise, the completely bounded isomorphism $\varphi^{-1}$ is similar to a complete contraction $\varphi_2$, and we have that $\varphi^{-1}$ decomposes as the product of a similarity and a complete isometry. Since the composition of all these maps is $\varphi$, the proof is complete.

Our goal in this post is to give several answers to this question and its generalisations. In order to obtain elegant answers, we work over the complex field (e.g., there are many polynomials, such as $z^2 + 1$, that have no real zeros; the fact that they don't have real zeros tells us something about these polynomials, but there is no way to "recover" these polynomials from their non-existing zeros). We will write $\mathbb{C}[z]$ for the algebra of polynomials in one complex variable with complex coefficients, and consider a polynomial as a function of the complex variable $z$. We will also write $\mathbb{C}[z_1, \ldots, z_d]$ for the algebra of polynomials in $d$ (commuting) variables, and think of a polynomial – at least initially – as a function of the variable $z = (z_1, \ldots, z_d) \in \mathbb{C}^d$.

Let us begin by recalling that by the Fundamental Theorem of Algebra, every (one variable) polynomial decomposes into a product of linear factors. Thus, if we know the zeros **including their multiplicities**, then we can determine the polynomial up to a multiplicative factor. Moreover, if we know that the zeros of some polynomial $p$ are $\lambda_1, \ldots, \lambda_k$, then we know that $p$ must have the form

(*) $\quad p(z) = c (z - \lambda_1)^{n_1} (z - \lambda_2)^{n_2} \cdots (z - \lambda_k)^{n_k}$,

where $c$ is a nonzero constant and the exponents $n_1, \ldots, n_k$ can, in principle, be any positive integers.

Let us reformulate the above observation in a slightly different language, which generalizes well to the multivariable setting. If $p$ is a polynomial, we write $Z(p) = \{z \in \mathbb{C} : p(z) = 0\}$ for its zero set.

Every polynomial $p$ generates a principal ideal $(p) = \{fp : f \in \mathbb{C}[z]\}$. Conversely, every ideal in $\mathbb{C}[z]$ is principal. For an ideal $I \subseteq \mathbb{C}[z]$ we write

$Z(I) = \{z \in \mathbb{C} : p(z) = 0 \text{ for all } p \in I\}.$

If $I = (p)$, then $Z(I) = Z(p)$. Now, if we begin with a polynomial $p$ as in (*), and we are given $q$ such that $Z(q) = Z(p)$, what can we say about $q$? Well, if we knew that the zeros of $q$ have the same multiplicities as those of $p$, then we would know that $q = cp$ for some nonzero scalar $c$, and in particular we would know that $(q) = (p)$ (and vice versa, of course). However, in general, $Z(q) = Z(p)$ only implies that $q$ belongs to the ideal generated by $(z - \lambda_1) \cdots (z - \lambda_k)$, which is usually larger than $(p)$. Note that if $q \in (p)$, then $Z(q) \supseteq Z(p)$, because $q$ is clearly equal to the product of $p$ and some other polynomial.
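Since ideal membership in $\mathbb{C}[z]$ is just divisibility, this gap between zero sets and ideals can be checked mechanically. Here is a quick sympy sanity check (my own illustration, with the arbitrarily chosen polynomials $p = z^2(z-1)$ and $q = z(z-1)$, which have the same zero set but different multiplicities):

```python
import sympy as sp

z = sp.symbols("z")

# p has a double zero at 0 and a simple zero at 1
p = z**2 * (z - 1)
# q has the same zero set {0, 1}, but with smaller multiplicities
q = z * (z - 1)

# q is NOT in the ideal (p): the remainder on division by p is nonzero
print(sp.rem(q, p, z))                  # z**2 - z, so p does not divide q
# ... but q**2 IS in (p): the multiplicities are now large enough
print(sp.rem(sp.expand(q**2), p, z))    # 0
```

This is exactly the phenomenon that the radical (introduced below for several variables) is designed to capture: $q \notin (p)$, yet a power of $q$ lies in $(p)$.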

Now let us consider the much richer case of polynomials in several commuting variables. For brevity, let us write $z$ for the vector variable $(z_1, \ldots, z_d)$, and let us write $\mathbb{C}[z] = \mathbb{C}[z_1, \ldots, z_d]$. Since this algebra is **not** a principal ideal domain (that's an easy exercise), it turns out to be more appropriate to talk about ideals rather than single polynomials. Let us define the zero locus of an ideal $I \subseteq \mathbb{C}[z]$ similarly as above:

$Z(I) = \{z \in \mathbb{C}^d : p(z) = 0 \text{ for all } p \in I\}.$

We also introduce the following notation: given a subset $S \subseteq \mathbb{C}^d$, we write

$I(S) = \{p \in \mathbb{C}[z] : p(z) = 0 \text{ for all } z \in S\}.$

Note that $I(S)$ is always an ideal.

The question now becomes: to what extent can we recover $I$ from $Z(I)$? A slightly different but related question is: what is the gap between $I$ and $I(Z(I))$? We know already from the one variable case that we cannot hope to fully recover an ideal from its zero locus, but it turns out that a rather satisfactory answer can be given.

Suppose that $q$ is a polynomial which is not necessarily contained in $I$, but that $q^n \in I$ for some $n$ (think, for example, of $I = (z^2)$ and $q = z$). Then since $q^n \in I$, we also have that $q^n$ vanishes on $Z(I)$, hence $q$ itself vanishes on $Z(I)$, so $q \in I(Z(I))$. So the ideal $I(Z(I))$ contains at least all polynomials $q$ such that $q^n \in I$ for some $n$.

**Definition:** Let $I \subseteq \mathbb{C}[z]$ be an ideal. The **radical** of $I$ is the ideal

$\sqrt{I} = \mathrm{Rad}(I) = \{q \in \mathbb{C}[z] : \text{there exists some } n \text{ such that } q^n \in I\}.$

(On the left hand side, there are two different commonly used notations for the radical).

**Exercise:** The radical of an ideal is an ideal.

**Theorem (Hilbert's Nullstellensatz):** *For every ideal $I \subseteq \mathbb{C}[z_1, \ldots, z_d]$,*

$I(Z(I)) = \sqrt{I}.$

Nullstellensatz means "zero locus theorem" in German, and we can all agree that this is an appropriate name for this theorem. We shall not prove it; it is usually proved in a first or second graduate course in commutative algebra. It is a beautiful theorem, indeed, but it is not **perfect**. Below we shall obtain a perfect Nullstellensatz, that is, one in which the ideal is completely recovered from its zeros, with no need to take a radical. Of course, we will need to change the meaning of "zeros".
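To see the radical at work in a concrete several-variable case, here is a small sympy illustration (my own, using the arbitrarily chosen ideal $I = (x^2, y)$ in $\mathbb{C}[x, y]$, whose zero locus is the single point $(0,0)$): the polynomial $x$ vanishes on $Z(I)$ and lies in $\sqrt{I}$, but not in $I$.

```python
import sympy as sp

x, y = sp.symbols("x y")

# Groebner bases let us test ideal membership: the remainder of f on
# division by a Groebner basis of I is 0 exactly when f is in I.
G = sp.groebner([x**2, y], x, y, order="lex")

# x vanishes on Z(I) = {(0, 0)}, yet x is not in I ...
print(G.reduce(x))       # quotients and remainder; the remainder is x
# ... while x**2 is in I, i.e. x is in the radical sqrt(I) = (x, y)
print(G.reduce(x**2))    # remainder 0
```

So $I(Z(I)) = (x, y) = \sqrt{I}$ is strictly larger than $I$, exactly as the theorem predicts.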

My recent work in operator algebras and noncommutative analysis has led me, together with my collaborators Guy Salomon and Eli Shamovich, to discover another Nullstellensatz (actually, we have a couple of Nullstellensatze, but I'll tell you only about one). This result has already been known to some algebraists in one form or another – after we proved it, we found that it can be dug out of a paper of Eisenbud and Hochster – but does not seem to be well known. I will write the result and its proof in a language that I (and therefore, hopefully, anyone who's had some graduate commutative algebra) can understand and appreciate.

Let $M_n^d$ denote the set of **all** $d$-tuples of $n \times n$ matrices. We let $M^d = \bigsqcup_{n=1}^{\infty} M_n^d$ be the disjoint union of all the $M_n^d$, where $n$ runs from $1$ to $\infty$. That is, we are looking at all $d$-tuples of matrices of all sizes. This set is referred to in some places as "the noncommutative universe". Elements of $M^d$ can be plugged into polynomials in $d$ noncommuting variables, and subsets of $M^d$ are where most of the action in "noncommutative function theory" takes place. We leave that story to be told another day.

Similarly, we let $CM_n^d$ denote the set of all commuting $d$-tuples of $n \times n$ matrices. Note that we can consider $M_n^d$ to be the space $\mathbb{C}^{dn^2}$, and then $CM_n^d$ is an algebraic variety in $\mathbb{C}^{dn^2}$, given as the joint zero locus of $\binom{d}{2} n^2$ quadratic equations in $dn^2$ variables. We let $CM^d = \bigsqcup_{n=1}^{\infty} CM_n^d$. Now we are looking at all commuting $d$-tuples of matrices of all sizes. This can be considered as the "commutative noncommutative universe", or the "free commutative universe". Another way of thinking about $CM^d$ is as the "noncommutative variety" cut out in $M^d$ by the equations (in noncommuting variables)

$z_i z_j - z_j z_i = 0, \quad 1 \le i < j \le d.$

Points in $CM^d$ can simply be plugged into any polynomial $p \in \mathbb{C}[z]$. For example, if $d = 2$ and $p = 1 + z_1 + z_1 z_2$, then for $X = (X_1, X_2) \in CM^2$, we put

$p(X) = I + X_1 + X_1 X_2,$

where $I$ is the identity matrix of the same size as $X_1$ (that is, if $X \in CM_n^2$, then the correct identity to use is $I_n$). In fact, points in $CM^d$ can be naturally identified with finite dimensional representations of $\mathbb{C}[z]$, by

$X \longmapsto \big( p \mapsto p(X) \big).$

(We shall use the word "representation" to mean a homomorphism of an algebra or ring into $M_n = M_n(\mathbb{C})$ for some $n$.)
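Plugging a point of $CM^d$ into a polynomial is also easy to do numerically. The following numpy sketch (my own illustration, with an arbitrarily chosen commuting pair and an arbitrarily chosen polynomial) shows how the constant term becomes an identity matrix of the appropriate size:

```python
import numpy as np

# A pair of commuting 2x2 matrices: X2 is a polynomial in X1,
# so automatically X1 @ X2 == X2 @ X1.
X1 = np.array([[0.0, 1.0],
               [0.0, 0.0]])
X2 = 2.0 * X1 @ X1 + 3.0 * X1   # equals 3*X1 here, since X1 @ X1 = 0

# Evaluate p(z1, z2) = 1 + z1*z2 + z2 at the point (X1, X2) in CM^2:
# the constant term 1 becomes the 2x2 identity matrix.
I = np.eye(2)
pX = I + X1 @ X2 + X2

print(pX)
```

The map $p \mapsto p(X)$ computed this way is precisely the finite dimensional representation of $\mathbb{C}[z_1, z_2]$ attached to the point $(X_1, X_2)$.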

Now, given an ideal $I \subseteq \mathbb{C}[z]$, we can consider its zero set in $CM^d$:

$Z(I) = \{X \in CM^d : p(X) = 0 \text{ for all } p \in I\}.$

(We will omit the subscript for brevity.) In the other direction, given a subset $S \subseteq CM^d$, we can define the ideal of functions that vanish on it:

$I(S) = \{p \in \mathbb{C}[z] : p(X) = 0 \text{ for all } X \in S\}.$

Tautologically, for every ideal $I \subseteq \mathbb{C}[z]$,

$I \subseteq I(Z(I)),$

because every polynomial in $I$ annihilates every tuple on which every polynomial in $I$ is zero, right? The beautiful (and maybe surprising) fact is the converse inclusion.

We are now ready to state the free commutative Nullstellensatz. The following formulation is taken from Corollary 11.7 from the paper “Algebras of bounded noncommutative analytic functions on subvarieties of the noncommutative unit ball” by Guy Salomon, Eli Shamovich and myself (which I already advertised in an earlier blog post).

**Theorem (free commutative Nullstellensatz):** *For every ideal $I \subseteq \mathbb{C}[z_1, \ldots, z_d]$,*

$I(Z(I)) = I.$

**Proof:** This proof should be accessible to someone who took a graduate course in commutative algebra (but not too long ago!). We shall split it into several steps, including some review of required material. Someone who is fluent in commutative algebra will be able to understand the proof by just reading the headlines of the steps, without going into the explanations. Recall that we are using the notation $\mathbb{C}[z] = \mathbb{C}[z_1, \ldots, z_d]$.

**Step I: Changing slightly the point of view.** What we shall prove is the following proposition:

**Proposition:** *Let $q \in \mathbb{C}[z]$, and suppose that for every unital finite dimensional representation $\pi$ of $\mathbb{C}[z]/I$,*

*$\pi(q + I) = 0$.*

*Then $q \in I$.*

Noting that

- Representations of $\mathbb{C}[z]/I$ are precisely the representations of $\mathbb{C}[z]$ that annihilate $I$, and
- Representations of $\mathbb{C}[z]$ are precisely point evaluations at points $X \in CM^d$, thus
- Representations of $\mathbb{C}[z]/I$ are precisely evaluations at points in $Z(I)$,

we see that if we prove the proposition, we obtain that it means precisely that if $q \in I(Z(I))$ then $q \in I$, which is the direction of the Nullstellensatz that we need to prove.

Thus our goal is to prove the proposition.

**Step II:** **A refresher on localization. **

We shall require the notion of the localization of a ring. Let $R$ be a commutative ring with unit (any ring we shall consider henceforth will be commutative and with unit) and let $M$ be a maximal ideal in $R$. Define $S = R \setminus M$ (the **complement** – not quotient – of $M$ in $R$). The *localization of $R$ at $M$* is a ring that is denoted $R_M$ (or $S^{-1}R$) that contains "a copy of $R$" and in which, loosely speaking, all elements of $S$ are invertible. Thus, still loosely speaking, the localization is the ring formed from all fractions $\frac{r}{s}$ where $r \in R$ and $s \in S$.

More precisely, $R_M$ is the quotient of the set $R \times S$ by the equivalence relation

$(r, s) \sim (r', s') \quad \text{if and only if} \quad u(rs' - r's) = 0 \ \text{ for some } u \in S.$

Sometimes the pair $(r, s)$ is written as $\frac{r}{s}$, and then addition and multiplication are defined so as to agree with the usual formulas for fractions, that is,

$\frac{r}{s} + \frac{r'}{s'} = \frac{rs' + r's}{ss'},$

and

$\frac{r}{s} \cdot \frac{r'}{s'} = \frac{rr'}{ss'}.$

We define a map $\iota \colon R \to R_M$ by $\iota(r) = \frac{r}{1}$. Clearly, $\frac{1}{1}$ is the unit of $R_M$, and $R_M$ is again a commutative ring with unit.
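The construction above can be played with concretely. Here is a toy Python model (my own sketch) of the localization of $\mathbb{Z}$ at the maximal ideal $M = (3)$; since $\mathbb{Z}$ is an integral domain, the equivalence relation simplifies to $(r, s) \sim (r', s')$ iff $rs' = r's$:

```python
# Elements of the localization are pairs (r, s) with s in S = Z \ (3),
# i.e. 3 does not divide s. In an integral domain the auxiliary factor u
# in the equivalence relation is unnecessary.

def ok(s):
    # s must lie in S, i.e. s is not divisible by 3
    return s % 3 != 0

def eq(a, b):
    (r, s), (t, u) = a, b
    assert ok(s) and ok(u)
    return r * u == t * s          # (r, s) ~ (t, u)

def add(a, b):
    (r, s), (t, u) = a, b
    return (r * u + t * s, s * u)  # usual formula for adding fractions

def mul(a, b):
    (r, s), (t, u) = a, b
    return (r * t, s * u)          # usual formula for multiplying fractions

# 2/4 and 1/2 represent the same element of the localization
print(eq((2, 4), (1, 2)))                  # True
# 1/2 + 1/5 = 7/10
print(eq(add((1, 2), (1, 5)), (7, 10)))    # True
# 5 is invertible in the localization: (1/5) * (5/1) ~ 1/1
print(eq(mul((1, 5), (5, 1)), (1, 1)))     # True
```

Note that elements of $M$ itself, such as $6$, are not allowed as denominators, which is exactly what makes the localization a local ring with maximal ideal generated by (the image of) $3$.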

We shall require the following two facts, which can be taken as exercises:

**Fact I:** The localization of $R$ at a maximal ideal $M$ is a *local ring*, that is, it is a ring with a unique maximal ideal.

**Fact II:** If $r \in R$ is such that $\iota(r) = 0$ in $R_M$ for every maximal ideal $M$ in $R$, then $r = 0$.

As we briefly mentioned in Fact I above, we remind ourselves that a *local ring* is a ring that has a unique maximal ideal. A commutative ring is said to be *Noetherian* if every ideal in it is finitely generated.

We shall also require the following theorem, which is not really an exercise. If $J$ is an ideal in a ring $R$, we write $J^n$ for the ideal generated by all elements of the form $a_1 a_2 \cdots a_n$, where $a_i \in J$ for all $i$.

**Krull's intersection theorem:** *Let $R$ be a commutative Noetherian local ring with identity. If $M$ is the maximal ideal in $R$, then*

*$\bigcap_{n=1}^{\infty} M^n = \{0\}.$*

Take it on faith for now (or see Wikipedia).

**Step III: A lemma on local algebras. **

Recall that a ring $R$ is said to be a $\mathbb{C}$-*algebra* if it is also a vector space over $\mathbb{C}$, with a multiplication that is $\mathbb{C}$-bilinear (for example, $\mathbb{C}[z]$ and its quotients are $\mathbb{C}$-algebras).

**Lemma:** *Let $R$ be a Noetherian local $\mathbb{C}$-algebra with maximal ideal $M$, and fix $r \in R$. Suppose that for every finite dimensional representation $\pi$ of $R$,*

*$\pi(r) = 0$.*

*Then $r = 0$.*

**Proof:** First, note that $r \in M$, because the quotient $R/M$ is isomorphic to $\mathbb{C}$, so the quotient map $R \to R/M$ is a one dimensional representation, and $r$ must be mapped to zero under this map. Since $R$ is Noetherian, $M$ is finitely generated, as is also every power $M^n$. It follows by induction that for every $n$, the algebra $R/M^n$ is a finite dimensional vector space. Hence the quotient map $R \to R/M^n$ can also be considered as a finite dimensional representation, so it annihilates $r$. Thus $r \in M^n$ for all $n$. By Krull's intersection theorem, $r = 0$.

**Step IV and conclusion: proof of the proposition. **

We now prove the above proposition, which, as explained in Step I, proves the free commutative Nullstellensatz. Let $I$ be an ideal in $\mathbb{C}[z]$, and let $q \in \mathbb{C}[z]$ be an element such that $\pi(q + I) = 0$ for every finite dimensional representation $\pi$ of $R = \mathbb{C}[z]/I$. We wish to prove that $q \in I$, or equivalently, that $\dot{q} := q + I$ is zero in $R$. By Fact II above, it suffices to show that $\iota(\dot{q}) = 0$ in $R_M$ for every maximal ideal $M$ in $R$.

Now let $M$ be any maximal ideal in $R$. By the lemma of Step III (which is applicable, thanks to Fact I), $\iota(\dot{q}) = 0$ if and only if the image of $\dot{q}$ under every finite dimensional representation of $R_M$ is zero. But every finite dimensional representation of $R_M$ gives rise to a finite dimensional representation of $R$, which, by assumption, annihilates $\dot{q}$. It follows that $\iota(\dot{q}) = 0$ for every maximal ideal $M$ in $R$, whence (Fact II) $\dot{q} = 0$, and $q \in I$ as required. That concludes the proof.

**Remark:** The proof presented here is from my paper with Guy and Eli. I mentioned above that the theorem follows from the results in a paper of Eisenbud and Hochster. Our proof is simpler than theirs, but they prove more: our result says that $q \in I$ if $q(X) = 0$ for every $d$-tuple $X$ of commuting $n \times n$ matrices that annihilates $I$, where in principle one might have to consider all $n$. Eisenbud and Hochster's result implies that there exists some $N$ (depending on $I$, of course) such that if $q(X) = 0$ for all such $X$ of size less than or equal to $N$, then $q \in I$. (If you are asking yourself why we are proving in our paper a weaker result than one that already appears in the literature, let me say that this theorem is a rather peripheral result in our paper, and serves a motivational and contextual purpose, rather than supporting the main line of investigation.)

We now treat the Theorem (the free commutative Nullstellensatz) in the case of one variable. This really should be understood by everyone. The short explanation is that matrix zeros of a polynomial determine not only the location of the zeros but also their multiplicity.

Let $p \in \mathbb{C}[z]$, and let $q \in I(Z((p)))$. So we know that $q(X) = 0$ for every square matrix $X$ that annihilates $p$ (that is, every $X$ such that $p(X) = 0$). Our goal is to understand why this is equivalent to $q$ belonging to the ideal generated by $p$. One direction is immediate: if $q = fp$ and $p(X) = 0$, then $q(X) = f(X) p(X) = 0$.

In the other direction, we need to show that if $q \in I(Z((p)))$, then $(z - \lambda_i)^{n_i}$ is a factor of $q$ for all $i$ (with $p$ as in (*)). Everything boils down to understanding how polynomials operate on Jordan blocks. Consider an $m \times m$ Jordan block

$J_m(\lambda) = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix},$

and consider a polynomial $q$. Then one checks readily:

- $q(J_m(\lambda))$ is invertible if and only if $q(\lambda) \neq 0$.
- $q(J_m(\lambda)) = 0$ if and only if $q(\lambda) = q'(\lambda) = \cdots = q^{(m-1)}(\lambda) = 0$, that is, if and only if $(z - \lambda)^m$ divides $q$.

It follows (assuming that $p$ has the form (*)) that $p(J_m(\lambda)) = 0$ if and only if $\lambda = \lambda_i$ for some $i$, and $m \le n_i$. Since every matrix has a unique canonical Jordan form (up to a permutation of the blocks), we can understand precisely what matrices belong to $Z((p))$: these are the matrices whose Jordan blocks have eigenvalues in the set $\{\lambda_1, \ldots, \lambda_k\}$, the blocks with eigenvalue $\lambda_i$ being of size no bigger than $n_i$.

So, if $q \in I(Z((p)))$, then $q(J) = 0$ for every Jordan block $J$ for which $p(J) = 0$ (and vice versa). So letting $J = J_{n_i}(\lambda_i)$, we see that $(z - \lambda_i)^{n_i}$ must be a factor of $q$, that is, $q$ has the form $q = (z - \lambda_i)^{n_i} g$ for some polynomial $g$. Since this holds for all $i = 1, \ldots, k$, we have that $q \in (p)$.
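The fact that matrix zeros detect multiplicities, which drives the whole argument, is easy to see numerically. Here is a small numpy sketch (my own illustration, with the arbitrarily chosen $p = z^2$ and $q = z$):

```python
import numpy as np

def poly_at(coeffs, X):
    # Evaluate p(z) = coeffs[0] + coeffs[1]*z + ... at a square matrix X.
    n = X.shape[0]
    out = np.zeros_like(X)
    P = np.eye(n)           # current power of X, starting with X**0 = I
    for c in coeffs:
        out = out + c * P
        P = P @ X
    return out

# p(z) = z**2 has a double zero at 0; the 2x2 nilpotent Jordan block J
# annihilates p. The polynomial q(z) = z has the same zero SET {0},
# but q(J) != 0: the matrix zero "sees" the multiplicity.
J = np.array([[0.0, 1.0],
              [0.0, 0.0]])

pJ = poly_at([0, 0, 1], J)   # p = z**2
qJ = poly_at([0, 1], J)      # q = z

print(np.allclose(pJ, 0))    # True:  p(J) = 0
print(np.allclose(qJ, 0))    # False: q(J) = J != 0
```

So the scalar point $0$ cannot distinguish $(z)$ from $(z^2)$, but the Jordan block $J_2(0)$ can, which is exactly why $I(Z((p))) = (p)$ on the nose.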

**Remark:** Note that the proof also shows that to conclude that $q \in (p)$, one needs to know only that $q(X) = 0$ for all $X \in Z((p))$ of size less than or equal to $N = \max_i n_i$.

The beautiful theorem we proved raises two important questions:

- Why is it interesting (besides the plain reason that it is **evidently interesting**)? What questions does this kind of theorem help to answer?
- What does the set $CM^d$ of commuting tuples of matrices look like? In order for the above theorem to be "useful" we will need to understand this set well.

I hope to write two posts addressing these issues soon.

**Added April 23: **

**Remark:** I should also mention the following very well known observation, which also explains how evaluation on Jordan blocks can identify the zeros of a polynomial including their multiplicity. If $J_m(\lambda)$ is a Jordan block:

$J_m(\lambda) = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix},$

and $f$ is an analytic function, then

$f(J_m(\lambda)) = \begin{pmatrix} f(\lambda) & f'(\lambda) & \frac{f''(\lambda)}{2!} & \cdots & \frac{f^{(m-1)}(\lambda)}{(m-1)!} \\ & f(\lambda) & f'(\lambda) & \ddots & \vdots \\ & & \ddots & \ddots & \frac{f''(\lambda)}{2!} \\ & & & f(\lambda) & f'(\lambda) \\ & & & & f(\lambda) \end{pmatrix}.$

This gives another point of view of the free Nullstellensatz in one variable.
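The formula above is easy to verify symbolically; here is a sympy check (my own illustration) for a $3 \times 3$ block and the polynomial $f(z) = z^4$:

```python
import sympy as sp

lam = sp.symbols("lambda")

# 3x3 Jordan block with eigenvalue lambda
J = sp.Matrix([[lam, 1, 0],
               [0, lam, 1],
               [0, 0, lam]])

# For analytic f, f(J) should be upper triangular with f^{(k)}(lam)/k!
# on the k-th superdiagonal. Check this for the polynomial f(z) = z**4.
f = lambda t: t**4

expected = sp.Matrix([
    [f(lam), sp.diff(f(lam), lam), sp.diff(f(lam), lam, 2) / 2],
    [0,      f(lam),               sp.diff(f(lam), lam)],
    [0,      0,                    f(lam)],
])

print(sp.simplify(J**4 - expected))  # the zero matrix
```

In particular $f(J_m(\lambda)) = 0$ forces $f$ and its first $m - 1$ derivatives to vanish at $\lambda$, which is the multiplicity statement once again.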

Click here to download the journal version of the paper

Of course, if you don’t click by May 26 – don’t panic! We always put our papers on the arXiv, and here is the link to that. Here is the abstract:

**Abstract.** For every convex body $K \subseteq \mathbb{R}^d$, there is a minimal matrix convex set $W^{\min}(K)$, and a maximal matrix convex set $W^{\max}(K)$, which have $K$ as their ground level. We aim to find the optimal constant $\theta(K)$ such that $W^{\max}(K) \subseteq \theta(K) \cdot W^{\min}(K)$. For example, if $K$ is the unit ball in $\mathbb{R}^d$ with the $\ell^p$-norm, then we find that

$\theta(K) = d^{1 - |1/p - 1/2|}.$

This constant is sharp, and it is new for all $p \neq 2$. Moreover, for some sets $K$ we find a minimal set $L$ for which $W^{\max}(K) \subseteq W^{\min}(L)$. In particular, we obtain that a convex body $K$ satisfies $W^{\max}(K) = W^{\min}(K)$ only if $K$ is a simplex.

These problems relate to dilation theory, convex geometry, operator systems, and completely positive maps. For example, our results show that every $d$-tuple of self-adjoint contractions can be dilated to a commuting family of self-adjoints, each of norm at most $\sqrt{d}$. We also introduce new explicit constructions of these (and other) dilations.

The first time that I met him was in the summer of 2009, in a workshop on multivariable operator theory at the Fields Institute in Toronto. I walked up to him and asked him what he thought of some proposed proof of the invariant subspace problem (let’s say that I don’t remember exactly which one), and he didn’t even want to hear about it! At the time I was still rather fresh and didn’t understand why (I later learned that he has had his fair share of checking failed attempts). After this first encounter I thought for some time that he was a scary person, only to discover slowly through the years that he was actually a very very generous, gentle, and kind person. And he was very sharp, *that *was really scary.

The last time that I met him was the spring of 2014, when we were riding a train from Oberwolfach to Frankfurt. I think we were Douglas, Ken Davidson, Brett Wick and I. Davidson, Wick and I were going to Saarbrucken, and Douglas was supposed to be on another train, but he joined us because by that time he was half blind and thought that it was better to travel at least part of the way with friends. (The conductor found him out, but decided to let the old man be.) At some point we had to switch trains, and as we left him I worried about how we could leave a half blind man to travel alone (he made it home safely). On the train he talked about the corona theorem or something, and I was sitting on the edge of my seat trying to keep up. I don't remember what he said about the corona theorem, but I remember clearly that he told me that I shouldn't have nausea because it is only psychological (you see, even very smart people occasionally say silly things). He also talked about blackjack. That was the last time I saw him.

When I was a postdoc I became obsessed with the Arveson-Douglas conjecture, and I worked on this conjecture on and off for several years (see here, here, here and here for earlier posts of mine mentioning this conjecture). That’s one way I got to know some of Douglas’s later works. Douglas motivated many people to work on this problem, and was also responsible for some of the most recent breakthroughs. Just last semester, in our Operator Theory and Operator Algebras Seminar at the Technion, I gave a couple of lectures on two of his very last papers on this topic, which were written together with his PhD student Yi Wang: “Geometric Arveson-Douglas Conjecture and Holomorphic Extension” and “Geometric Arveson-Douglas Conjecture – Decomposition of Varieties“. These are very difficult papers, written with a rare combination of technical ability and vision.

By the way, I have heard wonderful things about Douglas as a mentor and PhD supervisor. In July 2013 I attended a conference in Shanghai in honour of Douglas’s 75th birthday. At the banquet many of his students and collaborators got up to say some words of thanks and to tell about nice memories. After several have already spoken, the master of ceremony walked up to me with his wireless microphone and announced: “and now, to close this evening, the *last student, Piotr Nowak*!” Perhaps this is a good place to point out that I was not Douglas’s student, nor is my name Piotr Nowak (I think Piotr Nowak also was not a student, but he was a postdoc or at least spent some time at Texas A&M). I took the mic in my hand, but didn’t have the guts to play along, and handed it over to Piotr.

(I wrote above that I was not a student of Douglas, but in some sense I am his mathematical step-grandchild. Douglas's first PhD student was Paul Muhly, who is mathematically married to Baruch Solel, my PhD supervisor, and hence Muhly is my mathematical step-father.)

Another completely different work of his that I had the pleasure of studying is his beautiful little textbook “Banach Algebra Techniques in Operator Theory“, which I read cover-to-cover with no particular purpose in mind, just for the joy of it.

I think that perhaps Douglas’s greatest contribution to mathematics is the Brown-Douglas-Fillmore (BDF) theory. The magic ingredient of using sophisticated algebraic and topological intuition and machinery appears in much of Douglas’s work, but in BDF it had wonderful consequences as well as incredible impact. If one wants to get an idea of what this theory is about (and what kind of problems in operator theory motivated it), perhaps the best person to explain is Douglas himself. To this end, I recommend reading the introduction to Douglas’s small book on the subject, “C*-Algebra Extensions and K-Homology” (Annals of Mathematics Studies Number 95).

[**Update, March 17th:** I later checked my records and realised that the way I remembered things is not the way they were! I am leaving the memory as I wrote it, but for the record, that train ride was * not *the last time that I saw Douglas, I suppose that it was simply the most memorable and symbolic goodbye. The last time I met Douglas was in Banff, in 2015. (In my memory, I mixed Oberwolfach 2014 with Banff 2015). If I am not mistaken, he was there with his wife Bunny, and we did not interact much. I met him four other times: in June 2010 at the University of Waterloo, when he received a honorary doctorate, later that summer in Banff, at IWOTA 2012 which took place at Sydney, and at IWOTA 2014, in Amsterdam (which was also after our goodbye on the train). ]


Another question that continues to puzzle me (and to which I still don't have a complete answer) is: *why do I continue to inflict upon myself the tortures of international travel, such as ten hour jet lag or trans-atlantic flights?* More generally, I spend a lot of time wondering: *why do I continue going to conferences? Is it worth it for me? Is it worth the university's money? Is it worth it for mankind?*

Last week I attended the Joint Mathematics Meeting in San Diego. It was my first time at such a big conference. I will probably not return to such a conference for a while, since it is not so "cost effective". I guess that I am a small workshop kind of person.

I spoke in and attended all the talks in the Free Convexity and Free Analysis special session, which was excellent. Here is the abstract, and here are the slides of my talk. I also attended some of the talks in the special sessions on Advances in Operator Algebras, Operators on Function Spaces in One and Several Variables, and another one on Advances in Operator Theory, Operator Algebras, and Operator Semigroups. I also attended several plenary talks, which were all quite entertaining.

I am happy to report that the field of free analysis and free convexity is in really good shape! There was a sequence of talks on the first day (by Hartz, Passer, Evert and Kriel) by very young researchers on free convexity that really put me in high spirits! The field is blossoming and the competition is healthy and friendly. But the talk that got me most excited was the talk by Jim Agler, who gave a preliminary report on joint work with John McCarthy and Nicholas Young regarding noncommutative complex manifolds. Now, at first it might seem that nc manifolds will be hard to make sense of, because how can you take direct sums of points in a manifold, etc. Moreover, the only take on free manifolds that I had met before was Voiculescu’s construction of the free projective plane, which I found hard to swallow and which kind of ruined my appetite for the subject.

However, it turns out that one can define a noncommutative complex manifold as a topological space $M$ that carries an atlas of charts $(U_\alpha, \varphi_\alpha)$, where $U_\alpha$ is an open subset of $M$ and $\varphi_\alpha$ is a homeomorphism from an nc domain $\Omega_\alpha$ onto $U_\alpha$, such that given two intersecting charts $(U_\alpha, \varphi_\alpha)$ and $(U_\beta, \varphi_\beta)$, the map $\varphi_\beta^{-1} \circ \varphi_\alpha$, going from $\varphi_\alpha^{-1}(U_\alpha \cap U_\beta)$ to $\varphi_\beta^{-1}(U_\alpha \cap U_\beta)$, is an nc biholomorphism. **This definition is so natural and clear that I want to shout!** Agler went on and showed us how one can construct a noncommutative Riemann surface, for example the Riemann surface corresponding to the noncommutative square root function. How can one **not** want to hear more of this? I am looking forward very enthusiastically to seeing what Agler, McCarthy and Young are up to this time; it looks like a very promising direction to study.

Among the plenary talks that I attended (see here for descriptions), the one given by Avi Wigderson struck me the most. I went to the talk simply for mathematical entertainment (a.k.a. to broaden my horizons), but I was very pleasantly surprised to find completely positive maps and free functions in a talk that was supposed to be about computational complexity. I went to the first two talks but missed the third one, because I had an opportunity to have lunch with a friend and collaborator, which in any case was more important to me than the lecture. The above link (here it is again) contains links to a tutorial and papers related to Wigderson’s talks, and I hope to find time to study them, and at least catch up on what I missed in the third talk.

One more thing: there was one quite eminent operator theorist, long retired, who came to several of the sessions that I attended. At some point I noticed that after every talk he came up to the speaker and said several words of encouragement or advice. Seeing such a pure expression of kindness and love of humanity was touching and inspiring. Upon later reflection, I noticed that such expressions were happening around me all the time, for example when another “celebrity” in our field arrived and a hugging (!) session began. This memory brings a smile to my face. Well, maybe going to San Diego was worth it, after all.

**Additional thoughts, January 26:**

- The tutorial that you can find in “the above link” seems to cover all of Wigderson’s talk.
- I have had some more thoughts on “big conferences”. The good thing about them is that they give one an opportunity to interact with people outside one’s own academic bubble, and to attend high-level talks by prominent mathematicians. The bad thing is that you fly far away, spend tons of grant money, and in the end have only a little time to discuss your research topic with experts. So: to go or not to go? I’ve found a solution! Attend **local** big conferences. Fly across the world only to meet with special colleagues or to participate in focused and effective workshops or conferences on your subject of main interest. (And if they invite you to give a plenary talk at the ICM, then, OK, you should probably go.)

Here are a few of the links that I read and on which I base this post: an obituary by John Baez, with some links, including one to Voevodsky’s own account of the origins of “univalent foundations”, and also this obituary on the IAS site.

Here I want to write about several aspects of Voevodsky’s story that struck me. Note, it is written from the point of view of a mathematician who has not studied his work at all. I surely am not qualified to give an account of his development of motivic cohomology and his solution of Milnor’s conjecture, achievements for which he won the Fields Medal, nor of the development of *Univalent Foundations* or *Homotopy Type Theory* (though I am certainly determined to read the first chapters of the book on homotopy type theory *whenever I find the time*). What really drew my attention in what I read about Voevodsky is the human story of a mathematician and his struggle. It is a story that can be understood by “human-level-IQ mathematicians” – in fact by any person – and it raises some disturbing and disheartening issues. Beyond the human story, there is the story of mathematics – our fractal and fragile profession, which at times seems to be standing on firm ground, and at times seems to be hanging in thin air.

Here are some key parts of the story, brutally retold. (The quoted texts below are taken from this account by Voevodsky. The personal information is from the Wikipedia page or the obituaries linked above.)

**The existential nightmare.** Voevodsky apparently did not finish his undergraduate studies at Moscow State University (wiki says that he “flunked”!). However, as a first-year undergrad he started reading a manuscript of Grothendieck’s, and since then tried to develop his own mathematical ideas. He met Michael Kapranov, and together they published a paper, “∞-Groupoids as a Model for a Homotopy Category”, where they “claimed to provide a rigorous mathematical formulation and a proof of Grothendieck’s idea…”. Based on this exceptional achievement (presumably), Kapranov arranged for Voevodsky to be accepted to Harvard graduate school (Voevodsky did not apply, and didn’t even know that this was being arranged!), where he worked under the supervision of David Kazhdan. He continued to do outstanding work, and went on to solve famous conjectures, get appointed to the Institute for Advanced Study, and win the Fields Medal.

What a romantic story! But Voevodsky tells us what happened later:

In October 1998, Carlos Simpson submitted to the arXiv preprint server a paper called “Homotopy Types of Strict 3-groupoids.” It claimed to provide an argument that implied that the main result of the “∞-groupoids” paper, which Kapranov and I had published in 1989, cannot be true. However, Kapranov and I had considered a similar critique ourselves and had convinced each other that it did not apply. I was sure that we were right until the fall of 2013 (!!).

Voevodsky is telling us that his first paper, which boosted his stellar career, turned out to be flawed – the main result was not true! Moreover, he was not able (maybe it was an emotional block, maybe too much work) to settle the issue of who was right for 15 years!! The horror of this situation is unbearable. Or maybe it is not so horrifying – maybe at times he did not care any more, not enough to resolve it?

And another question comes to mind: what if he had found his mistake when he was writing the paper? What if he could not fix it (it was not fixable), and gave up on mathematics? So, should we, should he, be happy that he made this mistake? He also says that Kapranov and he considered this critique, but convinced themselves that it did not apply. Well, what if they had still had doubts? Would ignoring those doubts have been the right thing to do? Would scrapping the paper have been the right thing to do? But then maybe there would never have been an arrangement for Voevodsky to study at Harvard, and maybe he would not have continued his mathematical pursuits.

Does it make any difference if a paper on ∞-groupoids is correct or not? If a result is proven in a paper, and nobody ever finds the mistake, is it as good as true? If a person got a job, or tenure, on the basis of a wrong paper – should he be dismissed? If you write a paper and find a big mistake, should you withhold the information until the situation gets clearer? After all, it’s not your fault that you were even more diligent than Voevodsky and found *your own* mistake, is it?

**The referee’s concerns.** But these are not the only mistakes coming up in this story. Voevodsky tells us:

The field of motivic cohomology was considered at that time to be highly speculative and lacking firm foundation. The groundbreaking 1986 paper “Algebraic Cycles and Higher K-theory” by Spencer Bloch was soon after publication found by Andrei Suslin to contain a mistake in the proof of Lemma 1.1. The proof could not be fixed, and almost all of the claims of the paper were left unsubstantiated.

A new proof, which replaced one paragraph from the original paper by thirty pages of complex arguments, was not made public until 1993, and it took many more years for it to be accepted as correct. Interestingly, this new proof was based on an older result of Mark Spivakovsky, who, at about the same time, announced a proof of the resolution of singularities conjecture. Spivakovsky’s proof of resolution of singularities was believed to be correct for several years before being found to contain a mistake. The conjecture remains open.

The approach to motivic cohomology that I developed with Andrei Suslin and Eric Friedlander circumvented Bloch’s lemma by relying instead on my paper “Cohomological Theory of Presheaves with Transfers,” which was written when I was a Member at the Institute in 1992–93. In 1999–2000, again at the IAS, I was giving a series of lectures, and Pierre Deligne (Professor in the School of Mathematics) was taking notes and checking every step of my arguments. Only then did I discover that the proof of a key lemma in my paper contained a mistake and that the lemma, as stated, could not be salvaged. Fortunately, I was able to prove a weaker and more complicated lemma, which turned out to be sufficient for all applications. A corrected sequence of arguments was published in 2006.

What’s going on? So many flawed papers. Makes one wonder *who were the charlatans who refereed these papers and accepted them for publication*. Of course, I am kidding. It really makes one wonder: *am I, as a referee, accepting flawed paper after flawed paper?* Doesn’t it happen to all of us that we review a paper, it is a hard and technical paper, and then there is this lemma, which we can *convince* ourselves is true, but is it really true? It would be really hard to get to the bottom of this, and the other parts of the paper seem fine, and it is Voevodsky, mind you, who is the author… I don’t really have time to check each and every lemma in this paper! It’s not my job! Can we let just this lemma pass? In fact, maybe we should; we do not want to block the next Voevodsky, do we?

**The working mathematician’s toil.** If there are these truly important papers out there, by the leaders of our field, that are flawed, some of them even dead wrong, then what is the meaning of all this? Maybe there are more wrong papers, and nobody ever noticed? Does it even matter? Should I quit my job and become a carpenter, and build real things? Voevodsky says:

But to do the work at the level of rigor and precision I felt was necessary would take an enormous amount of effort and would produce a text that would be very hard to read. And who would ensure that I did not forget something and did not make a mistake, if even the mistakes in much more simple arguments take years to uncover?

To me, the most inspiring part of Voevodsky’s story is the way he chose to handle the crisis that he saw mathematics to be in. First of all, he honestly admitted that there is a problem, and he decided to confront it.

And it soon became clear that the only long-term solution was somehow to make it possible for me to use computers to verify my abstract, logical, and mathematical constructions.

But his pursuit of truth went much deeper than the superficial slogan “use computers”:

The primary challenge that needed to be addressed was that the foundations of mathematics were unprepared for the requirements of the task. Formulating mathematical reasoning in a language precise enough for a computer to follow meant using a foundational system of mathematics not as a standard of consistency to establish a few fundamental theorems, but as a tool that can be employed in everyday mathematical work.

And so he undertook the herculean task of developing new foundations for mathematics!! (Of course, not alone.) Could this enormous pressure, coming from within, from his own intellectual honesty, be what drove him to a breakdown? Probably, but not much information is given in the obituaries, since this stuff is very personal. And is it possible that this pressure is what led to his death at the very young age of 51?

**The graduate student’s ordeal.**

So let us return to the paper “Cohomological Theory of Presheaves with Transfers”, a very important paper which was certainly studied in many seminars worldwide. And imagine the budding graduate student, who doesn’t understand this lemma.

“Can you explain this?” he asks, and everybody volunteers to explain: “Think of it this way…” says the veteran grad student, “it’s like bla bla blab la,” and waves his hands. “Well, I see why it’s *morally* right,” says the budding one, “but I don’t understand the proof…”. Some others try to help, while the graduate student starts to regret having asked. The postdoc moves uneasily in her chair. “What a waste of time!” she thinks, to be explaining lemmas to graduate students, and encourages: “I also had problems understanding that one; it’s one of those things that you have to work out on your own.” The supervisor recalls vaguely that he too had to work to understand that lemma (in fact, it was when he refereed the paper!) and that one could fix it somehow…. “Um, technically I am not sure that this is precise, but we don’t really need the full power of the lemma; um, one can fix it,” he says, “now how did that go?” Everybody waits. “You know what, I’ll have to check my notes; why don’t we assume the lemma for now and proceed.” And the budding graduate student is left with the feeling that everyone here is a clown except himself, or alternatively that everyone here is a genius except himself, and that maybe it isn’t that difficult, and perhaps all this is not for him…

**The tired mathematician’s worry.**

So, in the end, Voevodsky and many other mathematicians set off to develop new foundations for mathematics, which, among other things, might make it easier to use computers to check proofs. Is this a good development? Is it necessary? Will it really help?

(If they can check our proofs, maybe the computers can do research on their own? Maybe they can also read one another’s papers. Imagine a world where all research mathematicians are actually computers: how different would that world be?)

Do I have to invest myself in learning these new foundations? Should I wait? Maybe my field requires different foundations and a different computer system to check it – is it a good idea to pursue these ideas?

Maybe all that “univalent nonsense” is important only if you want to work on Grothendieck-style shenanigans. If you do honest mathematics that actually relates to reality, you’re probably on safe ground and have nothing to worry about. Should I be worried?

*** * * * ***

To be honest, I am not very worried. I am split between two opposite opinions. On the one hand, I am somewhat angry at, and disappointed in, Mathematical Culture for **not** putting enough emphasis on correctness and understanding. It is clear that different people have very different notions of “understanding” and “knowing”. On the other hand, I think that mistakes are part of life, and also part of science, and therefore can and should be permitted to be part of mathematics. These things happen, and by a process of mutation and selection, we hopefully evolve. (In passing, a request: please post corrigenda to your papers/books/etc.; perhaps math will move on without the corrigendum, but at least you can help that budding graduate student survive grad school in one piece.)

And although I am very curious about univalent foundations, I cannot learn it in any deep way without stopping everything I am doing, and this won’t happen (one of the reasons it won’t happen is that I am very skeptical). The details of Voevodsky’s mathematics, I feel, are not the important part of the story. The heart of the story is the determination to follow truth according to one’s standards and convictions, which is relevant far beyond mathematics, and which everyone can follow within their limitations. And maybe in this story there is also a warning, or a calling: those who come too close to the light might burn.

Several months ago I informed both MathSciNet and Zentralblatt that I would like to stop reviewing papers for them. If you don’t know what I am talking about (your PhD thesis advisor should be fired!): MathSciNet and Zentralblatt are databases that index published papers in mathematics, contain some bibliographic information (such as a reference list for every paper, as well as a list of papers that reference it), and, significantly, have a review for every indexed paper. The reviews are written by mathematicians who do so voluntarily (they get AMS points or something). If the editors find nobody willing to review, then the abstract appears instead of a review. This used to be a very valuable tool, and it is still quite valuable.

I quit because:

- I don’t have time for voluntary, unpaid work. This doesn’t mean that I don’t do any voluntary work – but since my time is limited, I have to be very picky about which voluntary work I do.
- This service is very useful for old papers that are hard to get hold of, or that are written in a language other than English. It used to be a very good way to stay up-to-date with work in the field. Today, the standard is that almost all papers are written in English and are available freely online. The actual added value of having this external review available is significantly lower than it used to be.
- I think that we, as a community, are not doing a good enough job of refereeing papers (I feel this as a referee, an author, and now also as an editor). I think that if we have some time that we are going to spend volunteering to review papers, we shouldn’t split it between refereeing and reviewing for databases. We should concentrate on refereeing, which is a crucial part of the mathematical ecosystem, and not waste it on reviews, which are in large part redundant.
- Reviewing papers has advantages for the reviewer, too: it can discipline and focus the reviewer, helping one stay up-to-date and work through current papers. However, in the current system, papers are reviewed **after they have appeared in print (or online)**. This is ridiculously late. I do like to review papers sometimes, but the appropriate time to do so is after they appear as preprints on the arXiv. Then I can use my blog to post these reviews. Yes, this is not a standard platform, but nothing is perfect.


This post is a reply to (part of) a post by Scott Aaronson. I got kind of heated up by his unfair portrayal of the blog “Stop Timothy Gowers!!!“, and started writing a reply which got to be ridiculously long, so I moved it here.

Dear Scott,

I think that, as others remarked in the comments, you unfairly portray sowa’s blog. It is much more than just a rant against Gowers, and contains some “positive” contributions (agreed, the “positive” ones are mostly historical/philosophical/other, and not Gowers-style exposition – so what?). But even if it were true that that blog had just “negative” comments, I think it has a place. Here are some points to consider.

(Before the points: this is written in defense of sowa, and not in damnation of Gowers. I have never met either, I haven’t read their papers, I don’t agree with everything sowa said about Gowers, and I am willing to bet that Gowers is a very nice guy and a gentleman.)

**1) “Lack of exposition” I.** You wonder why sowa doesn’t, for once, take a break from discussing (say) the epochal greatness of Grothendieck, and “walk us through examples”. Well, he uses his blog to write about things he cares about. For serious mathematics he has other outlets. He wants to discuss the politics of mathematics, and he wants to oppose what he sees as the current trends and power structure. There are politics in mathematics, and there are power structures, fads, trends, celebrities, etc. These things affect the development of mathematics: where people go, where the money goes. These are totally legitimate issues to address.

**2) “Lack of exposition” II.** The kind of blogging that tries to teach some mathematics, exposing it in a simplified way that non-experts can understand, is very difficult to do. I try to do it on my own blog, and honestly, I sometimes wonder whether the piece I wrote has any value at all. It happens (to me, and maybe also to you) that by the time you reach the beef, you run out of breath, or out of time, or you realize that you cannot do this technical part any better than the original paper or book that you linked to. And as a reader, when reading expositions on certain blogs or in expository journals (or colloquium talks), I sometimes say to myself: the author really tried to walk me through this piece of mathematics/science or through their thought process, but unfortunately was unsuccessful in conveying any substantial information. So I can totally understand a blogger who feels that writing these friendly expository pieces is not useful, and spends no time on that.

**3) Symmetry.** You mention that there is an asymmetry between them: Gowers writes about math, and sowa writes about Gowers. Well, you are right, there really isn’t symmetry: Gowers is at the center, and sowa is peripheral. Gowers has power and influence, and sowa thinks that Gowers has too much. So it is ridiculous to point out that sowa is just complaining and not talking math, while Gowers isn’t wasting time complaining about politics. When it comes to the power structure in mathematics, Gowers doesn’t have much to complain about (although, being human, he does actually complain and rant on his blog, when the issues are not the ones where he happens to be on top).

I want to point out a fallacy you have made: you point to the asymmetry as an answer to a question you raise: “How could a neutral observer possibly decide who was right?” (You mean, if the neutral observer didn’t care to weigh the actual statements made?) Interesting question, but your answer seems all wrong to me. The person complaining might have a strong point – that’s why he is so upset! – and the person not complaining might simply be comfortable enough.

**4) The three cultures in this discussion.** Scott, you are an American, watching from the side an exchange between an (apparently) Eastern-European-raised mathematician and an English mathematician. To you, it might seem like the first is shouting, and the second is being the most polite and maybe even gallant person ever. These differences in culture can distract from the actual points made. So the best thing to do would be to concentrate on the points themselves, and not on the volume.

**5) The point of the matter I.** As pointed out by eminent mathematicians, there is a certain periodic movement in the mainstream culture of mathematics, between abstract and theoretical developments, on the one hand, and more concrete, problem-driven work, on the other. Very roughly speaking, sowa on his blog advocated a certain style of mathematics, or a certain way of doing mathematics, which he felt was the best one. His point of view on what is good mathematics can be summarized in one word: “Grothendieck”. He very often used Gowers as an example of bad trends in mathematics, giving arguments against points of view publicized by Gowers. But in the beginning that blog did not look like a crusade against Gowers, and had the pleasant name “Notes of an owl”. Sowa was just another force affecting the perpetual periodic motion in mathematical philosophy.

If I get the story right, sowa really blew his top when it became known that Gowers would be presenting the work of Abel Prize winner Pierre Deligne (and as his blog says, that’s when he changed the title of his blog to the current one). He stated his opinion that Gowers is unqualified to speak about Deligne’s work. Is it unacceptable to raise such a point? I don’t think so (though I am in no way competent to answer the question of what Gowers is capable of). He also made the point that it was the third time in a row that Gowers was chosen to present the life work of an Abel Prize winner. This is an even more valid point to make.

I know that everybody says that Gowers is a brilliant expositor. Well, I also saw a video of the lecture “The Importance of Mathematics” by Gowers, and it was, indeed, a wonderful talk. I recall thinking that it was one of the best lectures I have seen in my life (and for sure the best one that I saw on video). So I am convinced that he has the capability of expositoring exquisitely. But nobody is perfect, and no one is irreplaceable.

I stopped reading Gowers’s blog some time ago, but there was a time when I tried to read a lot of it. I know what people are talking about when they speak of his posts as an intellectually honest journey, where he takes you by the hand and leads you through his thought process; I know what they are talking about, but I interpret this “leads you through his thought process” as lazy writing. Reading some of his old posts, I got the notion that he hadn’t thought it all out before writing, and that his “delete” button is broken. Now, I don’t want to go and search for the old posts that I read and did not like (as Gowers once said: “I don’t have the information at the tip of my fingers”…). I am not out to prove that Gowers is a **bad** expositor, of course he isn’t; my point is just that different people might find different styles of expositoring appealing or useful. So the question of whether it is right to have the same person present the prize three times in a row seems to me to be spot on. And maybe, if you liked the style of the guy who did it the first time, then you wouldn’t have raised the question “is he the right person” when he was chosen for the third time in a row. But once the question is raised, you cannot ignore it just because it is kind of rude to ask.

**6) The point of the matter II.** Another harsh criticism by sowa of Gowers (too harsh, I think, but basically right) concerns publishing in mathematics. It is ironic that one of the good things that you (Scott) have to say about Gowers is that “He’s also been a leader in the fight to free academia from predatory publishers”. Google “predatory publishers”; I don’t think it means what you think it does. Indeed, he played a creditable role as a leader in the boycott of Elsevier (about which I had doubts, but I won’t go into that). But Gowers, in my opinion, abused his reputation and played a very dangerous role in actually **vindicating** predatory publishers when he helped to set up Gold Open Access journals (see also this). In his defense, he seems to be very thoughtful and careful about these matters, is aware of the dangers, and later also set up an arXiv overlay journal. Sowa has a lot to say on this matter, and here too, and I agree with some of the points he makes.

Recall that, by Sz.-Nagy’s dilation theorem, given a contraction $T$ acting on a Hilbert space $H$, one can always construct a unitary $U$ acting on a Hilbert space $K \supseteq H$, such that

(*) $T^n = P_H U^n \big|_H, \qquad n = 0, 1, 2, \ldots$

(Here $P_H$ denotes the orthogonal projection of $K$ onto $H$.) The operator $U$ is called a **unitary dilation** of $T$. This simple theorem is the starting point of a ton of developments in operator theory on Hilbert spaces.
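To make the compression identity $T^n = P_H U^n|_H$ concrete, here is a small numerical sketch. It is entirely my own illustration, not taken from any paper discussed here, and the names `defect` and `egervary_dilation` are made up: a genuine unitary dilation lives on an infinite-dimensional space, but Egerváry’s classical finite-dimensional construction produces a unitary on $H^{(N+1)}$ whose top-left corner reproduces $T^n$ for all $n \le N$.

```python
import numpy as np

def defect(A):
    # Positive square root of I - A*A (well defined since A is a contraction),
    # computed via an eigendecomposition of the Hermitian matrix I - A*A.
    H = np.eye(A.shape[0]) - A.conj().T @ A
    w, V = np.linalg.eigh(H)
    w = np.clip(w, 0, None)  # guard against tiny negative rounding errors
    return V @ np.diag(np.sqrt(w)) @ V.conj().T

def egervary_dilation(T, N):
    # Unitary U on the direct sum of N+1 copies of H; its (0,0) block powers
    # compress to T^n for n = 0, ..., N.
    d = T.shape[0]
    DT = defect(T)             # (I - T*T)^{1/2}
    DTs = defect(T.conj().T)   # (I - TT*)^{1/2}
    U = np.zeros(((N + 1) * d, (N + 1) * d), dtype=complex)
    U[0:d, 0:d] = T
    U[0:d, N * d:] = DTs
    U[d:2 * d, 0:d] = DT
    U[d:2 * d, N * d:] = -T.conj().T
    for j in range(2, N + 1):       # the remaining blocks shift "junk" down
        U[j * d:(j + 1) * d, (j - 1) * d:j * d] = np.eye(d)
    return U

rng = np.random.default_rng(0)
d, N = 3, 4
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
T = A / (np.linalg.norm(A, 2) + 0.1)   # a strict contraction on H = C^d

U = egervary_dilation(T, N)
assert np.allclose(U @ U.conj().T, np.eye((N + 1) * d))   # U is unitary
for n in range(N + 1):
    # the compression of U^n to the first block equals T^n
    assert np.allclose(np.linalg.matrix_power(U, n)[0:d, 0:d],
                       np.linalg.matrix_power(T, n))
```

At $n = N + 1$ the identity breaks down (the “junk” wraps around to the first block), which is exactly why an honest unitary dilation, valid for all powers at once, requires infinitely many copies of $H$.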

In the setting of operators on Banach spaces, we say that an operator $T$ acting on a Banach space $X$ **has a dilation** if there exist a Banach space $Y$, an invertible isometry $U$ on $Y$, and two contractions $J : X \to Y$ and $Q : Y \to X$, such that

(**) $T^n = Q U^n J, \qquad n = 0, 1, 2, \ldots$

It is quite easy to see that if both $X$ and $Y$ are Hilbert spaces, then this boils down to definition (*). Moreover, an invertible isometry seems like the right generalization of a unitary, and examining (**) for $n = 0$ gives $QJ = I_X$; since $Q$ and $J$ are contractions, it follows that $J$ must be isometric (indeed, $\|x\| = \|QJx\| \leq \|Jx\| \leq \|x\|$) and that $JQ$ is a contractive projection of $Y$ onto $J(X)$. In this setting it is understood that we are looking for invertible isometric dilations, and no adjective is used alongside the word “dilation”. (Other kinds of dilations can also be considered, i.e., one can search for a positive dilation, etc.) Note that for an operator to have a dilation it must be a contraction, and we shall always understand that operators for which we seek a dilation are contractions.

One very simple thing I learned from this paper is that the existence of a dilation for every contraction in the setting of **all** Banach spaces is a ridiculously trivial matter: one just constructs $Y = \ell^\infty(\mathbb{Z}; X)$ (the bounded functions from $\mathbb{Z}$ to $X$), defines

$Jx = (\ldots, 0, 0, x, Tx, T^2x, T^3x, \ldots)$,

(where the $x$ is in the $0$th place), one lets $U$ be the left shift, and $Q$ the projection onto the $0$th summand. (A similar construction is given in the paper.) The key point of this paper is that this might not be very helpful unless $Y$ shares with $X$ some regularity properties, such as being a Hilbert space, reflexivity, being an $L^p$ space on a finite measure space, etc. For example, if one wants to remain in the realm of Hilbert spaces, the above construction does not work, and one needs to proceed differently (the usual proof of the dilation theorem in Hilbert spaces (see Wikipedia) uses the existence of a square root; basic, but not trivial). In this post we will always understand that the dilation space $Y$ we seek is to be chosen from within a well-defined class of Banach spaces.
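As a sanity check of the shift construction just described, here is a hedged numerical sketch (my own; the names `J`, `U`, `Q` mirror the roles in the text). We keep only a finite window of coordinates, so this `U` is merely a truncated shift rather than an invertible isometry on all of $\ell^\infty(\mathbb{Z}; X)$, but the algebra behind $Q U^n J = T^n$ is visible for $n \le M$:

```python
import numpy as np

rng = np.random.default_rng(1)
d, M = 3, 6   # dim of X, and window size: we keep coordinates 0, 1, ..., M
A = rng.standard_normal((d, d))
T = A / (np.linalg.norm(A, 2) + 0.1)   # a contraction on X = R^d

def J(x):
    # J x = (x, Tx, T^2 x, ..., T^M x): the forward orbit of x, one
    # coordinate per power of T (the coordinates with negative index are 0).
    out = [x]
    for _ in range(M):
        out.append(T @ out[-1])
    return out

def U(seq):
    # the left shift (U f)(k) = f(k+1); on the finite window we pad with 0
    return seq[1:] + [np.zeros(d)]

def Q(seq):
    # the projection onto the 0th summand
    return seq[0]

x = rng.standard_normal(d)
f = J(x)
for n in range(M + 1):
    # after n shifts, the 0th coordinate is exactly T^n x
    assert np.allclose(Q(f), np.linalg.matrix_power(T, n) @ x)
    f = U(f)
```

Each shift moves the next element of the orbit $x, Tx, T^2x, \ldots$ into position $0$, which is all the identity $QU^nJ = T^n$ amounts to; the work in the paper is in arranging this within a *regular* class of spaces.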

The authors don’t concentrate on the problem of finding a dilation for a single operator. They treat a more general problem, and this generality is actually a key to their proof. They make the following definition:

**Definition:** Let $\mathcal{C}$ be a class of Banach spaces and let $X \in \mathcal{C}$. A set of bounded operators on $X$, say $\mathcal{T}$, is said to have a **simultaneous dilation** (in $\mathcal{C}$) if there exist a $Y \in \mathcal{C}$ and a set of invertible isometries $\{U_T : T \in \mathcal{T}\}$ on $Y$, together with contractions $J : X \to Y$ and $Q : Y \to X$, such that

$T_1 T_2 \cdots T_n = Q U_{T_1} U_{T_2} \cdots U_{T_n} J$ for all $n \in \mathbb{N}$ and all $T_1, \ldots, T_n \in \mathcal{T}$.
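To see how this recovers the single-operator notion (a sketch, with the notation as I have reconstructed it above; the labels $Q$, $U_T$, $J$ are mine), take $\mathcal{T} = \{T\}$:

```latex
% With \mathcal{T} = \{T\} there is a single invertible isometry U = U_T,
% and choosing T_1 = \dots = T_n = T in the simultaneous-dilation identity gives
T^n \;=\; Q\, U^n J \qquad \text{for all } n \in \mathbb{N},
% which is precisely the dilation property (**) for the single contraction T.
```

So a simultaneous dilation is just a dilation in which one pair $(J, Q)$ and one ambient space $Y$ work for all the operators, and for all products of them, at once.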

The main theorem is as follows (Theorem 2.9 in the paper):

**Theorem:** *Suppose that $\mathcal{C}$ is a family of reflexive Banach spaces that is closed under finite $\ell^p$-direct sums (for some fixed $p$) and closed under ultraproducts. If $\mathcal{T}$ is a family of bounded operators on some $X \in \mathcal{C}$ that has a simultaneous dilation in $\mathcal{C}$, then so does the weak-operator closure of the convex hull of $\mathcal{T}$.*

For example, the family of all unitaries on a Hilbert space has a simultaneous dilation (trivially). Since the weak-operator closed convex hull of the unitaries contains all contractions, we find that all contractions on a Hilbert space have a simultaneous dilation (here we used the Theorem in the case where $\mathcal{C}$ is the class of all Hilbert spaces and $p = 2$).

The existence of a simultaneous dilation for all contractions on a Hilbert space is only epsilon harder than Sz.-Nagy’s dilation theorem, and is brought up just as an illustration. A more interesting example: since the positive invertible isometries on $L^p$ are weak-operator dense in the set of all positive contractions, we get that the set of all positive contractions on $L^p$ has a simultaneous dilation. The paper doesn’t exhaust all the dilation possibilities that it opens up (I guess that is why it is called a “toolkit”), and the authors suggest that the methods could be used in other situations; for example, maybe they can be used to find $*$-endomorphic dilations of CP maps on C*-algebras.

Two very nice surprises were:

- I learned of an application of N-dilations (see this link for an overview of the notion in the context of a single operator or of commuting operators on Hilbert spaces). In fact, N-dilations seem to be essential for the proof. The authors prove that a family has a simultaneous dilation if and only if it has a simultaneous N-dilation for every N (this is similar to Theorem 1.2 from this paper (in a slightly different setting), but curiously, there the easy direction was the direct implication. I wonder if the reverse implication there could also be proved with ultraproducts…).
- I found references to several earlier work regarding dilations (even N-dilations), unfortunately, a couple of them are in languages that I cannot read. In particular, I learned that the existence of dilations in the context of spaces allows to obtain pointwise ergodic theorems in spaces, as in this paper of Akcoglu and this paper of Akcoglu and Shucheston (I knew that Sz.-Nagy’s unitary dilation quickly reduces the mean ergodic theorem for contractions in to von Neumann’s mean ergodic theorem for unitaries, which is rather basic given the spectral theorem; however, the mean ergodic theorem for contractions in Hilbert spaces has a very elegant proof, it is not much different from von Neumann’s original proof, if I’m not mistaken. Pointwise ergodic theorems are harder, and is the easiest, so this is a far better application, even in the case, than what I was aware of).
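The mechanism behind these ergodic applications is worth recording (this is the standard reduction, sketched under the assumption that the dilation holds in the power sense $T^n = Q U^n J$ for all $n$): Cesàro averages pass through the dilation identity untouched.

```latex
% If T^n = Q U^n J for all n >= 0, then the ergodic averages satisfy
\frac{1}{N} \sum_{n=0}^{N-1} T^n
\;=\; Q \left( \frac{1}{N} \sum_{n=0}^{N-1} U^n \right) J ,
% so convergence of the averages of the invertible isometry U is
% pushed down to T by the bounded maps Q and J (for pointwise, rather
% than mean, theorems one also wants positivity of Q and J).
```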

I decided to read this book primarily because I like to read the books I have, but also because I am teaching graduate functional analysis in the coming semester and I wanted to amuse myself by toying with the possibility of de-emphasizing Banach spaces and giving a more general treatment that includes topological vector spaces. I enjoyed thinking about whether it can and/or should be done (the answers are **yes** and **no**, respectively).

Oh sister! I was pleasantly surprised by how much I enjoyed this book. They don't write books like that any more. Published in 1964, the authors follow quite closely the tradition of Bourbaki. Not too closely, thankfully. For example, they restrict attention from the outset to spaces over the real or complex numbers, and don't torture the reader with topological division rings; moreover, the book is only 158 pages long. However, it is definitely written under the influence of Bourbaki. That is, they develop the whole theory from scratch in a self-contained, clean, efficient and completely rigorous way, working their way from the most general spaces to more special classes of spaces. Notions are given at the precise place where they become needed, and all the definitions are very economical. It is clear that every definition, lemma, theorem and proof were formulated after much thought had been given as to how they would be most useful later on. Examples (of "concrete" spaces to which the theory applies) are only given at the end of the chapters, in so-called "supplements". The book is rather dry, but it is a very subtly tasty kind of dry. The superb organization is manifested in the fact that the proofs are short; almost all of them are shorter than two (short) paragraphs, and only on rare occasion is a proof longer than a (small) page. There is hardly any trumpet blowing (such as "we now come to an important theorem") and no storytelling, no opinions and no historical notes, not to mention references, outside the supplements. The authors never address the reader. It seems that there is not one superfluous word in the text. Oh, well, perhaps there is *one* superfluous word.

After the definition of a **precompact set** in a (locally convex) topological vector space, the authors decided to illustrate the concept and added the sentence *“Tapioca would make a suitable mental image”*. This happens on page 49, and is the first and last attempt made by the authors to suggest a mental image, or any other kind of literary device. It is a little strange that in this bare desert of topological vector spaces, one should happen upon a lonely tapioca, just one time…

* * * * *

So, why don’t people write books like that any more? Of course, because this manner of writing went out of style. It had to become unfashionable, first of all, simply because old things always do. But we should also remember that mathematical style of writing is not disconnected from the cultural and philosophical surroundings. So perhaps in the 1930s and up to 1950s people could write dogmatically and religiously about mathematics, but as time went by it was becoming harder to write like this about anything.

In addition to this, it is interesting that there was also some opposition to Bourbaki, from not long after the project took off until many, many years later.

Not that I myself am a big fan. I personally believe that maximal generality is not conducive to learning, and I prefer, say, Discussion-SpecialCase-Definition-**Example**-Theorem-Proof to Definition-Theorem-Proof any day. I also don't believe in teaching notions from the most general to the more specific. For example, in my opinion, set theory should not be taught-before-everything-else, etc. For another example, when I teach undergraduate functional analysis I start with Hilbert spaces and then do Banach spaces, which is inefficient from a purely logical point of view. But this is how humans learn: first we gurgle, then we utter words, then we speak; only much later do we learn about the notion of a *language*.

So, yes, I do find the books by Bourbaki hard to use (reading about all the pranks related to the Bourbaki gang, one cannot sometimes help but wonder whether it is all a gigantic prank). But I have great admiration and respect for the ideals that the group set for itself and for some of its influences on mathematical culture. The book by Robertson and Robertson is an example of how to take the Bourbaki spirit and make something beautiful out of it. And because of my admiration and respect for this heritage, it is a little sad to know that Bourbaki was quite violently abused and denounced.

If you have ever read some harsh and mean criticism of the Bourbaki culture, if you have heard someone try to insult someone else by comparing them to Bourbaki, then please keep this in mind. Nobody really teaches three-year-olds set theory before numbers. In the beginning of every Bourbaki book ("To the reader"), it is explicitly stated that, even though in principle the text requires no previous mathematical knowledge on the part of the reader (besides the previous books in the series), "it is directed especially to those who have a good knowledge of at least the content of the first year or two of a university mathematics course". Bourbaki didn't "destroy French mathematics" or any other nonsense. The source of violent opposition is not theological or pedagogical, but psychological. In my experience, the most fervent opponents of the Bourbaki tradition whom I have heard of are people of non-negligible egos (and their students), who were simply very insulted to find out that a self-appointed, French-speaking(!) elite group decided to take the lead, without asking permission or inviting them (or their teachers). That hurt, and a crusade, spanning decades, ensued.

* * * * *

Well, let us return to the pleasant Robertsons. Besides the lonely tapioca, I found one other curious thing about this book. On the first page the names of the authors are written:

**A.P. Robertson**

(Professor of Mathematics

University of Keele)

AND

**Wendy Robertson**

So, what’s the deal with A.P. and Wendy? Is A.P. a man? I guessed so. Are they brother and sister? Why is he a professor and she isn’t? Are they father and daughter? I wanted to find out. I found their obituaries: Wendy Robertson (she passed away last year) and Alexander Robertson.

So they were husband and wife, and it seems that they had a beautiful family and a happy life together, many years after writing this book together. I remained curious about one thing: whose idea was it to suggest tapioca? Did they immediately agree about this, or did they argue for weeks? Was it a lapse? Was it a conscious lapse?

* * * * *

In the course that I will teach in the coming semester, I am not going to use the language of topological vector spaces. I will concentrate on Banach spaces, then weak and weak-* topologies will enter. These are, of course, topological vector spaces, but there is no need to set up the whole framework to notice this, and there is no need to prove everything in the most general setting. For example, the students will be able to prove a Hahn-Banach extension theorem for, say, weak-* continuous functionals, by imitating the proof that I will give in class in a similar setting.

On Saturday I went to my nephew's Bar-Mitzva, and they had tapioca for dessert (not bad), and I thought about Wendy and Alex Robertson. Well, especially about Wendy. I think that it was her idea.
