is UCP .

A tuple is said to be *minimal* if there is no proper reducing subspace such that . It is said to be

In an earlier paper (“Dilations, inclusions of matrix convex sets, and completely positive maps”) that I wrote with co-authors, we claimed that if two compact tuples and are minimal and have the same matrix range, then is unitarily equivalent to ; see Section 6 there (the printed version corresponds to version 2 of the paper on the arXiv). This is false, as subsequent examples by Ben Passer showed (see this paper). A couple of other statements in that section are also incorrect, most obviously the claim that every compact tuple can be compressed to a minimal compact tuple with the same matrix range. All the problems with Section 6 of that earlier paper “Dilations,…” can be quickly fixed by throwing in a “non-singularity” assumption, and we posted a corrected version on the arXiv. (The results of Section 6 there do not affect the rest of the results in the paper, and are somewhat tangential to its main direction.)

In the current paper, Ben and I take a closer look at the non-singularity assumption that was introduced in the corrected version of “Dilations,…”, and we give a complete characterization of non-singular tuples of compacts. This characterization involves the various kinds of extreme points of the matrix range . We also make a serious investigation into the fully compressed tuples defined above. We find that a matrix tuple is fully compressed if and only if it is non-singular and minimal. Consequently, we get a clean statement of the classification theorem for compacts: if two tuples and of compacts are fully compressed, then they are unitarily equivalent if and only if .


I had the privilege of working with two very bright students who have recently finished their undergraduate studies: Mattya Ben-Efraim (from Bar-Ilan University) and Yuval Yifrach (from the Technion). The amount of material they learned for this one-week project (the basics of C*-algebras and operator spaces) is remarkable, and they actually helped settle the question that I raised to them.

I learned a lot of things in this project. First, I learned that my conjecture was false! I also learned and re-learned some programming abilities, and I learned something about the subtleties and limitations of numerical experimentation (I also learned something about how to supervise an undergraduate research project, but that’s beside the point right now).

Following old advice of Halmos, the problem that I posed to Mattya and Yuval was in the form of a *yes/no* question. To state this question, we need to recall some definitions. If is an matrix, another matrix is said to be a **dilation** of if

In this case, is said to be a **compression** of . We then write . If and are tuples of matrices, we say that is a dilation of , and that is a compression of , if for all . We then write .

A -tuple is said to be **normal** if is normal and for all . Normal tuples of matrices (or operators) are the best understood ones, because – thanks to the spectral theorem – they are simultaneously unitarily diagonalizable.

If is an matrix, we define its **norm** to be the operator norm of when considered as an operator on , that is: (here denotes the Euclidean norm).

**The complex matrix cube problem:** *Given a tuple of matrices, can one find a normal dilation of such that for all ?*

I had some reasons to believe that the answer is *yes*, one of which was that the answer was proved to be *yes* when are all selfadjoint; see this paper by Passer, Solel, and myself (I reported on this paper in this previous post). Passer later proved that if we replace with , then the answer is *yes* for arbitrary tuples. Passer’s proof did not look optimal to me. Also, I carried out some primitive numerical experimentation that seemed to suggest that a positive answer is plausible.

Suppose we are given a -tuple of contractions . We wish to know whether it is true or false that has a normal dilation such that for all (this is not exactly the way we formulated the problem above, but it can be seen to be equivalent).

The first observation is that it is enough to consider only tuples of unitaries. Indeed, if is a contraction (meaning that ) then

is a unitary dilation of . So given a -tuple of contractions, we can find a -tuple of unitaries such that . Thus, we may as well assume that is a tuple of unitaries, and ask whether we can dilate .
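The displayed formula for the unitary dilation is lost above; presumably it is the classical Halmos construction, in which a contraction T sits in the top-left corner of a unitary of twice the size, built from T and its defect operators. Here is a quick numerical sketch in Python (the project itself used Matlab, so this is only an illustration, and the function names are mine):

```python
import numpy as np

def psd_sqrt(M):
    """Square root of a positive semidefinite Hermitian matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def halmos_dilation(T):
    """Classical 2n-by-2n unitary dilation of a contraction T."""
    n = T.shape[0]
    I = np.eye(n)
    D_Tstar = psd_sqrt(I - T @ T.conj().T)   # defect operator of T*
    D_T = psd_sqrt(I - T.conj().T @ T)       # defect operator of T
    return np.block([[T, D_Tstar], [D_T, -T.conj().T]])

# Dilate a random contraction and inspect the two defining properties.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
T = 0.9 * A / np.linalg.norm(A, 2)           # operator norm 0.9: a contraction
U = halmos_dilation(T)
```

The top-left corner of U is T (so T is a compression of U), and U is unitary; this is exactly the observation that lets us pass from contractions to unitaries.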

We considered normal tuples with joint eigenvalues at the vertices of the polytope , where is a regular polygon with vertices that circumscribes the unit disc . When is moderately large, the boundary of is very close to , and in this post I will ignore this difference (the reader can check that for the results we get, ignoring this difference actually puts us on the safe side of things).
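For concreteness: the vertices of a regular m-gon circumscribing the unit disc lie on a circle of radius 1/cos(pi/m), which tends to 1 as m grows; this is the sense in which the polygon boundary is close to the circle. A small numpy sketch of the construction (the function names are my own, not from the project code):

```python
import itertools
import numpy as np

def polygon_vertices(m):
    """Vertices of the regular m-gon whose inscribed circle is the unit disc."""
    R = 1.0 / np.cos(np.pi / m)               # circumradius; tends to 1 as m grows
    return R * np.exp(2j * np.pi * np.arange(m) / m)

def normal_tuple(m, d):
    """A commuting normal (diagonal) d-tuple whose joint eigenvalues run over
    all d-fold combinations of the polygon's vertices."""
    joint = np.array(list(itertools.product(polygon_vertices(m), repeat=d)))
    return [np.diag(joint[:, k]) for k in range(d)]

vs = polygon_vertices(100)
N = normal_tuple(6, 2)          # a pair of 36-by-36 commuting diagonal matrices
```

Already for m = 100 the vertices have modulus below 1.001, so ignoring the polygon/disc difference is indeed harmless at moderate m.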

Given a fixed tuple of unitaries , it can be shown that has a normal dilation with for all if and only if

(*)

for every matrix valued polynomial of degree one, where is the fixed normal tuple we constructed above. Let me emphasize: here is some normal dilation whose existence we do not yet know, and is the fixed tuple with joint eigenvalues at the vertices of the polytope from above. We recall that a matrix valued polynomial is evaluated on a tuple of matrices as follows:

.

So the first method of attack was the following: we randomly sampled a unitary tuple , and then we tried to find a polynomial such that (*) was violated, with . We thought of several ways to look for such a polynomial given , one of which was naively iterating over a mesh of all possible coefficients . As you can easily see, this method is so inefficient that even for moderately small and , the search could take a lifetime. Another idea was to run some numerical optimization, such as gradient descent, on the function , but since this function is not convex, this too was quite futile. And all this just for a given tuple , which might well happen to have a dilation.
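The post does not record how the random unitary tuples were drawn. A standard way to sample a unitary from the Haar (uniform) measure is the QR trick with a phase correction, following Mezzadri; a sketch of what such sampling could look like:

```python
import numpy as np

def haar_unitary(n, rng):
    """Sample an n-by-n unitary from the Haar measure."""
    Z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    # Correct the column phases; without this step the distribution
    # of Q is not exactly Haar.
    d = np.diag(R)
    return Q * (d / np.abs(d))

rng = np.random.default_rng(42)
U, V, W = (haar_unitary(4, rng) for _ in range(3))   # a random 3-tuple
```

Each sample is unitary up to machine precision, and repeated sampling gives the independent random tuples the search was run on.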

The second general approach was still to randomly select a tuple of unitaries and to check whether it has a normal dilation, but this time the test was somewhat more indirect. Basically, modulo some equivalences within the theory, we know that has the required dilation of size at most if and only if there exists a UCP map sending to for , where is the tuple of normals constructed above. This, modulo some more equivalences (and as was noted in this paper of Helton, Klep and McCullough), is equivalent to the existence of positive semidefinite matrices such that

for

where , for , and

.

The existence of such semidefinite matrices can be interpreted as the feasibility of a certain semidefinite program (SDP). In fact, we decided to treat the full semidefinite program as follows

minimize

such that

,

,

.

Note that we moved to the right hand side, to make the equality constraint affine in the variables and . Recall that and are all fixed. In the implementation we actually defined this as a maximization problem

maximize

such that

,

,

.

Now, there is available software in Matlab that lets one solve the above SDP quite reliably, and we used the high level interface CVX, which invoked either of the solvers SDPT3 or SeDuMi (we used both solvers and played with precision parameters to increase our confidence that the results we got were correct). Besides being much faster, this approach had the great advantage that it could tell us the smallest such that had a normal dilation such that .

We ran the tests for small values of and . You can see some histograms in the presentation (the value plotted in the histograms is , in order to allow a direct comparison with the conjecture). Interestingly, we see that with very high probability, the required value of is on average significantly lower than . For and , we found a few random counterexamples, but they required a value that was just 2% over .

Once we know that the average value of is less than , it becomes heuristically reasonable that counterexamples are hard to come by, because of concentration of measure phenomena: roughly speaking, the probability that a Lipschitz function on the unitaries (say) is far from its mean goes down exponentially with the dimension. For the same reason, once we found a counterexample , it is very hard to find the coefficients of a matrix valued polynomial such that . And indeed, we have not yet verified by an independent method that our counterexamples are indeed counterexamples.

The counterexamples we found are very unlikely to be caused by numerical error, since we tested the result with two different solvers, and the advertised precision of the solvers is several orders of magnitude finer than 2%.

After we found the random counterexamples, it occurred to us that there was no reason to sample the unitaries independently. We recalled that in the selfadjoint case, tightness of the constant was established using anti-commuting unitaries. Indeed, since counterexamples are rare, one would think that the matrices have to conspire somehow in order to mess up the inequality. So we searched for things that are anti-commuting-like. And it did indeed turn out that the commuting matrices

where are also a counterexample (in the case ). We still haven’t found a polynomial for which . We will probably continue looking for one when the holiday is over, and then I will update.

Here are Mattya and Yuval’s slides which they presented in the talk they gave at the end of the week. I also plan to put the code and files with raw results online on my homepage at some point.

The main method for checking what is the “inflation constant” required for a dilation, using a semidefinite program, is based on basic operator space theory, and in particular draws upon the algorithm described in this paper of Helton, Klep and McCullough.

We used Matlab. The numerical heavy lifting was done by others. We solved the semidefinite program using CVX – a high level Matlab software package for specifying and solving convex programs. We also used YALMIP – another high level Matlab software package for specifying and solving convex programs – to verify the results we obtained with CVX. Both CVX and YALMIP invoked SDP solvers SDPT3 and SeDuMi.

This project came after several years of collaboration with colleagues, and in particular, I had many conversations on the subject with Ben Passer before and during the projects week.

I owe many thanks to the organizers of this projects week, Ram Band, Tali Pinsky, and Ron Rosenthal. Thanks to this opportunity I explored an avenue that I never walked through before.


I have been in contact with the students in the last few weeks and we decided to concentrate on “the matrix cube problem”. On Sunday, when the week begins, I will need to present the background to the project to all participants of this week, and I have **seven minutes (!!)** for this. As everybody knows, the shorter the presentation, the harder the task, and the more preparation and thought it requires. So I will use this blog to practice my little talk.

This project is in the theory of operator spaces. My purpose is to give you some kind of flavour of what the theory is about, and what we will do this week to contribute to our understanding of this theory.

Let be an -dimensional Hilbert space (this just means: an -dimensional inner product space over the complex numbers). Recall that is also a normed space with norm . A basic fact is that every such is *isometrically isomorphic* to the space equipped with the standard inner product

,

which induces the Euclidean norm . This means that there exists a linear isomorphism that preserves the inner product, and in particular, the norm : .

Take two linearly independent vectors , and construct the subspace . **Fact:** no matter how we choose and , is always a -dimensional Hilbert space, i.e., is **isometrically isomorphic** to with the Euclidean norm.

Now let be the space of linear operators on . This space is also a normed space, when we give it the norm

.

Using the isometric isomorphism , we will identify with the space of matrices.

Take two linearly independent operators , and construct the subspace . As a linear space, is isomorphic to . However, as a **normed** space, might be any one of an uncountable family of two dimensional normed spaces. For example, can be isometrically isomorphic to or to . On the other hand, if we are assuming that is finite dimensional, then cannot be isometrically isomorphic to ! (If we allow infinite dimensional , then we can get any two dimensional normed space as the span of two operators.)

Understanding the normed space boils down to computing the norm:

for every .
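The elided expression is the operator norm of a linear combination of the two operators. As a quick numpy illustration (the matrices below are my own toy choices), the span of two matrix units carries the Euclidean norm, while a different pair gives a different two dimensional normed space:

```python
import numpy as np

def span_norm(x, y, A, B):
    """Operator norm (largest singular value) of x*A + y*B."""
    return np.linalg.norm(x * A + y * B, 2)

A = np.array([[1.0, 0.0], [0.0, 0.0]])   # matrix unit E11
B = np.array([[0.0, 1.0], [0.0, 0.0]])   # matrix unit E12
# ||x*E11 + y*E12|| = sqrt(x^2 + y^2): this span is a 2-dim Hilbert space

C = np.eye(2)
D = np.diag([1.0, -1.0])
# ||x*C + y*D|| = max(|x + y|, |x - y|): a genuinely different norm
```

Computing these two parametrized norms is exactly the kind of "simple-minded" problem discussed next.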

It is remarkable how such a simple-minded problem is actually very difficult. What mathematicians can do in difficult situations is try to do one of the following:

- **Experiment with examples.** I cannot overstate how important it is for the health of one’s research that examples be sought and examined. Since calculating the norm of matrices of even moderate size requires incredibly tedious computations, at some point it becomes obvious that we should recruit the computer to help us explore what is going on.
- **Solve the problem for an interesting special case.** For example, suppose that the operators and are **normal** and **commuting**. Then we know that there exists an orthonormal basis in which and . Then the calculation of any polynomial in is easy, and in particular .
- **Reduce the problem to a special case.** For example, the easiest case is when and are scalars, i.e., . The case of two commuting normal operators reduces to the case of scalar ones, because in this case decomposes as the direct sum .
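The special case of commuting normal operators can be checked numerically: for a commuting normal (here already diagonal) pair, the norm of any polynomial in them is the maximum of the polynomial's absolute value over the joint eigenvalues. A small numpy check (the eigenvalues and the polynomial are arbitrary choices of mine):

```python
import numpy as np

lam = np.array([1.0, -0.5, 2.0])      # joint eigenvalues are the pairs (lam[i], mu[i])
mu = np.array([0.0, 1.0, -1.0])
N1, N2 = np.diag(lam), np.diag(mu)

# p(x, y) = x*y + x^2 - 3, evaluated on the commuting pair (N1, N2)
P = N1 @ N2 + N1 @ N1 - 3.0 * np.eye(3)

norm_as_matrix = np.linalg.norm(P, 2)                     # operator norm
norm_over_eigs = np.max(np.abs(lam * mu + lam**2 - 3.0))  # max over eigenvalues
```

The two quantities agree, which is exactly why the commuting normal case is considered solved.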

We are led to the question: can we learn something about the case of general operators by using the fact that the problem is solved for commuting normal ones?

Now, a general pair of operators cannot be decomposed into some kind of “sum” of normal commuting pairs. However, we do have the following theorem.

**Theorem.** There exists a constant such that for any two matrices , identified with two operators on , there exist two commuting normal operators such that

**(*)**

and

.

The equality **(*)** gives , so we can get a bound on if we have a reasonable bound on .

We are therefore led to the question: **what is the best possible value of ?**

I have worked on this problem with collaborators, and we have partial results. The optimal constant eludes us; we are stuck, and we are not sure what the constant should be. What to do? We go back to option no. 1: experiment with examples.

Ok. That is clearly more than seven minutes. To force myself to adhere to the seven minute limit, I made slides (there are four real slides there, following the rule of one slide for every two minutes).


I returned to the phone conversation. My mother was on the line, and she asked me to talk to my father, who had brain surgery scheduled for tomorrow, and persuade him to get dressed and go to the hospital with her. He accepted my authority. “OK father”, he said.

And then the dean came in. Bla bla bla, some committee of important people something, bla bla, my tenure and promotion to the rank of associate professor were approved. Congratulations! “Thank you.”

Later that day I drove to the hospital to see my parents. “Hey, good news – I got tenure.” My father was moved to tears. He thought that tenure was a really big deal. He was right. It is a really big deal. On the next morning he went into surgery, from which he never woke up.

And so the last important thing that I told my father was that I got tenure, and that made him very happy. But I wish that it could have been something else.


“The first rule of style”, writes Polya, “is to have something to say”.

“The second rule of style is to control yourself when, by chance, you have two things to say; say first one, then the other, not both at the same time”.

Polya’s third rule of style is: “Don’t say what does not need to be said” or maybe “Don’t say the obvious”. I am not sure of the exact formulation, because Polya doesn’t write the third rule down – that would be a violation of the rule!

Polya’s three rules are excellent and one is advised to follow them if one strives for *good style *when writing mathematics. However, style is not the only criterion by which we measure mathematical writing. There is a tradeoff between succinct and elegant style, on the one hand, and clarity and precision, on the other.

“Don’t say the obvious” – sure! But what is obvious? And to whom? A careful writer leaving a well placed exercise in a textbook is one thing. An author of a long and technical paper that leaves an exercise to the poor, overworked referee, is something different. And, of course, a mathematician leaving cryptic notes to his four-months-older self, is the most annoying of them all.

“Don’t say the obvious” – sure, sure! But is it even true? I think that all the mistakes that I am responsible for publishing have originated by an omission of an “obvious” argument. I won’t speak about actual mistakes made by others, but I do have the feeling that some people have gotten away with not explaining something non-trivial, and were lucky that things turned out to be as their intuition suggested (granted, having the correct intuition is also a non-trivial achievement).

I disagree with Polya’s third rule of style. And you see, to reject it, I had to formulate it. QED.

My first discovery: *Winnipeg is not that bad!* In fact I loved it. Example: here is the view from the window of my room in the university residence:

Not bad, right? A very beautiful sight to wake up to in the morning. (I got the impression that Winnipeg is nothing to look forward to from Canadians. People of the world: don’t listen to Canadians when they say something bad about any place that just doesn’t quite live up to the standard of Montreal, Vancouver, or Banff.) Here is what you see if you look from the other side of the building:

The conference was very broad and diverse in subjects, as it brings together people working in Operator Theory as well as in Operator Algebras (and neither of these fields is very well defined or compact). I have mixed feelings about mixed conferences. But since I haven’t really decided what I myself want to be working on when I grow up, I think they work for me.

I was invited to give a series of three talks, which I devoted to noncommutative function theory and noncommutative convexity. My second talk was about my joint work with Guy Salomon and Eli Shamovich on the isomorphism problem for algebras of bounded nc functions on nc varieties, which, incidentally, we posted on the arXiv on the day the conference began. May I invite you to read the introduction to that paper? (If you like it, also take a look at the previous post.)

On this page you can find the schedule, abstracts, and slides of most of the talks, including mine. Some of the best talks were (as happens so often) whiteboard talks, so you won’t find them there. For example, the beautiful series by Aaron Tikuisis was given like that and now it is gone (George Elliott remarked that a survey of the advances Tikuisis describes would be very desirable, and I agree).

Aaron Tikuisis gave a beautiful series of talks on the rather recent developments in the classification theory of separable, unital, nuclear, simple C*-algebras (henceforth SUNS C*-algebras; the algebras are also assumed infinite dimensional, but let’s make that a standing hypothesis instead of complicating the acronym). I think it is fair to call his series the most important talks of this conference. In my opinion, the work (due to many mathematicians, including himself) that Tikuisis presented can be described as the resolution of the Elliott conjecture; I am sure that some people will disagree with the last statement, including George Elliott himself.

Given a SUNS C*-algebra , one defines its *Elliott invariant*, , to be the K-theory of , together with some additional data: the image of the unit of in , the space of traces of , and the pairing between the traces and K-theory. It is clear, once one knows a little K-theory, that if and are isomorphic C*-algebras, then their Elliott invariants are isomorphic, in the sense that is isomorphic to for (in a unit preserving way), and that is affinely homeomorphic with in a way that preserves the pairing with the K-groups. Thus, if two C*-algebras are known to have different K-groups, or different Elliott invariants, then these C*-algebras are not isomorphic. This observation was used to classify AF algebras and irrational rotation algebras (speaking of which, I cannot help but recommend my friend Claude Schochet’s recent “Notices” article on the irrational rotation algebras).

In the 1990s, George Elliott conjectured that two SUNS C*-algebras are *-isomorphic if and only if . This conjecture became one of the most important open problems in the theory of operator algebras, and arguably **THE** most important open problem in C*-algebras. Dozens of people worked on it. Many classes of C*-algebras were shown to be *classifiable* – meaning that they satisfy the Elliott conjecture – but eventually the conjecture was shown to be false in 2002 by Rordam, who built on earlier work by Villadsen.

Now, what does the community do when a conjecture turns out to be false? There are basically four things to do:

- Work on something else.
- Start classifying “clouds” of C*-algebras, for example, show that crossed products of a certain type are classifiable within this family (i.e. two algebras within a specified class are isomorphic iff their Elliott invariants are), etc.
- Make the class of algebras you are trying to classify smaller, i.e., add assumptions.
- Make the invariant bigger. For example, is not enough, so people used . When that turned out not to be enough, people started looking at traces. So if the current invariant is not enough, maybe add more things, the natural candidate (I am told) being the “Cuntz semigroup”.

The choice of what to do is a matter of personal taste, point of view, and also ability. George Elliott has made the point that choosing 4 requires one to develop new techniques, whereas choosing 3 keeps the focus on the existing techniques, shrinking the class of C*-algebras until the currently known techniques can tackle it.

Elliott’s objections notwithstanding, the impression that I got from the lecture series was that most main forces in the field agreed that the third adaptation above was the way to go. That is, they tried to prove the conjecture for a slightly more restricted class of algebras than SUNS. Over the past 15 years or so (or a bit more), they identified an additional condition – let’s call it Condition Z – that, once added to the standard SUNS assumptions, allows classification. And it’s not that adding the additional assumption made things really easy, it only made the proof *possible* – it still took first-class work even to identify what assumption needs to be added, and more work to prove that with this additional assumption the conjecture holds. They proved:

**Theorem (lots of people):** *If and are infinite dimensional SUNS C*-algebras which satisfy the Universal Coefficient Theorem and an additional condition Z, then if and only if .*

I consider this the best possible resolution of the Elliott conjecture, given that it is false!

A major part of Aaron’s talks was devoted to explaining what this additional condition Z is. (What the Universal Coefficient Theorem is, though, was not explained, and if I understand correctly, it is in fact not known whether it holds automatically for such algebras.) In fact, there are two conditions that one can take for “condition Z”: (i) finite nuclear dimension, and (ii) Z-stability. The notion of nuclear dimension corresponds to the usual notion of dimension (of the spectrum) in the commutative case. Z-stability means that the algebra in question absorbs the *Jiang-Su algebra* under tensor products in a very strong sense. Following a very long tradition in talks about the Jiang-Su algebra, Aaron did not define the Jiang-Su algebra. This is not so bad, since he did explain in detail what finite nuclear dimension means, and said that Z-stability and finite nuclear dimension are equivalent for infinite dimensional C*-algebras (this is the *Toms-Winter conjecture*).

What was very nice about Aaron’s series of talks was that he gave von Neumann algebraic analogues of the theorems, conditions, and results, and explained how the C*-algebra people got concrete inspiration from the corresponding results *and proofs* in von Neumann algebras. In particular he showed the parallels to Connes’s theorem that every injective type factor with separable predual is isomorphic to the hyperfinite factor. He made the point that separable predual in the von Neumann algebra world corresponds to separability for C*-algebras, hyperfiniteness corresponds to finite nuclear dimension, and factor corresponds to a simple C*-algebra. He then sketched the lines of the proof of the part of Connes’s theorem that says that injectivity of a factor implies hyperfiniteness of (which by Murray and von Neumann’s work implies that is the hyperfinite factor). After that he repeated a similar sketch for the proof that -stability implies finite nuclear dimension.

This lecture series was very inspiring, and I think that the organizers made an excellent choice in inviting Tikuisis to give it.

Christopher Ramsey gave a short talk on “residually finite dimensional (RFD) operator algebras”, based on the paper that he and Raphael Clouatre recently posted on the arXiv. The authors take the notion of residual finite dimensionality, which is quite well studied and understood for C*-algebras, and develop it in the setting of nonselfadjoint operator algebras. It is worth noting that even a finite dimensional nonselfadjoint operator algebra might fail to be representable as a subalgebra of a matrix algebra. So it is worth specifying that an operator algebra is said to be RFD if it can be embedded completely isometrically in a direct sum of matrix algebras (so it is not immediate that a finite dimensional algebra is RFD, though they prove that it is).

What I want to share here is a neat and simple observation that Chris and Raphael made, which seemed to have been overlooked by the community.

When we study operator algebras, there are several natural relations by which to classify them: completely isometric isomorphism, unitary equivalence, completely bounded isomorphism, and similarity. Clearly, unitary equivalence implies completely isometric isomorphism, and similarity implies completely bounded isomorphism. The converses do not hold. However, in practice, operator algebras are often shown to be completely boundedly isomorphic by exhibiting a similarity between them (for example, in my recent paper with Guy and Eli). That happens because we are often interested in the “multiplicity free case”.

[**Added on June 11, following Yemon’s comment:** We say that is *similar* to if there is an invertible such that . Likewise, two maps and are said to be

Raphael and Chris observed that, in fact, completely bounded isomorphism is the same as similarity, modulo completely isometric isomorphisms. To be precise, they proved:

**Theorem (the Clouatre-Ramsey trick):** *If and are completely boundedly isomorphic, then and are both completely isometrically isomorphic to algebras that are similar.*

**Proof:** Suppose that and . Let be a c.b. isomorphism. By Paulsen’s theorem, is similar to a completely contractive isomorphism . So we get that the map

decomposes as a product of a complete isometry and a similarity. Likewise, the completely bounded isomorphism is similar to a complete contraction , and we have that

decomposes as the product of a similarity and a complete isometry. Since the composition of all these maps is , the proof is complete.

Our goal in this post is to give several answers to this question and its generalisations. In order to obtain elegant answers, we work over the complex field (e.g., there are many polynomials, such as , that have no real zeros; the fact that they don’t have real zeros tells us something about these polynomials, but there is no way to “recover” these polynomials from their non-existing zeros). We will write for the algebra of polynomials in one complex variable with complex coefficients, and consider it as a function of the complex variable . We will also write for the algebra of polynomials in (commuting) variables, and think of it – at least initially – as a function of the variable .

Let us begin by recalling that, by the Fundamental Theorem of Algebra, every (one variable) polynomial decomposes into a product of linear factors. Thus, if we know the zeros **including their multiplicities**, then we can determine the polynomial up to a multiplicative factor. Moreover, if we know that the zeros of some polynomial are , then we know that must have the form

(*) ,

where the can, in principle, be any positive integers.
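In numpy terms, the correspondence between a monic polynomial and its zeros with multiplicities is realized by np.poly and np.roots; for instance (the particular roots below are my own choice):

```python
import numpy as np

roots = [1.0, 1.0, 2.0, -3.0]    # zeros listed with multiplicity
p = np.poly(roots)                # monic polynomial with exactly these zeros
# p holds the coefficients of z^4 - z^3 - 7 z^2 + 13 z - 6,
# i.e. (z - 1)^2 (z - 2) (z + 3)
recovered = np.roots(p)           # recovers the zeros, up to ordering and rounding
```

Listing the same zeros with different multiplicities gives a different polynomial, which is exactly the information the zero set alone does not see.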

Let us reformulate the above observation in a slightly different language, which generalizes well to the multivariable setting. If is a polynomial, we write

Every polynomial generates a principal ideal . Conversely, every ideal in is principal. For an ideal we write

for all .

If , then . Now, if we begin with a polynomial as in (*), and we are given such that , what can we say about ? Well, if we knew that the zeros of have the same multiplicities as those of , then we would know that for some nonzero scalar , and in particular we would know that (and vice versa, of course). However, in general, only implies that , which is usually larger than . Note that if , then , because is clearly equal to the product of and some other polynomial.

Now let us consider the much richer case of polynomials in several commuting variables. For brevity, let us write for the vector variable , and let us write . Since this algebra is **not** a principal ideal domain (that’s an easy exercise), it turns out to be more appropriate to talk about ideals rather than single polynomials. Let us define the zero locus similarly to as above:

for all .

We also introduce the following notation: given , we write

for all .

Note that is always an ideal.

The question now becomes: to what extent can we recover from ? A slightly different but related question is: what is the gap between and ? We know already from the one variable case that we cannot hope to fully recover an ideal from its zero locus, but it turns out that a rather satisfactory solution can be given.

Suppose that is a polynomial which is not necessarily contained in , but that for some (think, for example, of and ). Then since , we also have that , so . So the ideal contains at least all polynomials such that .

**Definition:** Let . The **radical** of is the ideal

there exists some such that .

(On the left hand side, there are two different commonly used notations for the radical).

**Exercise:** The radical of an ideal is an ideal.

**Theorem (Hilbert’s Nullstellensatz):** For every ,

.

Nullstellensatz means “theorem of the zero locus” in German, and we can all agree that this is an appropriate name for this theorem. We shall not prove it; it is usually proved in a first or second graduate course in commutative algebra. It is a beautiful theorem, indeed, but it is not **perfect**. Below we shall obtain a perfect Nullstellensatz, that is, one in which the ideal is completely recovered from the zeros, with no need to take a radical. Of course, we will need to change the meaning of “zeros”.
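In one variable the radical phenomenon can be checked by hand with polynomial division; here is a numpy sketch of my own toy example: q = (z-1)(z-2) does not lie in the principal ideal generated by p = (z-1)^2 (z-2), but its square does, so q lies in the radical.

```python
import numpy as np

p = np.poly([1.0, 1.0, 2.0])     # (z-1)^2 (z-2)
q = np.poly([1.0, 2.0])          # (z-1)(z-2): same zero set, lower multiplicity

def in_principal_ideal(f, g):
    """True iff g divides f, i.e. f lies in the ideal generated by g."""
    _, r = np.polydiv(f, g)
    return bool(np.allclose(r, 0.0))

q_in = in_principal_ideal(q, p)                    # q is not a multiple of p
q2_in = in_principal_ideal(np.polymul(q, q), p)    # but q^2 = p * (z-2) is
```

So the zero set of p only pins down the radical of (p), which is the one-variable shadow of the Nullstellensatz above.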

My recent work in operator algebras and noncommutative analysis has led me, together with my collaborators Guy Salomon and Eli Shamovich, to discover another Nullstellensatz (actually, we have a couple of Nullstellensätze, but I’ll tell you only about one). This result has already been known to some algebraists in one form or another – after we proved it, we found that it can be dug out of a paper of Eisenbud and Hochster – but it does not seem to be well known. I will write the result and its proof in a language that I (and therefore, hopefully, anyone who’s had some graduate commutative algebra) can understand and appreciate.

Let $M_n^d$ denote the set of **all** $d$-tuples of $n \times n$ matrices. We let $M^d = \bigsqcup_{n=1}^{\infty} M_n^d$ be the disjoint union of all $d$-tuples of $n \times n$ matrices, where $n$ runs from $1$ to $\infty$. That is, we are looking at all $d$-tuples of matrices of all sizes. This set is referred to in some places as “the noncommutative universe”. Elements of $M^d$ can be plugged into polynomials in $d$ noncommuting variables, and subsets of $M^d$ are where most of the action in “noncommutative function theory” takes place. We leave that story to be told another day.

Similarly, we let $CM_n^d$ denote the set of all commuting $d$-tuples of $n \times n$ matrices. Note that we can consider $M_n^d$ to be the space $\mathbb{C}^{dn^2}$, and then $CM_n^d$ is an algebraic variety in $\mathbb{C}^{dn^2}$, given as the joint zero locus of finitely many quadratic equations in $dn^2$ variables. We let $CM^d = \bigsqcup_{n=1}^{\infty} CM_n^d$. Now we are looking at all commuting $d$-tuples of matrices of all sizes. This can be considered as the “commutative noncommutative universe”, or the “free commutative universe”. Another way of thinking about $CM^d$ is as the “noncommutative variety” cut out in $M^d$ by the equations (in noncommuting variables)

$x_i x_j - x_j x_i = 0, \quad 1 \leq i < j \leq d$.

Points $X = (X_1, \ldots, X_d) \in CM^d$ can be simply plugged into any polynomial $p \in \mathbb{C}[z]$; for example, if $d = 2$ and $p(z_1, z_2) = z_1 z_2 + z_2^2 + 1$, then for $X = (X_1, X_2) \in CM^d$, we put

$p(X) = X_1 X_2 + X_2^2 + I$,

where $I$ is the identity of the same size as $X_1$ and $X_2$ (that is, if $X \in CM_n^d$, then the correct identity to use is $I_n$). In fact, points in $CM^d$ can be naturally identified with the finite dimensional representations of $\mathbb{C}[z]$, by

$X \mapsto \pi_X, \quad \pi_X(p) = p(X)$.

(We shall use the word “representation” to mean a unital homomorphism of an algebra or ring into $M_n(\mathbb{C})$ for some $n$.)
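Here is a small numerical sketch of plugging a commuting pair of matrices into a polynomial; the matrices and the polynomial $p(z_1, z_2) = z_1 z_2 + z_2^2 + 1$ are just illustrative choices of mine.

```python
import numpy as np

# A point of CM^2: a pair of commuting 3x3 matrices. Here X2 is a
# polynomial in X1, so the two automatically commute.
X1 = np.array([[2., 1., 0.],
               [0., 2., 1.],
               [0., 0., 2.]])
X2 = X1 @ X1

assert np.allclose(X1 @ X2, X2 @ X1)

# Evaluate p(z1, z2) = z1*z2 + z2**2 + 1 at X = (X1, X2);
# the constant term 1 becomes the identity matrix of the right size.
I = np.eye(3)
pX = X1 @ X2 + X2 @ X2 + I

# The evaluation p -> p(X) is a unital homomorphism C[z1, z2] -> M_3:
# e.g. (z1 + z2)**2 = z1**2 + 2*z1*z2 + z2**2 is respected.
assert np.allclose((X1 + X2) @ (X1 + X2),
                   X1 @ X1 + 2 * (X1 @ X2) + X2 @ X2)
```

Commutativity is what makes the evaluation well defined on $\mathbb{C}[z]$: without it, $z_1 z_2$ and $z_2 z_1$ would evaluate to different matrices.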

Now, given an ideal $I \trianglelefteq \mathbb{C}[z]$, we can consider its zero set in $CM^d$:

$Z_{CM^d}(I) = \{X \in CM^d : p(X) = 0 \text{ for all } p \in I\}$.

(We will omit the subscript $CM^d$ for brevity.) In the other direction, given a subset $S \subseteq CM^d$, we can define the ideal of polynomials that vanish on it:

$I(S) = \{p \in \mathbb{C}[z] : p(X) = 0 \text{ for all } X \in S\}$.

Tautologically, for every ideal $I \trianglelefteq \mathbb{C}[z]$,

$I \subseteq I(Z(I))$,

because every polynomial in $I$ annihilates every tuple on which every polynomial in $I$ is zero, right? The beautiful (and maybe surprising) fact is that the converse inclusion holds as well.

We are now ready to state the free commutative Nullstellensatz. The following formulation is taken from Corollary 11.7 from the paper “Algebras of bounded noncommutative analytic functions on subvarieties of the noncommutative unit ball” by Guy Salomon, Eli Shamovich and myself (which I already advertised in an earlier blog post).

**Theorem (free commutative Nullstellensatz):** For every $I \trianglelefteq \mathbb{C}[z]$,

$I(Z(I)) = I$.

**Proof:** This proof should be accessible to someone who took a graduate course in commutative algebra (but not too long ago!). We shall split it into several steps, including some review of required material. Someone who is fluent in commutative algebra will be able to understand the proof by just reading the headlines of the steps, without going into the explanations. Recall that we are using the notation $\mathbb{C}[z] = \mathbb{C}[z_1, \ldots, z_d]$.

**Step I: Changing the point of view slightly.** What we shall prove is the following proposition:

**Proposition:** *Let $I \trianglelefteq \mathbb{C}[z]$ and $q \in \mathbb{C}[z]$, and suppose that for every unital finite dimensional representation $\pi$ of $\mathbb{C}[z]/I$,*

*$\pi(q + I) = 0$.*

*Then $q \in I$.*

Noting that

- Representations of $\mathbb{C}[z]/I$ are precisely the representations of $\mathbb{C}[z]$ that annihilate $I$, and
- Representations of $\mathbb{C}[z]$ are precisely point evaluations at points $X \in CM^d$, thus
- Representations of $\mathbb{C}[z]/I$ are precisely point evaluations at points $X \in Z(I)$,

we see that if we prove the proposition, we obtain that it means precisely that if $q \in I(Z(I))$ then $q \in I$, which is the direction of the Nullstellensatz that we need to prove.

Thus our goal is to prove the proposition.

**Step II:** **A refresher on localization. **

We shall require the notion of the localization of a ring. Let $R$ be a commutative ring with unit (any ring we shall consider henceforth will be commutative and with unit) and let $M$ be a maximal ideal in $R$. Define $S = R \setminus M$ (the **complement** – not quotient – of $M$ in $R$). The *localization of $R$ at $M$* is a ring that is denoted $R_M$ (or $S^{-1}R$), that contains “a copy of $R$” and in which, loosely speaking, all elements of $S$ are invertible. Thus, still loosely speaking, the localization is the ring formed from all fractions $r/s$ where $r \in R$ and $s \in S$.

More precisely, $R_M$ is the quotient of the set $R \times S$ by the equivalence relation

$(r, s) \sim (r', s')$ if and only if there exists $u \in S$ such that $u(rs' - r's) = 0$.

Sometimes the pair $(r, s)$ is written as $\frac{r}{s}$, and then addition and multiplication are defined so as to agree with the usual formulas for addition and multiplication of fractions, that is,

$\frac{r}{s} + \frac{r'}{s'} = \frac{rs' + r's}{ss'}$,

and

$\frac{r}{s} \cdot \frac{r'}{s'} = \frac{rr'}{ss'}$.

We define a map $\iota : R \to R_M$ by $\iota(r) = \frac{r}{1}$. Clearly, $\frac{1}{1}$ is the unit of $R_M$, and $R_M$ is again a commutative ring with unit.

We shall require the following two facts, which can be taken as exercises:

**Fact I:** The localization $R_M$ of $R$ at a maximal ideal $M$ is a *local ring*, that is, it is a ring with a unique maximal ideal.

**Fact II:** If $r \in R$ is such that $\iota(r) = 0$ in $R_M$ for every maximal ideal $M$ in $R$, then $r = 0$.
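Although localization is a purely algebraic construction, a toy example may help: take $R = \mathbb{Z}$ and $M = (2)$, so that $R_M$ consists of the fractions with odd denominator, and its unique maximal ideal consists of the fractions with even numerator. A Python sketch (the helper functions are mine, for illustration only):

```python
from fractions import Fraction

# A sketch of localization: Z localized at the maximal ideal M = (2).
# Elements are fractions r/s with s odd (s in the complement of M).
def in_localization(q: Fraction) -> bool:
    # Fraction is stored in lowest terms, so checking the reduced
    # denominator is enough.
    return q.denominator % 2 == 1

# The localization is a local ring (Fact I): its unique maximal ideal
# consists of the fractions with even numerator, and everything outside
# that ideal is invertible in the localization.
def in_maximal_ideal(q: Fraction) -> bool:
    return in_localization(q) and q.numerator % 2 == 0

q = Fraction(4, 3)             # 4/3 lies in Z_(2), inside the maximal ideal
assert in_localization(q) and in_maximal_ideal(q)

u = Fraction(5, 3)             # 5/3 is a unit of Z_(2) ...
assert in_localization(u) and not in_maximal_ideal(u)
assert in_localization(1 / u)  # ... since its inverse 3/5 also lies in Z_(2)
```

The same picture holds for $R = \mathbb{C}[z]$ localized at the maximal ideal of polynomials vanishing at a point: denominators are polynomials not vanishing there.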

As we briefly mentioned in Fact I above, we remind ourselves that a *local ring* is a ring that has a unique maximal ideal. A commutative ring is said to be *Noetherian* if every ideal in it is finitely generated.

We shall also require the following theorem, which is not really an exercise. If $I$ is an ideal in a ring $R$, we write $I^n$ for the ideal generated by all elements of the form $a_1 a_2 \cdots a_n$, where $a_i \in I$ for all $i = 1, \ldots, n$.

**Krull’s intersection theorem:** *Let $R$ be a commutative Noetherian local ring with identity. If $M$ is the maximal ideal in $R$, then*

*$\bigcap_{n=1}^{\infty} M^n = \{0\}$.*

Take it on faith for now (or see Wikipedia).

**Step III: A lemma on local algebras. **

Recall that a ring $A$ is said to be a $\mathbb{C}$-*algebra* if it is also a vector space over $\mathbb{C}$, such that the ring multiplication is compatible with the vector space operations.

**Lemma:** *Let $A$ be a Noetherian local $\mathbb{C}$-algebra with maximal ideal $M$ satisfying $A/M \cong \mathbb{C}$, and fix $a \in A$. Suppose that for every finite dimensional representation $\pi : A \to M_n(\mathbb{C})$,*

*$\pi(a) = 0$.*

*Then $a = 0$.*

**Proof:** First, note that $a \in M$, because the quotient $A/M$ is isomorphic to $\mathbb{C}$, so the quotient map $A \to A/M$ is a one dimensional representation, and $a$ must be mapped to zero under this map. Since $A$ is Noetherian, $M$ is finitely generated, as is also every power $M^n$. It follows by induction that for every $n$, the algebra $A/M^n$ is a finite dimensional vector space. Hence the quotient map $A \to A/M^n$ can also be considered as a finite dimensional representation, so it annihilates $a$. Thus $a \in M^n$ for all $n$. By Krull’s intersection theorem, $a = 0$.

**Step IV and conclusion: proof of the proposition. **

We now prove the above proposition, which, as explained in Step I, proves the free commutative Nullstellensatz. Let $I$ be an ideal in $\mathbb{C}[z]$, and let $q \in \mathbb{C}[z]$ be an element such that $\pi(q + I) = 0$ for every finite dimensional representation $\pi$ of $A := \mathbb{C}[z]/I$. We wish to prove that $q \in I$, or equivalently, that $\dot{q} := q + I = 0$ in $A$. By Fact II above, it suffices to show that $\iota(\dot{q}) = 0$ in $A_M$ for every maximal ideal $M$ in $A$.

Now let $M$ be any maximal ideal in $A$. By the lemma of Step III (which is applicable, thanks to Fact I), $\iota(\dot{q}) = 0$ in $A_M$ if and only if its image under every finite dimensional representation of $A_M$ is zero. But every representation $\rho$ of $A_M$ gives rise to a representation $\rho \circ \iota$ of $A$, which, by assumption, annihilates $\dot{q}$. It follows that $\iota(\dot{q}) = 0$ for every maximal ideal $M$ in $A$, whence (Fact II) $\dot{q} = 0$, and $q \in I$ as required. That concludes the proof.

**Remark:** The proof presented here is from my paper with Guy and Eli. I mentioned above that the theorem follows from the results in a paper of Eisenbud and Hochster. Our proof is simpler than theirs, but they prove more: our result says that $q \in I$ if $q(X) = 0$ for every commuting tuple $X \in Z(I)$, where in principle one might have to consider matrices of all sizes $n$. Eisenbud and Hochster’s result implies that there exists some $N$ (depending on $I$, of course) such that, if $q(X) = 0$ for all $X \in Z(I)$ of size less than or equal to $N$, then $q \in I$. (If you are asking yourself why we are proving in our paper a weaker result than one that already appears in the literature, let me say that this theorem is a rather peripheral result in our paper, and serves a motivational and contextual purpose, rather than supporting the main line of investigation.)

We now treat the Theorem (the free commutative Nullstellensatz) in the case of one variable. This really should be understood by everyone. The short explanation is that matrix zeros of a polynomial determine not only the location of its zeros but also their multiplicities.

Let $p \in \mathbb{C}[z]$ be a polynomial in one variable, and let $q \in I(Z(\langle p \rangle))$. So we know that $q(X) = 0$ for every square matrix $X$ that annihilates $p$ (that is, every $X$ such that $p(X) = 0$). Our goal is to understand why this is equivalent to $q$ belonging to the ideal $\langle p \rangle$ generated by $p$. One direction is immediate: if $q = fp$ and $p(X) = 0$, then $q(X) = f(X)p(X) = 0$.

In the other direction, we need to show that if $q(X) = 0$ for all $X$ such that $p(X) = 0$, then $p$ is a factor of $q$. Everything boils down to understanding how polynomials operate on Jordan blocks. Consider a Jordan block

$J_k(\lambda) = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix} \in M_k(\mathbb{C})$,

and consider the polynomial $p(z) = c \prod_{i=1}^{m} (z - \lambda_i)^{k_i}$. Then one checks readily:

- $p(J_k(\lambda))$ is invertible if and only if $p(\lambda) \neq 0$.
- $p(J_k(\lambda)) = 0$ if and only if $p(\lambda) = 0$ and $k$ is no bigger than the multiplicity of $\lambda$ as a zero of $p$.

It follows (assuming the form $p(z) = c \prod_{i=1}^{m}(z - \lambda_i)^{k_i}$) that $p(J_k(\lambda)) = 0$ if and only if $\lambda = \lambda_i$ for some $i$, and $k \leq k_i$. Since every matrix has a unique canonical Jordan form (up to a permutation of the blocks), we can understand precisely what matrices belong to $Z(p)$: it is those matrices whose Jordan blocks have eigenvalues in the set $\{\lambda_1, \ldots, \lambda_m\}$, with every block with eigenvalue $\lambda_i$ of size no bigger than $k_i$.

So, if $q \in I(Z(p))$, then $q(J_{k_i}(\lambda_i)) = 0$ for every Jordan block $J_{k_i}(\lambda_i)$ for which $p(J_{k_i}(\lambda_i)) = 0$ (and vice versa). So fixing $i$, we see that $(z - \lambda_i)^{k_i}$ must be a factor of $q$, that is, $q$ has the form $q = (z - \lambda_i)^{k_i} g$. Since this holds for all $i$, we have that $q \in \langle p \rangle$.

**Remark:** Note that the proof also shows that to conclude that $q \in \langle p \rangle$, one needs to know only that $q(X) = 0$ for all $X \in Z(p)$ of size less than or equal to $\max\{k_1, \ldots, k_m\}$.
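The two observations about Jordan blocks are easy to check numerically. A small numpy sketch (the polynomial $p(z) = z^2(z-1)$ and the helper functions are my own illustrative choices): $p$ kills exactly the blocks $J_k(0)$ with $k \leq 2$ and $J_1(1)$, and no larger ones.

```python
import numpy as np

def jordan(lam, k):
    """k x k Jordan block with eigenvalue lam."""
    return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

def evaluate(coeffs, X):
    """Evaluate a polynomial (coefficients listed highest degree first)
    at a square matrix X, by Horner's scheme."""
    n = X.shape[0]
    result = np.zeros((n, n))
    for c in coeffs:
        result = result @ X + c * np.eye(n)
    return result

# p(z) = z**2 * (z - 1) = z**3 - z**2:
# zero 0 with multiplicity 2, zero 1 with multiplicity 1.
p = [1., -1., 0., 0.]

# p annihilates the Jordan blocks J_2(0) and J_1(1) ...
assert np.allclose(evaluate(p, jordan(0., 2)), 0)
assert np.allclose(evaluate(p, jordan(1., 1)), 0)

# ... but not J_3(0) (block size exceeds the multiplicity), nor J_2(1).
assert not np.allclose(evaluate(p, jordan(0., 3)), 0)
assert not np.allclose(evaluate(p, jordan(1., 2)), 0)
```

So the matrices annihilated by $p$ remember the multiplicities $k_i$, which is exactly the information the scalar zero set loses.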

The beautiful theorem we proved raises two important questions:

- Why is it interesting (besides the plain reason that it is **evidently interesting**)? What questions does this kind of theorem help to answer?
- What does the set $CM^d$ of commuting tuples of matrices look like? In order for the above theorem to be “useful” we will need to understand this set well.

I hope to write two posts addressing these issues soon.

**Added April 23: **

**Remark:** I should also mention the following very well known observation, which also explains how evaluation on Jordan blocks can identify the zeros of a polynomial including their multiplicity. If $J = J_k(\lambda)$ is a Jordan block:

$J = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix}$,

and $f$ is an analytic function, then

$f(J) = \begin{pmatrix} f(\lambda) & f'(\lambda) & \frac{f''(\lambda)}{2!} & \cdots & \frac{f^{(k-1)}(\lambda)}{(k-1)!} \\ & f(\lambda) & f'(\lambda) & \ddots & \vdots \\ & & \ddots & \ddots & \frac{f''(\lambda)}{2!} \\ & & & f(\lambda) & f'(\lambda) \\ & & & & f(\lambda) \end{pmatrix}$.

This gives another point of view of the free Nullstellensatz in one variable.
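One can verify this formula numerically, say for $f = \exp$. The following numpy sketch (my own illustration) compares a truncated Taylor series for $e^J$ with the predicted upper triangular Toeplitz matrix of derivatives $f^{(j)}(\lambda)/j!$; since $f = \exp$, every derivative is again $e^\lambda$.

```python
import numpy as np
from math import factorial, exp

lam, k = 0.5, 4
J = lam * np.eye(k) + np.diag(np.ones(k - 1), 1)  # Jordan block J_4(0.5)

# Matrix exponential via a truncated Taylor series (plenty of terms for
# a small matrix like this one).
expJ = np.zeros((k, k))
term = np.eye(k)
for n in range(1, 30):
    expJ = expJ + term       # adds J**(n-1) / (n-1)!
    term = term @ J / n

# Predicted answer: entry (i, i + j) equals f^(j)(lam) / j! = exp(lam) / j!.
predicted = sum(exp(lam) / factorial(j) * np.diag(np.ones(k - j), j)
                for j in range(k))
assert np.allclose(expJ, predicted)
```

In particular, $f(J_k(\lambda)) = 0$ forces $f(\lambda) = f'(\lambda) = \cdots = f^{(k-1)}(\lambda) = 0$, which for a polynomial is exactly the statement that $\lambda$ is a zero of multiplicity at least $k$.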

]]>Click here to download the journal version of the paper

Of course, if you don’t click by May 26 – don’t panic! We always put our papers on the arXiv, and here is the link to that. Here is the abstract:

**Abstract.** For every convex body $K \subseteq \mathbb{R}^d$, there is a minimal matrix convex set $\mathcal{W}^{min}(K)$, and a maximal matrix convex set $\mathcal{W}^{max}(K)$, which have $K$ as their ground level. We aim to find the optimal constant $\theta(K)$ such that $\mathcal{W}^{max}(K) \subseteq \theta(K) \cdot \mathcal{W}^{min}(K)$. For example, if $\overline{\mathcal{B}}_{p,d}$ is the unit ball in $\mathbb{R}^d$ with the $\ell^p$-norm, then we find that

$\theta(\overline{\mathcal{B}}_{p,d}) = d^{1 - |1/p - 1/2|}$.

This constant is sharp, and it is new for all $p \neq 2$. Moreover, for some sets $K$ we find a minimal set $L$ for which $\mathcal{W}^{max}(K) \subseteq \mathcal{W}^{min}(L)$. In particular, we obtain that a convex body $K$ satisfies $\mathcal{W}^{max}(K) = \mathcal{W}^{min}(K)$ only if $K$ is a simplex.

These problems relate to dilation theory, convex geometry, operator systems, and completely positive maps. For example, our results show that every $d$-tuple of self-adjoint contractions can be dilated to a commuting family of self-adjoints, each of norm at most $\sqrt{d}$. We also introduce new explicit constructions of these (and other) dilations.

]]>The first time that I met him was in the summer of 2009, in a workshop on multivariable operator theory at the Fields Institute in Toronto. I walked up to him and asked him what he thought of some proposed proof of the invariant subspace problem (let’s say that I don’t remember exactly which one), and he didn’t even want to hear about it! At the time I was still rather fresh and didn’t understand why (I later learned that he has had his fair share of checking failed attempts). After this first encounter I thought for some time that he was a scary person, only to discover slowly through the years that he was actually a very very generous, gentle, and kind person. And he was very sharp, *that *was really scary.

The last time that I met him was in the spring of 2014, when we were riding a train from Oberwolfach to Frankfurt. I think we were Douglas, Ken Davidson, Brett Wick and I. Davidson, Wick and I were going to Saarbrucken, and Douglas was supposed to be on another train, but he joined us because by that time he was half blind and thought that it was better to travel at least part of the way with friends. (The conductor found him out, but decided to let the old man be.) At some point we had to switch trains and we left him, and I was worried about how we could leave a half blind man to travel alone (he made it home safely). On the train he talked about the corona theorem or something, and I was sitting on the edge of my seat trying to keep up. I don’t remember what he said about the corona theorem, but I remember clearly that he told me that I shouldn’t have nausea because it is only psychological (you see, even very smart people occasionally say silly things). He also talked about blackjack. That was the last time I saw him.

When I was a postdoc I became obsessed with the Arveson-Douglas conjecture, and I worked on this conjecture on and off for several years (see here, here, here and here for earlier posts of mine mentioning this conjecture). That’s one way I got to know some of Douglas’s later works. Douglas motivated many people to work on this problem, and was also responsible for some of the most recent breakthroughs. Just last semester, in our Operator Theory and Operator Algebras Seminar at the Technion, I gave a couple of lectures on two of his very last papers on this topic, which were written together with his PhD student Yi Wang: “Geometric Arveson-Douglas Conjecture and Holomorphic Extension” and “Geometric Arveson-Douglas Conjecture – Decomposition of Varieties“. These are very difficult papers, written with a rare combination of technical ability and vision.

By the way, I have heard wonderful things about Douglas as a mentor and PhD supervisor. In July 2013 I attended a conference in Shanghai in honour of Douglas’s 75th birthday. At the banquet many of his students and collaborators got up to say some words of thanks and to tell about nice memories. After several have already spoken, the master of ceremony walked up to me with his wireless microphone and announced: “and now, to close this evening, the *last student, Piotr Nowak*!” Perhaps this is a good place to point out that I was not Douglas’s student, nor is my name Piotr Nowak (I think Piotr Nowak also was not a student, but he was a postdoc or at least spent some time at Texas A&M). I took the mic in my hand, but didn’t have the guts to play along, and handed it over to Piotr.

(I wrote above that I was not a student of Douglas, but in some sense I am his mathematical step-grandchild. Douglas’s first PhD student was Paul Muhly, who is mathematically married to Baruch Solel, my PhD supervisor, hence is my mathematical step-father.)

Another completely different work of his that I had the pleasure of studying is his beautiful little textbook “Banach Algebra Techniques in Operator Theory“, which I read cover-to-cover with no particular purpose in mind, just for the joy of it.

I think that perhaps Douglas’s greatest contribution to mathematics is the Brown-Douglas-Fillmore (BDF) theory. The magic ingredient of using sophisticated algebraic and topological intuition and machinery appears in much of Douglas’s work, but in BDF it had wonderful consequences as well as incredible impact. If one wants to get an idea of what this theory is about (and what kind of problems in operator theory motivated it), perhaps the best person to explain is Douglas himself. To this end, I recommend reading the introduction to Douglas’s small book on the subject, “C*-Algebra Extensions and K-Homology” (Annals of Mathematics Studies Number 95).

[**Update, March 17th:** I later checked my records and realised that the way I remembered things is not the way they were! I am leaving the memory as I wrote it, but for the record, that train ride was * not *the last time that I saw Douglas, I suppose that it was simply the most memorable and symbolic goodbye. The last time I met Douglas was in Banff, in 2015. (In my memory, I mixed Oberwolfach 2014 with Banff 2015). If I am not mistaken, he was there with his wife Bunny, and we did not interact much. I met him four other times: in June 2010 at the University of Waterloo, when he received a honorary doctorate, later that summer in Banff, at IWOTA 2012 which took place at Sydney, and at IWOTA 2014, in Amsterdam (which was also after our goodbye on the train). ]

]]>

Another question that continues to puzzle me (and to which I still don’t have a complete answer) is: *why do I continue to inflict upon myself the tortures of international travel, such as ten hour jet lag or trans-atlantic flights?* More generally, I spend a lot of time wondering: *why do I continue going to conferences? Is it worth it for me? Is it worth the university’s money? Is it worth it for mankind?*

Last week I attended the Joint Mathematics Meeting in San Diego. It was my first time at such a big conference. I will probably not return to such a conference for a while, since it is not so “cost effective”. I guess that I am a small-workshop kind of person.

I spoke in and attended all the talks in the Free Convexity and Free Analysis special session, which was excellent. Here is the abstract and here are the slides of my talk. I also attended some of the talks in the special sessions on Advances in Operator Algebras, on Operators on Function Spaces in One and Several Variables, and on Advances in Operator Theory, Operator Algebras, and Operator Semigroups. I also attended several plenary talks, which were all quite entertaining.

I am happy to report that the field of free analysis and free convexity is in really good shape! There was a sequence of talks on the first day (Hartz, Passer, Evert and Kriel) by four very young researchers on free convexity that really put me in high spirits! The field is blossoming and the competition is healthy and friendly. But the talk that got me most excited was the talk by Jim Agler, who gave a preliminary report on joint work with John McCarthy and Nicholas Young regarding noncommutative complex manifolds. Now, at first it might seem that nc manifolds would be hard to make sense of, because how can you take direct sums of points in a manifold, etc.? Moreover, the only take on free manifolds that I had met before was Voiculescu’s construction of the free projective plane, which I found hard to swallow and which kind of ruined my appetite for the subject.

However, it turns out that one can define a noncommutative complex manifold as a topological space $X$ that carries an atlas of charts $(U, f)$, where $U$ is an open subset of $X$ and $f$ is a homeomorphism from an nc domain onto $U$, such that given two intersecting charts $(U, f)$ and $(V, g)$, the transition map going from $f^{-1}(U \cap V)$ to $g^{-1}(U \cap V)$ is an nc biholomorphism. **This definition is so natural and clear that I want to shout!** Agler went on and showed us how one can construct a noncommutative Riemann surface, for example the Riemann surface corresponding to the noncommutative square root function. How can one **not** want to hear more of this? I am looking forward very enthusiastically to seeing what Agler, McCarthy and Young are up to this time; it looks like a very promising direction to study.

Among the plenary talks that I attended (see here for descriptions), the one given by Avi Wigderson struck me the most. I went to the talk simply for mathematical entertainment (a.k.a. to broaden my horizons), but I was very pleasantly surprised to find completely positive maps and free functions in a talk that was supposed to be about computational complexity. I went to the first two of his talks but missed the third one because I had an opportunity to have lunch with a friend and collaborator, which in any case was more important to me than the lecture. The above link (here it is again) contains links to a tutorial and papers related to Wigderson’s talks, and I hope to find time to study them, and at least catch up on what I missed in the third talk.

One more thing: there was one quite eminent operator theorist, long retired, who came to several of the sessions that I attended. At some point I noticed that after every talk he came up to the speaker and said several words of encouragement or advice. Seeing such a pure expression of kindness and love of humanity was touching and inspiring. Upon later reflection, I noticed that such expressions were happening around me all the time, for example when another “celebrity” in our field arrived and a hugging (!) session began. This memory brings a smile to my face. Well, maybe going to San Diego was worth it, after all.

**Additional thoughts January 26: **

- The tutorial that you can find in “the above link” seems to cover all of Wigderson’s talk.
- I have had some more thoughts on “big conferences”. The good thing about them is that they give an opportunity to interact with people outside one’s own academic bubble, and to attend high level talks by prominent mathematicians. The bad thing is that you fly far away, spend tons of grant money, and in the end have only a little time to discuss your research topic with experts. So: to go or not to go? I’ve found a solution! Attend **local** big conferences. Fly across the world only to meet with special colleagues or participate in focused and effective workshops or conferences on your subject of main interest. (And if they invite you to give a plenary talk at the ICM, then, OK, you should probably go.)