## Category: Operator theory

### Minimal and maximal matrix convex sets

The final version of the paper Minimal and maximal matrix convex sets, written by Ben Passer, Baruch Solel and myself, has recently appeared online. The publisher (Elsevier) sent us a link through which the official final version can be downloaded by anyone who clicks on it before May 26, 2018. Here is the link for the use of the public:

Of course, if you don’t click by May 26 – don’t panic! We always put our papers on the arXiv, and here is the link to that. Here is the abstract:

Abstract. For every convex body $K \subseteq \mathbb{R}^d$, there is a minimal matrix convex set $\mathcal{W}^{min}(K)$, and a maximal matrix convex set $\mathcal{W}^{max}(K)$, which have $K$ as their ground level. We aim to find the optimal constant $\theta(K)$ such that $\mathcal{W}^{max}(K) \subseteq \theta(K) \cdot \mathcal{W}^{min}(K)$. For example, if $\overline{\mathbb{B}}_{p,d}$ is the unit ball in $\mathbb{R}^d$ with the $p$-norm, then we find that

$\theta(\overline{\mathbb{B}}_{p,d}) = d^{1-|1/p-1/2|}$ .

This constant is sharp, and it is new for all $p \neq 2$. Moreover, for some sets $K$ we find a minimal set $L$ for which $\mathcal{W}^{max}(K) \subseteq \mathcal{W}^{min}(L)$. In particular, we obtain that a convex body $K$ satisfies $\mathcal{W}^{max}(K) = \mathcal{W}^{min}(K)$ only if $K$ is a simplex.

These problems relate to dilation theory, convex geometry, operator systems, and completely positive maps. For example, our results show that every $d$-tuple of self-adjoint contractions can be dilated to a commuting family of self-adjoints, each of norm at most $\sqrt{d}$. We also introduce new explicit constructions of these (and other) dilations.
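As a quick numerical illustration of the formula above (my own sketch, not from the paper), one can tabulate $\theta(\overline{\mathbb{B}}_{p,d})$ for a few values of $p$, using the convention $1/p = 0$ for $p = \infty$:

```python
# theta(B_{p,d}) = d ** (1 - |1/p - 1/2|), with 1/p = 0 when p = infinity.
def theta(p, d):
    inv_p = 0.0 if p == float("inf") else 1.0 / p
    return d ** (1 - abs(inv_p - 0.5))

d = 4
print(theta(2, d))             # Euclidean ball: exponent 1, so theta = d
print(theta(float("inf"), d))  # cube: exponent 1/2, so theta = sqrt(d)
print(theta(1, d))             # l^1 ball: exponent 1/2, so theta = sqrt(d)
```

Note how the $\sqrt{d}$ at $p = \infty$ (the cube, whose ground level consists of $d$-tuples of self-adjoint contractions) matches the constant in the dilation statement above.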

### Ronald G. Douglas (1938-2018)

A couple of weeks ago I learned from an American colleague that Ron Douglas passed away. This loss saddens me very much. Ron Douglas was a leader in the Operator Theory community, an inspiring mathematician, the kind of person they don’t make any more.

The first time that I met him was in the summer of 2009, in a workshop on multivariable operator theory at the Fields Institute in Toronto. I walked up to him and asked him what he thought of some proposed proof of the invariant subspace problem (let’s say that I don’t remember exactly which one), and he didn’t even want to hear about it! At the time I was still rather fresh and didn’t understand why (I later learned that he had had his fair share of checking failed attempts). After this first encounter I thought for some time that he was a scary person, only to discover slowly through the years that he was actually a very, very generous, gentle, and kind person. And he was very sharp; that really was scary.

The last time that I met him was in the spring of 2014, when we were riding a train from Oberwolfach to Frankfurt. I think it was Douglas, Ken Davidson, Brett Wick and I. Davidson, Wick and I were going to Saarbrucken, and Douglas was supposed to be on another train, but he joined us because by that time he was half blind and thought that it was better to travel at least part of the way with friends. (The conductor found him out, but decided to let the old man be.) At some point we had to switch trains and we left him, and I worried: how can we leave a half-blind man to travel alone? (He made it home safely.) On the train he talked about the corona theorem or something, and I was sitting on the edge of my seat trying to keep up. I don’t remember what he said about the corona theorem, but I remember clearly that he told me that I shouldn’t have nausea because it is only psychological (you see, even very smart people occasionally say silly things). He also talked about blackjack. That was the last time I saw him.

When I was a postdoc I became obsessed with the Arveson-Douglas conjecture, and I worked on this conjecture on and off for several years (see here, here, here and here for earlier posts of mine mentioning this conjecture). That’s one way I got to know some of Douglas’s later works. Douglas motivated many people to work on this problem, and was also responsible for some of the most recent breakthroughs. Just last semester, in our Operator Theory and Operator Algebras Seminar at the Technion, I gave a couple of lectures on two of his very last papers on this topic, which he wrote together with his PhD student Yi Wang: “Geometric Arveson-Douglas Conjecture and Holomorphic Extension” and “Geometric Arveson-Douglas Conjecture – Decomposition of Varieties”. These are very difficult papers, written with a rare combination of technical ability and vision.

By the way, I have heard wonderful things about Douglas as a mentor and PhD supervisor. In July 2013 I attended a conference in Shanghai in honour of Douglas’s 75th birthday. At the banquet many of his students and collaborators got up to say some words of thanks and to share some nice memories. After several had already spoken, the master of ceremonies walked up to me with his wireless microphone and announced: “and now, to close this evening, the last student, Piotr Nowak!” Perhaps this is a good place to point out that I was not Douglas’s student, nor is my name Piotr Nowak (I think Piotr Nowak also was not a student, but he was a postdoc or at least spent some time at Texas A&M). I took the mic in my hand, but didn’t have the guts to play along, and handed it over to Piotr.

(I wrote above that I was not a student of Douglas, but in some sense I am his mathematical step-grandchild. Douglas’s first PhD student was Paul Muhly, who is mathematically married to Baruch Solel, my PhD supervisor; hence Muhly is my mathematical step-father.)

Another completely different work of his that I had the pleasure of studying is his beautiful little textbook “Banach Algebra Techniques in Operator Theory”, which I read cover-to-cover with no particular purpose in mind, just for the joy of it.

I think that perhaps Douglas’s greatest contribution to mathematics is the Brown-Douglas-Fillmore (BDF) theory. The magic ingredient of using sophisticated algebraic and topological intuition and machinery appears in much of Douglas’s work, but in BDF it had wonderful consequences as well as incredible impact. If one wants to get an idea of what this theory is about (and what kind of problems in operator theory motivated it), perhaps the best person to explain is Douglas himself. To this end, I recommend reading the introduction to Douglas’s small book on the subject, “C*-Algebra Extensions and K-Homology” (Annals of Mathematics Studies Number 95).

[Update, March 17th: I later checked my records and realised that the way I remembered things is not the way they were! I am leaving the memory as I wrote it, but for the record, that train ride was not the last time that I saw Douglas; I suppose that it was simply the most memorable and symbolic goodbye. The last time I met Douglas was in Banff, in 2015. (In my memory, I mixed Oberwolfach 2014 with Banff 2015.) If I am not mistaken, he was there with his wife Bunny, and we did not interact much. I met him four other times: in June 2010 at the University of Waterloo, when he received an honorary doctorate, later that summer in Banff, at IWOTA 2012, which took place in Sydney, and at IWOTA 2014, in Amsterdam (which was also after our goodbye on the train). ]

### The preface to “A First Course in Functional Analysis”

I am not yet done being excited about my new book, A First Course in Functional Analysis. I will use my blog to advertise my book, one last time. This post is for all the people who might wonder: “why did you think that anybody needs a new book on functional analysis?” Good question! The answer is contained in the preface to the book, which is pasted below the fold.

### Introduction to von Neumann algebras, Lecture 3 (some more generalities, projection constructions, commutative von Neumann algebras)

In this lecture we will describe some projection constructions in von Neumann algebras, and we will classify commutative von Neumann algebras.

So far (in the first two lectures and in this one), the references I have used for preparing these notes are Conway (A Course in Operator Theory), Davidson (C*-algebras by Example), Kadison-Ringrose (Fundamentals of the Theory of Operator Algebras, Vol. I), and the notes on Sorin Popa’s homepage. But since I sometimes insist on putting the pieces together in a different order, the reader should be on the lookout for mistakes.

### Introduction to von Neumann algebras, Lecture 2 (Definitions, the double commutant theorem, etc.)

In this second lecture we start a systematic study of von Neumann algebras.

### Introduction to von Neumann algebras, addendum to Lecture 1 (solution of Exercise B: the norm of a selfadjoint operator)

One of the challenges I had in preparing this course was to find a quick route to the modern theory that is different from the standard modern route, in order to save time and be able to reach significant results and examples in the limited time of a one semester course. A main issue was to avoid the (beautiful, beautiful, beautiful) Gelfand theory of commutative Banach and C*-algebras, and to base everything on the spectral theorem for a single selfadjoint operator (which is significantly simpler than the one for normal operators). In the previous lecture, I stated Exercise B, which gave some important properties of the spectrum of a selfadjoint operator. Since my whole treatment is based on this, I felt that for completeness I should give the details.

Spoiler alert: If you are a student in the course and you plan to submit the solution of this exercise, then you shouldn’t read the rest of this post.

### A First Course in Functional Analysis (my book)

She’hechiyanu Ve’kiyemanu!

My book, A First Course in Functional Analysis, to be published with Chapman and Hall/CRC, will soon be out. There is already a cover, check it out on the CRC Press website.

This book is written to accompany an undergraduate course in functional analysis, where the course I had in mind is precisely the course that we give here at the Technion, with the same constraints. Constraint number 1: a course in measure theory is not mandatory in our undergraduate program. So how can one seriously teach functional analysis with significant applications? Well, one can, and I hope that this book proves that one can. As I already wrote before, measure theory is not a must. Of course, anyone going for a graduate degree in math should study measure theory (and get an A), but I’d like the students to be able to study functional analysis before that (so that they can do a masters degree in operator theory with me).

I believe that the readers will find many other original organizational contributions to the presentation of functional analysis in this book, but I leave them for you to discover. Instructors can request an e-copy for inspection (in the link to the publisher website above), friends and direct students can get a copy from me, and I hope that the rest of the world will recommend this book to their library (or wait for the libgen version).

### Dilations, inclusions of matrix convex sets, and completely positive maps

In part to help myself prepare for my talk at the upcoming IWOTA, and in part to help myself get back to doing research on this subject now that the semester is over, I am going to write a little exposition on my joint paper with Davidson, Dor-On and Solel, Dilations, inclusions of matrix convex sets, and completely positive maps. Here are the slides of my talk.

The research on this paper began as part of a project on the interpolation problem for unital completely positive maps*, but while thinking on the problem we were led to other problems as well. Our work was heavily influenced by works of Helton, Klep, McCullough and Schweighofer (some of which I wrote about in the second section of this previous post), but goes beyond them. I will try to present our work through a narrative that is somewhat different from the way the story is told in our paper. In my upcoming talk I will concentrate on one aspect that I think is most suitable for a broad audience. One of my coauthors, Adam Dor-On, will give a complementary talk dealing with some more “operator-algebraic” aspects of our work in the Multivariable Operator Theory special session.

[*The interpolation problem for unital completely positive maps is the problem of finding conditions for the existence of a unital completely positive (UCP) map that sends a given set of operators $A_1, \ldots, A_d$ to another given set $B_1, \ldots, B_d$. See Section 3 below.]

### Preprint update (Stable division and essential normality…)

Shibananda Biswas and I recently uploaded to the arXiv a new version of our paper “Stable division and essential normality: the non-homogeneous and quasi-homogeneous cases”. This is the paper I announced in this previous post, but we had to make some significant changes (thanks to a very good referee), so I think I have to re-announce the paper.

I’ve sometimes been part of conversations where we mathematicians share with each other stories of how some paper we wrote was wrongfully (and in some cases, ridiculously) rejected; and then I’ve also been in conversations where we share stories of how we, as referees, recently had to reject some wrong (or ridiculous) paper. But I never had the occasion to take part in a conversation in which authors discuss papers they wrote that have been rightfully rejected. Well, thanks to the fact that I sometimes work on problems related to Arveson’s essential normality conjecture (which is notorious for having caused some embarrassment to betters-than-I), and also because I have become a little too arrogant and not sufficiently careful with my papers, I have recently become the author of a rightfully rejected paper. It is a good paper on a hard problem, I am not saying it is not, and it is (now (hopefully!)) correct, but it was rejected for a good reason. I think it is a story worth telling. Before I tell the story I have to say that both the referee and my collaborator were professional and great, and this whole blunder is probably my fault.

So Shibananda Biswas and I submitted this paper, Stable division and essential normality: the non-homogeneous and quasi-homogeneous cases, for publication. The referee sent back a report with several good comments, two of which turned out to be serious. The two serious comments concerned what appeared as Theorem 2.4 in the first version of the paper (and it appears as the corrected Theorem 2.4 in the current version, too). The first serious issue was that in the proof of the main theorem we mixed up $t$ and $t+1$, and this, naturally, causes trouble (well, I am simplifying; really, we mixed up two Hilbert space norms, parametrised by $t$ and $t+1$). The second issue (which did not seem to be a serious one at first) was that at some point of the proof we claimed that a particular linear operator is bounded since it is known to be bounded on a finite co-dimensional subspace; the referee asked for clarifications regarding this step.

The first issue was serious, but we managed to fix the original proof, roughly by changing $t+1$ back to $t$. There was a price to pay in that the result was slightly weaker, but not in a way that affected the rest of the paper. Happily, we also found a better proof of the result we wanted to prove in the first place, and this appears as Theorem 2.3 in the new and corrected version of the paper.

The second issue did not seem like a big deal. Actually, in the referee’s report this was just one comment among many, some of which were typos and minor things like that, so we did not really give it much attention. A linear operator is bounded on a finite co-dimensional subspace, so it is bounded on the whole space, I don’t have to explain that!

We sent the revision back, and after a while the referee replied that we had taken care of most things, but that we still had not explained the part about the operator-being-bounded-because-it-is-bounded-on-a-finite-co-dimensional-space. The referee suggested that we either remove that part (since we already had the new proof) or explain it. The referee added that, in either case, he recommended accepting the paper.

Well, we could have just removed that part indeed and had the paper accepted, but we are not in the business of getting papers accepted for publication, we are in the business of proving theorems, and we believed that our original proof was interesting in itself since it used some interesting new techniques. We did not want to give up on that proof.

My collaborator wrote a revision with a very careful, detailed and rigorous explanation of how we get boundedness in our particular case, but I was getting angry and I made the big mistake of thinking that I was smarter than the referee. I thought to myself: this is general nonsense! It always holds. So I insisted on sending back a revision in which this step was explained by referring to a general principle that says that an operator which is bounded on a finite co-dimensional subspace is bounded.

OOPS!

That’s not quite exactly precisely true. Well, it depends on what you mean by “bounded on a finite co-dimensional subspace”. If you mean that it is bounded on a closed subspace which has a finite dimensional algebraic complement, then it is true; but one can think of interpretations of “finite co-dimensional” that make this wrong: for example, consider an unbounded linear functional: it is bounded on its kernel, which is finite co-dimensional in the algebraic sense, but it is not bounded.
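For completeness, here is a sketch (my own, with the hypotheses spelled out) of why the correct version of the principle holds:

```latex
% Claim: if T is linear on a normed space X, M is a *closed* subspace of X
% with a finite-dimensional algebraic complement F, and T restricted to M
% is bounded, then T is bounded.
% Sketch: since M is closed and F is finite-dimensional, the algebraic
% decomposition X = M + F is topological, i.e., the projection
% P : X -> M along F is bounded. Hence, for every x in X,
\[
  \|Tx\| \leq \|T(Px)\| + \|T(x - Px)\|
         \leq \|T|_M\| \, \|P\| \, \|x\| + \|T|_F\| \, \|I - P\| \, \|x\| ,
\]
% and \|T|_F\| is finite because every linear map on a finite-dimensional
% space is bounded. The unbounded functional above does not contradict
% this: its kernel has algebraic co-dimension one, but it is dense rather
% than closed, so no such bounded projection exists.
```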

The referee, in their third letter, pointed this out, and at this point the editor decided that three strikes and we are out. I think that was a good call. A slap in the face and a lesson learned. I only feel bad for my collaborator, since the revision he prepared originally was OK.

Anyway, in the situation studied in our paper, the linear subspace on which the operator is bounded is a finite co-dimensional ideal in the ring of polynomials. Its closure has zero intersection with the finite dimensional complement (the proof of this is not very hard, but it is indeed non-trivial and makes use of the nature of the spaces in question), and everything is all right. Having learned our lessons, we explain everything in detail in the current version. I hope that we did so carefully enough.

I think that what caused us most trouble was that I did not understand what the referee did not understand. I assumed (very incorrectly, and perhaps arrogantly) that they did not understand a basic principle of functional analysis; it turned out that the referee did not understand why we are in a situation where we can apply this principle, and with hindsight this was worth explaining in more detail.

### One of the most outrageous open problems in operator/matrix theory is solved!

I want to report on a very exciting development in operator/matrix theory: the von Neumann inequality for $3 \times 3$ matrices has been shown to hold true. I learned this from a recent paper (with the irresistible title) “The von Neumann inequality for $3 \times 3$ matrices”, posted on the arXiv by Greg Knese. In this paper, Knese explains how the solution of this outstanding open problem follows from results in a paper by Lukasz Kosinski, “The three point Nevanlinna-Pick problem in the polydisc”, that appeared on the arXiv about half a year ago. Beautifully, and not surprisingly, the solution of this operator/matrix theoretic problem follows from deep new facts in complex function theory in several variables.

To recall the problem, let us denote by $\|A\|$ the operator norm of a matrix $A$, and for every polynomial $p$ in $d$ variables let us denote by $\|p\|_\infty$ the supremum norm

$\|p\|_\infty = \sup_{|z_i|\leq 1} |p(z_1, \ldots, z_d)|$.

A matrix $A$ is said to be contractive if $\|A\| \leq 1$.

We say that $d$ commuting contractions $A_1, \ldots, A_d$ satisfy von Neumann’s inequality if

(*)  $\|p(A_1,\ldots, A_d)\| \leq \|p\|_\infty$.

It has been known since the 1960s that (*) holds when $d \leq 2$. Moreover, it was known that for $d \geq 3$ there are counterexamples, consisting of $d$ contractive $4 \times 4$ matrices that do not satisfy von Neumann’s inequality. On the other hand, it was known that (*) holds for any $d$ if the matrices $A_1, \ldots, A_d$ are of size $2 \times 2$. Thus, the only missing piece of information was whether or not von Neumann’s inequality holds for three or more contractive $3 \times 3$ matrices. To stress the point: it was not known whether or not von Neumann’s inequality holds for three three-by-three matrices. The problem in this form has been open for 15 years, but the problem is much older: in 1974 Kaijser and Varopoulos came up with a $5 \times 5$ counterexample, and since then both the $3 \times 3$ and the $4 \times 4$ cases were open, until Holbrook in 2001 found a $4 \times 4$ counterexample. You have to agree that this is outrageous, perhaps even ridiculous; I mean, three $3 \times 3$ matrices, come on!
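As a toy numerical sanity check (my own illustration, not from either of the papers mentioned below), one can verify (*) for a pair of commuting contractions, where the inequality is guaranteed by Andô’s theorem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two commuting contractions: powers of a single random matrix T,
# normalized to operator norm 1, automatically commute.
T = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T /= np.linalg.norm(T, 2)      # divide by the largest singular value, so ||T|| = 1
A1, A2 = T, T @ T              # ||A2|| <= ||T||^2 <= 1

# The test polynomial p(z1, z2) = z1*z2 - 0.3*z1 + 0.2
def p(z1, z2):
    return z1 * z2 - 0.3 * z1 + 0.2

op_norm = np.linalg.norm(A1 @ A2 - 0.3 * A1 + 0.2 * np.eye(4), 2)

# Estimate ||p||_inf over the closed bidisc; by the maximum principle the
# supremum is attained on the torus |z1| = |z2| = 1, which we sample.
w = np.exp(2j * np.pi * np.arange(400) / 400)
Z1, Z2 = np.meshgrid(w, w)
sup_norm = np.abs(p(Z1, Z2)).max()

assert op_norm <= sup_norm + 1e-8   # von Neumann's inequality for d = 2
```

Of course, no amount of random testing settles a question like this; the $3 \times 3$ case required the function-theoretic results described below.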

In Knese’s paper this story and the positive solution to the problem are explained very clearly and succinctly, and the paper is recommended reading for any operator theorist. One has to take on faith the paper of Kosinski, which, as Knese stresses, is where the major new technical advance was made (though one should not over-stress this fact, because tying things together, the way Knese has done, requires a deep understanding of this problem and of its various ingredients). To understand Kosinski’s paper would require a greater investment of time, but it appears that the paper has already been accepted for publication, so I am quite confident and happy to see this problem go down.