Noncommutative Analysis

Month: October, 2012

Advanced Analysis, Notes 5: Hilbert spaces (application: Von Neumann’s mean ergodic theorem)

In this lecture we give an application of elementary operators-on-Hilbert-space theory, by proving von Neumann’s mean ergodic theorem. See also this treatment by Terry Tao on his blog.
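As a concrete warm-up (my own numerical illustration, not part of the lecture notes), here is a sketch of what the theorem asserts in the simplest finite-dimensional setting: for a unitary matrix U, the Cesàro averages \frac{1}{N}\sum_{n=0}^{N-1} U^n x converge to Px, where P is the orthogonal projection onto the space of vectors fixed by U. The snippet assumes numpy and a specific (hypothetical) 3-by-3 diagonal unitary.

```python
import numpy as np

# Illustration of the mean ergodic theorem in finite dimensions: for a unitary U,
# the Cesaro averages (1/N) * sum_{n<N} U^n x converge to P x, where P is the
# orthogonal projection onto ker(U - I), the space of fixed vectors of U.
theta = np.sqrt(2) * np.pi                      # irrational rotation angle, so exp(i*theta) != 1
U = np.diag([1.0, np.exp(1j * theta), np.exp(-1j * theta)])  # unitary; fixed space = first coordinate
P = np.diag([1.0, 0.0, 0.0])                    # projection onto the fixed vectors of U
x = np.array([1.0, 1.0, 1.0], dtype=complex)

N = 10_000
avg = np.zeros(3, dtype=complex)
v = x.copy()
for _ in range(N):
    avg += v
    v = U @ v                                   # next term U^n x
avg /= N

print(np.linalg.norm(avg - P @ x))              # tends to 0 as N grows
```

Making this precise for an arbitrary unitary on an infinite-dimensional Hilbert space is exactly what the theorem does.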

For today’s lecture we will require the following simple fact which I forgot to mention in the previous one.

Exercise A: Let A, B \in B(H). Then \|AB\| \leq \|A\| \|B\|.
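Not a proof, of course, but here is a quick numerical sanity check of Exercise A in the matrix case (my own sketch, assuming numpy; for matrices the operator norm is the largest singular value, which is what ord=2 computes below):

```python
import numpy as np

# Sanity check (not a proof) of Exercise A for matrices:
# the operator norm ||.|| is submultiplicative, ||AB|| <= ||A|| ||B||.
rng = np.random.default_rng(0)
for _ in range(1_000):
    A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
    B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
    lhs = np.linalg.norm(A @ B, 2)              # operator norm of the product
    rhs = np.linalg.norm(A, 2) * np.linalg.norm(B, 2)
    assert lhs <= rhs + 1e-10, (lhs, rhs)
print("no counterexample found among 1000 random pairs")
```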


Advanced Analysis, Notes 4: Hilbert spaces (bounded operators, Riesz Theorem, adjoint)

Up to this point we studied Hilbert spaces as they sat there and did nothing. But the central subject in the study of Hilbert spaces is the theory of the operators that act on them. Paul Halmos, in his classic paper “Ten Problems in Hilbert Space”, wrote:

Nobody, except topologists, is interested in problems about Hilbert space; the people who work in Hilbert space are interested in problems about operators.

Of course, Halmos was exaggerating; topologists don’t really care much for Hilbert spaces for their own sake, and functional analysts have much more to say about the structure theory of Hilbert space than what we have learned. Nevertheless, this quote is very close to the truth. We proceed to study operators.

Advanced Analysis, Notes 3: Hilbert spaces (application: Fourier series)

Consider the cube K := [0,1]^k \subset \mathbb{R}^k. Let f be a function defined on K.  For every n \in \mathbb{Z}^k, the nth Fourier coefficient of f is defined to be

\hat{f}(n) = \int_{K} f(x) e^{-2 \pi i n \cdot x} dx ,

where for n = (n_1, \ldots, n_k) and x = (x_1, \ldots, x_k) \in K we denote n \cdot x = n_1 x_1 + \ldots + n_k x_k.  The sum

\sum_{n \in \mathbb{Z}^k} \hat{f}(n) e^{2 \pi i n \cdot x}

is called the Fourier series of f. The basic problem in Fourier analysis is whether one can reconstruct f from its Fourier coefficients, and in particular, under what conditions, and in what sense, the Fourier series of f converges to f.
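To make the definitions concrete, here is a small numerical sketch of my own (not from the lecture notes), assuming numpy, for the case k = 1 and the hypothetical choice f(x) = x: it approximates the Fourier coefficients \hat{f}(n) by Riemann sums and evaluates a symmetric partial sum of the Fourier series at a point.

```python
import numpy as np

# The case k = 1 with f(x) = x on K = [0, 1]: approximate
# hat{f}(n) = int_0^1 f(x) exp(-2*pi*i*n*x) dx by a Riemann sum,
# then evaluate a symmetric partial sum of the Fourier series.
f = lambda x: x
M = 10_000
xs = (np.arange(M) + 0.5) / M                   # midpoints of a uniform partition of [0, 1]

def fourier_coefficient(n):
    return np.mean(f(xs) * np.exp(-2j * np.pi * n * xs))

def partial_sum(x, N):
    # sum over |n| <= N of hat{f}(n) * exp(2*pi*i*n*x)
    return sum(fourier_coefficient(n) * np.exp(2j * np.pi * n * x)
               for n in range(-N, N + 1))

print(partial_sum(0.3, 50).real)                # close to f(0.3) = 0.3, away from the jump at the endpoints
```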

One week into the course, we are ready to start applying the structure theory of Hilbert spaces that we developed in the previous two lectures, together with the Stone-Weierstrass Theorem we proved in the introduction, to obtain easily some results in Fourier series.


Advanced Analysis, Notes 2: Hilbert spaces (orthogonality, projection, orthonormal bases)

(Quick announcement: all lectures will from now on take place in room 201). 

In the previous lecture, we learned the very basics of Hilbert space theory. In this lecture we shall go a little bit further, and prove the basic structure theorems for Hilbert spaces.


Reflections on the New York Journal of Mathematics

As I announced in a previous post, Matt Kennedy and I have just published a paper in the New York Journal of Mathematics.

The New York Journal of Mathematics is a nonprofit electronic journal, which posts papers openly online so that anyone can read them without any subscription fee. And of course (funny that this has to be noted) it does not require that authors pay for having their papers published. It exists simply for the benefit of mathematical research and the mathematical community. This is how journals should be. There are others like it: there is the BJMA, in which I have published once. See also the list of free online math journals here.

The NYJM is more than a community project – it is a good general math journal. How do I know? The same way I know that other good journals are good: first, I take a look at the editorial board, and I see that there are distinguished mathematicians among the editors (and most importantly for me, I check that there is an editor who is close to my field so he/she will know what to do with my submission); second, I check to see if mathematicians whom I know and highly respect have published there; third, just to be on the safe side, I can browse the index and see if any famous mathematicians whom I have heard of have published there too; fourth, I check to see if the journal is on MathSciNet’s Citation Database Reference List (it is); after that I may or may not decide to submit (and this of course also depends on what my coauthor thinks), and if I submit I also get an impression of how professional, smooth and fast the publishing process is. My impression from my recent experience is that the publishing process at NYJM is as professional, smooth and fast as I could hope for.

Unfortunately, some committees which make decisions regarding tenure and promotion also need to decide if the journals in which candidates publish are good journals. There are several “bibliometrical” tools which help committees and administrators figure out if journals are any good. Here at BGU the tool usually used is something called ISI Web of Knowledge. Now guess what ISIWoK says about NYJM. Seriously, guess: do you think that ISIWoK says that NYJM is a good journal or an OK journal or a bad journal?

HA! Trick question! According to ISIWoK, the New York Journal of Math doesn’t exist. There is no such journal. Now, the NYJM has been coming out since 1994, so somebody at ISIWoK hasn’t been doing a very good job. Or maybe they have?

Well: luckily my university has decided to treat NYJM as a real journal (I am sorry to admit that I probably would not have published there otherwise). Unfortunately, there is still a way to go: my university still uses ISIWoK to count citations, so for this paper of mine there will be no data. I hope that this will change before I am up for promotion.

 

UPDATE February 5, 2013: Mark Steinberger commented below that NYJM is now covered by Thomson-Reuters Web of Science, and that this is retroactive to Volume 16 (2010).

Essential normality and the decomposability of algebraic varieties

I am very proud, because a few days ago Matt Kennedy and I had our new paper, Essential normality and the decomposability of algebraic varieties, published in the New York Journal of Mathematics.

In this paper we treat a strong version of a conjecture of Arveson, which we call the Arveson-Douglas conjecture, and we obtain some new results in particular cases (the conjecture is still far from being settled). I think that we do a good job in the introduction of the paper explaining what this conjecture is and what we do, so I invite you to take a look.

Now I want to say a few words about the New York Journal of Mathematics. I’ll say it in a different post.

Advanced Analysis, Notes 1: Hilbert spaces (basics)

In this lecture and the next few lectures we will study the basic theory of Hilbert spaces. Hilbert spaces are usually studied over \mathbb{R} or over \mathbb{C}. In this course, whenever we consider Hilbert spaces, we shall consider only complex Hilbert spaces, that is, spaces over \mathbb{C}. There are two reasons for this. First, the results in this post hold equally well for real Hilbert spaces, with similar proofs. Second, in some topics that we will discuss later, the nice results only hold for complex spaces. So we will ignore real Hilbert spaces because they are essentially the same and also because they are fundamentally different!

Remark: The only situation I know of where it is really important to concentrate on real Hilbert spaces is when doing convex analysis (there must be others that I don’t know of). On the other hand, it is often convenient – indeed, we already did so in this course – to study real Banach spaces.


William Arveson

William B. Arveson was born in 1934 and died last year on November 15, 2011. He was my mathematical hero; his written mathematics has influenced me more than anybody else’s. Of course, he was much more than just my hero: his work has had a deep and wide influence on the entire operator theory and operator algebras communities. Let me quickly give an example that everyone can appreciate: Arveson proved what may be considered the “Hahn-Banach Theorem” appropriate for operator algebras. He did much more than that, and I will expand below on some of his early contributions, but before that I want to say something about what he was to me.

When I was a PhD student I worked in noncommutative dynamics. Briefly, this is the study of actions of (one-parameter) semigroups of *-endomorphisms on von Neumann algebras (in short, E-semigroups). The definitive book on this subject is Arveson’s monograph “Noncommutative Dynamics and E-Semigroups”. So, naturally, I would carry this book around with me, and I would read it forwards and backwards. The wonderful thing about this book was that it made me feel as if all my dreams had come true! I mean my dreams about mathematics: as a graduate student you dream of working on something grand, something important, something beautiful, something elegant, brilliant and deep. You want your problem to be a focal point where different ideas, different fields, different techniques, in short, all things, meet.

When reading Arveson there was no doubt in my heart that, e.g., the problem of classifying E-semigroups of type I was a grand problem. And I was blown away by the fact that the solution was so beautiful. He introduced product systems with such elegance and completeness that one would think that this subject had been studied for the last 50 years. These product systems were measurable bundles of operator spaces – which turn out to be Hilbert spaces! – that have a group-like structure with respect to tensor multiplication. And they turn out to be complete invariants of E-semigroups on B(H). The theory he set down used ideas and techniques from Hilbert space theory, operator space theory, C*-algebras, group representation theory, measure theory, functional equations, and many new ideas – what more could you ask for? Well, you could ask that the new theory also contribute to the solution of the original problem.

It turned out that the introduction of product systems immensely advanced the understanding of E-semigroups, and in particular it led to the full classification of type I ones.

So Arveson became my hero because he made my dreams come true. And more than once: when reading another book by him, or one of his great papers, I always had a very strong feeling: this is what I want to do. And when I felt that I had given a certain problem all I thought I had in me, and decided to move on to a new problem, it happened that he was waiting for me there too.

I wish to share below a little piece that I wrote after he passed away, which explains, from my point of view, one of his greatest ideas.

For a (by far) more authoritative and complete review of Arveson’s contributions, see the two recent surveys by Davidson (link) and Izumi (link).


Things to stop doing

I know, I know, there are lots of advice pages out there, for example (my favorites) this extremely broad, kind and generous page by Terry Tao or this very focused one by Doron Zeilberger. But like any advice giver who respects himself I am convinced that I would do you, young mathematician, a great wrong if I don’t share what I have learned in my journey so far, and that this is an urgent task. After all, I thought about this yesterday.

So here is my first piece of advice: Stop doing things.

Let me explain. There are some things which you have to, at some point, stop doing, or else you will never have time to make progress.

  1. Stop reading all the preliminaries. Of course, you want to have all the background before you start your next research project. I had a friend who spent the whole two years of his master’s degree reading. There is always another book that really explains something mentioned in the current book that you are reading. There’s an infinite chain of prequels, and no end to it.
  2. Stop trying to understand every single word in every paper. Some papers will not deliver what you want them to, or will not be interesting, and you should try to figure out if this is the case before you crack your head open on Lemma 4.
  3. Stop going to every talk in your department. It’s OK, they were a little surprised to see you there in the first place.
  4. Stop spending two hours to prepare every one hour lecture. It’s unsustainable.
  5. Stop working on the problems from your thesis. It was very good to spend a few years working on the same problem, because that way you were able to obtain some really great results. But even though there are some tiny corners left to clear, it is time to move on to another problem, open your mind, maybe even change your field.
  6. Stop changing your field. It really takes some time to get a grasp of a research area, it takes time to make some real impact in some field, it takes time to get to know everybody and to get everybody to know you. It is a shame to throw this away too fast because another problem looks sexy.
  7. Stop trying hard to keep up with what is going on in your field on a daily basis (like reading every paper that appears on the arXiv). That’s what conferences are for.
  8. Stop asking everyone about your research questions. They never know the answer. It is really for you to discover.
  9. Stop trying to collaborate with your grad school buddies. Though many collaborators become good friends, not all good friends become good collaborators.
  10. Stop reading advice that other mathematicians give. At some point you have to start writing these.

Now, I hope you understand what I mean. It is crucial (listen carefully, graduate students: it is crucial) that you actually stop doing these things (thus you have to start doing them at some earlier point); if you simply never do these things to begin with, then I think that you are on the wrong track.

Perhaps after some time goes by you have to start doing these things again. That does not contradict the fact that sometimes you just have to stop.

Functional Analysis – Introduction. Part II

In a previous post we discussed some of the history of functional analysis and we also said some vague things about its role in mathematics. In this second part of the introduction we will see an example of the spirit of functional analysis in action, by taking a close look at the Stone-Weierstrass approximation theorem.
