## Category: Publishing

### The preface to “A First Course in Functional Analysis”

I am not yet done being excited about my new book, A First Course in Functional Analysis. I will use my blog to advertise my book, one last time. This post is for all the people who might wonder: “why did you think that anybody needs a new book on functional analysis?” Good question! The answer is contained in the preface to the book, which is pasted below the fold.

### Our new baby book

Finally, after a long delay, a package arrived containing some hard copies of my book.

### A First Course in Functional Analysis (my book)

She’hechiyanu Ve’kiyemanu!

My book, A First Course in Functional Analysis, to be published by Chapman and Hall/CRC, will soon be out. There is already a cover; check it out on the CRC Press website.

This book is written to accompany an undergraduate course in functional analysis; the course I had in mind is precisely the course that we give here at the Technion, with the same constraints. Constraint number 1: a course in measure theory is not mandatory in our undergraduate program. So how can one seriously teach functional analysis with significant applications? Well, one can, and I hope that this book proves it. As I have written before, measure theory is not a must. Of course, anyone going for a graduate degree in math should study measure theory (and get an A), but I would like students to be able to study functional analysis before that (so that they can do a master's degree in operator theory with me).

I believe that the readers will find many other original organizational contributions to the presentation of functional analysis in this book, but I leave them for you to discover. Instructors can request an e-copy for inspection (in the link to the publisher website above), friends and direct students can get a copy from me, and I hope that the rest of the world will recommend this book to their library (or wait for the libgen version).

### New journal: Advances in Operator Theory

I am writing to let you know about a new journal: Advances in Operator Theory.

This is good news! There is certainly room for another very good journal in operator theory. Naturally, this journal will be open access, and, obviously, there will be no author fees (page charges, or whatever you want to call them). So this is just the kind of journal we need, provided that it can maintain a high standard and slowly build its reputation.

The first step in establishing a reputation is achieved: AOT has a respectable editorial board, with several distinguished members.

The founding editor-in-chief is Mohammad Sal Moslehian, who has been making efforts on the open access front at least since he launched the Banach Journal of Mathematical Analysis, roughly ten years ago. The BJMA is a good example of an electronic journal that started from scratch and slowly worked its way to recognition (for example, it is now indexed by MathSciNet). I hope AOT follows suit, and hopefully does even better; I believe it should aim for the level of the Journal of Operator Theory, so that it can relieve JOT of part of the load.

(Too bad that the acronym AOT, when spelled out, sounds very much like JOT. This will certainly lead to some confusion…)

### Thirty-one years later: a counterattack on Halmos's critique of non-standard analysis

As if to celebrate in an original way the fifty-year anniversary of Bernstein and Robinson's solution to (a generalization of) the Smith–Halmos conjecture (briefly: if $T$ is an operator such that $p(T)$ is compact for some polynomial $p$, then $T$ has an invariant subspace), several notable mathematicians posted an interesting and very nonstandard (as they say) paper on the arxiv.

The new paper briefly tells the story of the publication of Bernstein and Robinson's article, in which they used Robinson's new theory of non-standard analysis (NSA) to prove the above-mentioned conjecture in operator theory. This was one of the first major successes of NSA, and one might think that the whole operator theory community would have accepted the achievement with nothing but high praise. Instead, it was received somewhat coldly: Halmos went to work immediately to translate the NSA proof, and published a paper proving the same result with a proof in "standard" operator-theoretic terms. (See the paper; I am leaving out the juicy parts.) And then, from 1966 until about 2000, Halmos was apparently at "war" with NSA (in the paper the word "battle" is used), and he also criticized logic; for example, his book implies that he did not always consider logic to be a part of mathematics, and worse, it seems that he did not always consider logicians to be mathematicians. (When I wrote about Halmos's book a few months ago, I said that I do not agree with all the opinions expressed in it, and I remember having this issue with logic and logicians in mind when writing that.)

In the paper that appeared on the arxiv today, the authors take revenge on Halmos. Besides a (convincing) rebuttal of Halmos's criticisms, the seven authors deal Halmos at least seven blows, not all of them below the belt. The excellent and somewhat cruel title says it all: A non-standard analysis of a cultural icon: the case of Paul Halmos.

Besides some feeling of uneasiness at seeing a corpse being metaphorically stabbed (where have you been for the last thirty years?), the paper raises interesting issues (without dwelling too much on any one of them), and may serve as a lesson to all of us. There is nothing in this story special to operator theory versus model theory, or NSA, or logic. The real story here is the suspicion and snobbishness of mathematicians towards fields in which they do not work, and towards the people working in those fields.

I see it all the time. Don't kid me: you have also seen quite a lot of it. It is possible, I confess, that I have myself exercised a small measure of suspicion and contempt towards things that I don't understand. As the authors of the paper hint, these attitudes are worse than wrong: they might actually hurt people.

Anyway, people who are ignorantly snobbish towards other fields often end up looking like idiots. Stop doing that, or thirty years from now a mob of experts will come and tear you to shreds.

P.S. – It seems that the question of who refereed the Bernstein–Robinson paper is not settled, though some suspect it was Halmos. Well, if someone could get their hands on the (anonymous!) referee report (maybe Bernstein or Robinson kept the letter?), I am quite sure that it would be clear whether it was Halmos. In other words, if Bernstein or Robinson suspected it was him on account of the style, then I bet it was.

P.P.S. – Regarding the theorem that started this discussion, the quickest way to understand it is via Lomonosov's theorem. The invariant subspace theorem proved by Bernstein and Robinson (a polynomially compact operator has an invariant subspace) is now superseded by Lomonosov's theorem (google it for a simple proof), which says that every bounded operator on a Banach space that commutes with a nonzero compact operator has a non-trivial invariant subspace.
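For the record, here is how the Bernstein–Robinson result follows from Lomonosov's theorem, sketched in LaTeX (this assumes amsthm-style theorem/corollary/proof environments are defined; the degenerate case $p(T) = 0$ is only sketched, and the small-dimension caveats are glossed over):

```latex
% Lomonosov's theorem, and the Bernstein-Robinson theorem as a corollary.
\begin{theorem}[Lomonosov]
Let $X$ be a complex Banach space with $\dim X > 1$, and let $T \in B(X)$
commute with a nonzero compact operator $K \in B(X)$. Then $T$ has a
non-trivial closed invariant subspace.
\end{theorem}

\begin{corollary}[Bernstein--Robinson]
If $T \in B(H)$ is such that $p(T)$ is compact for some nonzero polynomial
$p$, then $T$ has a non-trivial closed invariant subspace.
\end{corollary}

\begin{proof}[Proof sketch]
If $p(T) \neq 0$, apply Lomonosov's theorem with $K = p(T)$, which clearly
commutes with $T$. If $p(T) = 0$, then $T$ is algebraic, hence has an
eigenvalue; an eigenspace, if proper, is a non-trivial closed invariant
subspace, and if it is all of $H$ then $T$ is a scalar and every subspace
is invariant.
\end{proof}
```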

### Revising and resubmitting my opinions on refereeing

With time, with age, and having already done quite a few refereeing jobs, I have come to change some of my opinions on refereeing.

Anonymous refereeing. I used to think that anonymous refereeing was not important. Why can't I (as referee) just write back to the authors and discuss the weak points of the paper with them? Wouldn't that be much better and faster? Besides, if I have a certain opinion about a paper, I should be willing to back it with my name, publicly.

Yes, I was innocent and was not yet aware of the endless ways in which some people will try to get back at you if your report includes anything but praise and/or typo corrections. But besides the usual reasons for or against anonymous refereeing, here is something I overlooked.

The really nice thing about anonymous refereeing is this: not only does it free the referee to say bad things, it also frees the referee to say good things. There was a paper I was refereeing for a good journal, and I really wanted it to get accepted. I thought it was very good, and that it should be accepted by this journal. I wanted to be very clear that the paper should be accepted (sometimes a lukewarm report is not enough to get a paper accepted), and being anonymous made it easier for me to use superlatives that I rarely feel comfortable using to someone's face. The fact that I was anonymous, and that the editor knew I was anonymous, also made it easier for the editors to take my praise seriously.

Therefore, the identity of referees should be kept secret, so we can all be kind to each other. (And please don’t ever ask me if it was me who refereed your paper).

Is the paper interesting? When I first heard that papers get rejected because they are "not interesting", I was a little surprised. "Interesting" is not an objective criterion. A paper might be interesting to one person and not interesting to another. Certainly the author thinks it is interesting!

Certainly? Well, I have seen some papers, unfortunately, about which I cannot say with certainty that even the author found them interesting. I have seen papers that were written only because they could be written. Nobody had ever written that particular proof of that particular proposition, with that set of assumptions, so this is a "new contribution". But sometimes a paper contains nothing that has appeared before, and yet does not really contain anything new. If there is nothing new, then it is boring, not interesting.

It is very hard to say what makes a good paper, and what makes a bad one. What makes good scientific research? I believe that judging the value of scientific research is not a scientific activity in itself. Deciding whether a mathematical paper is good is a job for mathematicians, but it is not a mathematical problem.

So when I evaluate a paper, I check whether it is correct and new, of course, but I also cannot help thinking about whether it is interesting. What does interesting mean? It means interesting to me, of course! But that's OK, because if the editor asked for my opinion, then it is my opinion that I am going to give.

Do I work for the journal? The editors of Journal A say that they want to publish only the best research articles. What does that mean? How can I compare? Let me tell you whether the paper is new, correct, and interesting. What do I care that Journal A wants to remain prestigious? In fact, I have never published in Journal A, and as far as I care its reputation can go to hell.

And really, to be honest, there are many factors that may affect my decision to recommend acceptance of a paper by Journal A: 1) The authors are young researchers, and this could help their careers. 2) The paper is in my field, and I want to use the reputation of Journal A to increase the prestige of my field. 3) Etc., etc.; one can think of all kinds of impure reasons to be consciously biased in favor of accepting a paper. In any case, if the paper is, in my opinion, a good, solid contribution, then why is it my business that the editor wants to publish only spectacular papers?

I now look at it differently. It is an honour to be approached by Journal A and asked for my opinion. The editor is asking for my professional opinion, based on my reputation. I should keep in mind that my answer, among other things, affects my reputation. I have to behave like a professional and answer the question asked. Of course, I still don't work for the journal, and I am free to be very enthusiastic about papers that are important in my opinion.

More on the pecking order. I have heard of the following scenario more than once: an editor of Journal B tells a referee that the journal (Journal B) is now accepting only papers that would be good enough for Journal A.

Excuse me!? If the authors thought their paper was good enough for Journal A, they would have submitted it to Journal A, and not to B! And anyway, I don't work for the journal! Clearly the journal has its goals, it wants to increase its prestige (or whatever), but I also have my own priorities, and in any case I don't care about the prestige of Journal B. It is already very nice of me to be willing to referee this paper for free, so don't ask me to work for your prestige (if it is good enough for me to referee, then it's good enough for you to publish).

Actually, the idea that Journal B aims to be at the “quality” of Journal A (whatever that means) is not so ridiculous. Journal A rejects most of the papers submitted to it, in fact it rejects some excellent papers. Where are all these papers supposed to go? So I don’t mind answering the question asked. (What I once wanted to write, but did not, is this: “Yes, I would recommend it for Journal A, and in fact this paper is too good for you, Journal B! I recommend rejecting the paper on the grounds that it is too good for this journal…”)

Submitting my review in a timely manner. I have not changed my mind about this. I always give an estimate of when I will submit my report, and I always submit on (or before) time. This means that I have to say "no" to a large fraction of referee requests (I try to referee roughly as many papers as I publish every year); otherwise I would not be able to do it in a timely manner. Naturally, I try to accept for review the papers that are more interesting.

### Something sweet for the new year

Tim Gowers recently announced the start of a new journal, "Discrete Analysis". The sweet thing about this journal is that it is an arxiv overlay journal, meaning that it will act like most other electronic journals, with the difference that all it does in the end (after standard peer review and editorial decisions) is put up a link on its website to a certain version of the preprint on the arxiv. The costs are so low that neither readers nor authors are supposed to pay. In the beginning, Cambridge University will cover the costs of this particular journal, and there are hopes that funding will be found later (of course, the arxiv has to be funded as well, but overlay journals do not seem to impose additional costs on the arxiv). The journal uses a platform called Scholastica (which does charge something, but relatively little: about \$10 per paper), so they did not have to set up their own webpage and deal with that kind of stuff.

The idea has been around for several years, and there are several other platforms (some of which do not charge anything, since they are publicly funded) for hosting journals like this: Episciences, Open Journals. It seems that analysis, and operator theory in particular, are a little behind in these initiatives (correct me if I am wrong). But I am not worried; it is a matter of time.

The news of the baby journal made me especially happy since leaders like Gowers and Tao were previously involved with the creation of the bad-idea, author-pays journals Forum of Mathematics (Pi and Sigma), and it is great that their stature is also harnessed for a decent journal (which also happens to have a nice and reasonable name).

### A corrigendum

Matt Kennedy and I have recently written a corrigendum to our paper "Essential normality, essential norms and hyperrigidity". Here is a link to the corrigendum. Below I briefly explain the gap that this corrigendum fills.

A corrigendum is a correction to an already published paper. It is clear why such a mechanism exists: we want the papers we read to state true facts, so false claims, as well as invalid proofs and subtle gaps, should be pointed out to the community. Now, many, many papers (I don't want to say "most") contain some kind of mistake, but not every mistake deserves a corrigendum: for example, there are mistakes that the reader will easily spot and fix, and some that the reader may not spot, but for which the fix is simple enough.

There are no rules as to what kind of errors require a corrigendum. This depends, among other things, on the authors. Some mistakes are corrected by other papers. I believe that very soon some sort of mechanism (say Google Scholar, or MathSciNet) will be able to tell whether the paper you are looking up is referenced by another paper pointing out a gap, so such a correction-in-another-paper may sometimes serve as a legitimate replacement for a corrigendum, when the issue is a gap or a minor mistake.

There is also the question of why publish a corrigendum at all, instead of updating the version of the paper on the arxiv. (This is exactly what the moderators of the arxiv told us at first when we tried to upload our corrigendum there; in the end we convinced them that the corrigendum can stand by itself.) I think that once a paper is published, it could be confusing to have a version more advanced than the published version; it becomes very clumsy to cite papers like that.

The paper I am writing about (see this post for what it's about) had a very annoying gap: we justified a certain step by citing a particular proposition from a monograph. The annoying part is that the proposition we cite does not exactly deal with the situation treated in our paper; our idea was that the same proof works in our situation. We did not want to spell out the details because we considered them very easy, and in any case the argument was not new. Unfortunately, the same proof does work for homogeneous ideals (which is what the first versions of the paper treated), but it is not clear whether it works for non-homogeneous ideals. The reason this gap is so annoying is that it leads the reader to waste time on a wild goose chase: first the reader goes and finds the monograph we cite, looks up the result (and has to read a few extra pages to understand the setting and notation of the monograph), realizes it is not the same situation, then tries to adapt the method, and fails. A waste of time!

Another problem with our paper is that one has to require the ideals to be "sufficiently non-trivial". If this were the only problem, we would perhaps not have bothered writing a corrigendum just to introduce a non-triviality assumption, since any serious reader will see that we require this.

If I try to take a lesson from this, besides a general "be careful", it is that it is dangerous to change the scope of a paper (for us: moving from homogeneous to non-homogeneous ideals) in the late stages of its preparation. Indeed, we checked that all the arguments work in the non-homogeneous case, but we missed the fact that an omitted argument did not.

Our new corrigendum is detailed and explains the mathematical problem and its solutions well; anyone seriously interested in our paper should look at it. The bottom line is as follows.

Our paper has two main results regarding quotients of the Drury–Arveson module by a polynomial ideal. The first is that the essential norm in the non-selfadjoint algebra associated to the quotient module, as well as the C*-envelope, are as the Arveson conjecture predicts (Section 3 in the paper). The second is that essential normality is equivalent to hyperrigidity (Section 4 in the paper).

Under the assumption that all our ideals are sufficiently non-trivial (and some other standing assumptions stated in the paper), the situation is as follows.

The first result holds true as stated.

For the second result, we have that hyperrigidity implies essential normality (as we stated), but the implication “essential normality implies hyperrigidity” is obtained for homogeneous ideals only.

### Interesting figure

I found an interesting figure in the March 2014 issue of the EMS Newsletter, in the article by H. Mihaljević-Brandt and O. Teschke, "Journal Profiles and Beyond: What Makes a Mathematics Journal 'General'?"

See the right column on page 56 in this link. (God help me, I have no idea how to embed that figure in the post. Anyway, maybe it is illegal, so I don’t bother learning.) One can see the “subject bias” of Acta, Annals and Inventiones.

In the left column, there is a graph showing the percentage of papers devoted to different MSC subjects in what the authors call "generalist" math journals (note carefully that these journals are only a small subclass of all journals, chosen by a method that is loosely described in the article). In the right column there is the interesting figure, showing the subject bias. If I understand correctly, the Y-axis lists the MSC numbers and the X-axis represents the corresponding deviation from the average percentage given in the left figure. So, for example, Operator Theory (MSC 47) is the subject of about 5 percent of the papers in a generalist journal, but in the Annals there is a deviation of minus 4 from the average; so, if I understand the figure correctly, about 1 percent of the papers in the Annals are classified under MSC 47. Another example: Algebraic Geometry (MSC 14) takes up a significant portion of Inventiones papers, much more than it does in an average "generalist" journal.

(I am not making any claims, this could mean a lot of things and it could mean nothing. But it is definitely interesting to note.)

Another interesting point is that the authors say that of the above three super-journals, Acta “is closest to the average distribution, though it is sometimes considered as a journal with a focus on analysis”. That’s interesting in several ways.