Re: Your mc efforts is a great job !!! [Warning]: Long

On Mon, 10 Nov 2003, Nikolai Bezroukov wrote:

> I expressed a legitimate (I suppose) opinion that mcserv can be a perfect
> "fileserver+shell server" combination.  You already voiced your opinion
> before that mcserv is redundant and now reiterated your points again.
> I disagree, but I suppose that I understood you correctly.  And please do
> not overuse this hammer "show me the code" ;-). [snip]

OK, I was just tired of all this discussion.  I know that it won't happen
because I've seen the code and I know that sftp support would be much more
welcome than making mcfs effective.

> An idiot admin with ssh will always have a less secure system than a
> competent admin with telnet and ftp, and my point is that mc is for
> intelligent admins ;-)
> Encryption at the application level, as in ssh, is in no way a panacea.
> At the transport level it's another story (IPsec).  FYI, OpenSSH 1 was one
> of the major sources of "real" break-ins for ISPs.  I don't know the stats
> for "real" break-ins for SSH2, but the latest CERT advisories (CA-2002-36
> etc.) do not encourage blind trust and suggest that there are still risks
> due to the complexity of the codebase.

I fully agree with this.  I just wanted to say that systems that provide
security (or let's say do their part) are more likely to be popular in the
real world, where IPsec is not used universally.  There is no way a
non-secure protocol can replace a secure one in the current climate.  If
ssh and sftp lack something, it will be easier to change them than to
create an alternative without security.

> >> > I was confused by "VFS" meaning two different things on adjacent
> >> > lines. I still don't know what NC5 XTree VFS is. I have never used NC5.
> >>
> >>That's what I call a "flat file VFS".  As a PhD I have the responsibility to
> >>obscure things by inventing new terms ;-). See also
> >>  The book discusses several
> >>implementations including NC5.
> >
> >Actually, this should be implemented separately from VFS, as a higher
> >level.  It's representations of the tree on the panels, not
> >representations of something as a tree.
> >
> You might be wrong, as semantically this is yet another VFS, not that
> different from tar VFS or gzip VFS.

Only if VFS classes (such as tarfs) were allowed to choose whether to
report full filenames in one directory or not.  I prefer to have separate
layers.  There is no reason for the user (i.e. somebody without knowledge
of the tar format) to expect different behavior from the normal
filesystem, where the files are shown in a tree.  Conversely, if the user
prefers the flat representation, it should be used regardless of the
origin of the file entries.

There is nothing virtual in a different representation of the same tree.
Anyway, it's an academic question and I don't really care.

> >>Two things are now more or less clear:
> >>
> >>    -  mc is in the stabilization phase and its level of maturity is
> >>    reasonably high,
> >>
> >>
> >
> >This doesn't apply to VFS, although I hope to fix it.  Support for
> >background operations is a worse problem - it's useful, but it cannot be
> >implemented correctly without major changes.
> >
> >I would say the code is in the "active stabilization" mode.  Sometimes the
> >only way to fix the code is to change it until it's clear what it does.
> >I'm not kidding, unfortunately.
> >
> >
> That's a common situation not limited to mc.  You might benefit from
> Doxygen <>

I actually tried it already.  I believe it doesn't address the problems with
the current code, although it may be useful in the future.  The problems with
the existing code are different.  I'm not concerned with where functions are
called - I can always find that.

Most often the problems are of a different kind:

1) Code duplication.  Examples - two parallel dialog stacks in dlg.c and
dialog.c (killed recently).  Similar but slightly different sequences for
Ctrl-O and Enter on the command line (still needs work).  A lot of
duplicate code in getid() methods in VFS classes.

2) Unused or useless code.  That's very mc-specific because of its
history.  Mostly removed.  Example - file caching for tarfs and cpiofs.

3) Code that has never been finished, without documentation of what it should
do.  Sometimes the code is very elaborate, but there is no one to
understand and finish it.  Sometimes it's OK to remove (PC port), sometimes
not (directory tree).  Sometimes the code mostly works, but fails in
complex situations (VFS garbage collection).

4) Saving on little details while losing on average.  Example -
retrieving 8k of each file on VFS when the user clicks on it.  That's the
minimum required to find the type of the file.  In fact, the whole file
was retrieved anyway, in some situations twice.  In most cases, the user
would still need the whole file and not just the first 8k.

5) Misplaced features.  Example - hex edit in the viewer.  Very hard to do
anything about it.

6) Simple hacks covered by layers of code to work around the defects.
Example - functions to capture stderr so that popen() can be used and the
child's error messages could be shown.

7) Unneeded complexity as a result of initial bad design.  Example -
background-safe message stub functions with fixed number of arguments.
Nobody thought that the arguments could be combined into one string in the
child process, rather than passed individually and combined in the parent.

8) Support for weird and obsolete software that is even hard to set up for
testing.  Examples - BSD curses, termnet, SCO console.  Mostly removed.

> >Please note that we don't currently include S-Lang interpreter, only
> >S-Lang screen library.  Also, it's possible to compile mc with ncurses
> >without S-Lang.  I believe ncurses support is worth keeping for now.
> >
> That complicates things.  If the current goal is simplicity, that should
> be weighed against the amount of code that needs to be maintained when
> you support both, versus the amount when you need to support just S-Lang.

I don't think keeping ncurses is very important, although it may be useful
for rare OSes and terminals.  It's not hard to maintain ncurses support.
It may be harder to fix or work around S-Lang limitations if the mc users
are forced to use it and start complaining.

One reason why ncurses support is important is UTF-8 support.  S-Lang is
patched for UTF-8 by distributors, while ncurses supports it natively.  We
need UTF-8 support.  It may happen that it will be easier to start with
ncurses.
> >I'd like to see how you would handle something like the Open action for
> >HTML (search for "html" in mc.ext) - that action indeed has grown
> >too large.
> >
> That's a drawback of any scripting approach -- it can be too slow. So be
> it. Nothing is perfect and scripting is not a panacea for all
> situations.

I just wanted to see the code.  If it searches for ten browsers in PATH
every time, I don't care.  If it's slow, we can fix it.  But if it's ugly
and cannot be easily extended or factored, then I'd rather use another
approach.
> >>It is true that S-Lang has its own warts and limitations, but please
> >>understand that in such a situation there is no best choice, only the
> >>worst choice, and that might be inaction.  The truth is that even the
> >>worst choice (let's say adopting bash ;-) is better than none: a
> >>scripting language is one of the few available avenues for addressing
> >>the feature creep problem.
> >
> >I really don't care that much about the programming language for the
> >scripts as long as it works.
> >
> >I suggested using bash for mc.ext, but others in this list didn't like
> >the idea.  I don't remember the reasons, but it should be archived.
> >
> Actually I was wrong.  Your idea of bash for mc.ext might actually not
> be a bad idea and could be used in the menu file too.

bash is used right now in both files.  Each action is a little script.

> >The real question is whether you want to make mc.ext and the menu file
> >single programs or not.  If they are single programs, then the choice of
> >the language doesn't matter - mc will be communicating with a program.
> >
> IMHO they are not monolithic scripts.  I suspect that they are closer to
> script libraries, similar to startup scripts, but this does not
> invalidate your idea.  For example, you can represent shortcuts as soft
> link names for the script.  Variables exposed by mc could simply be
> converted into environment variables like MC_ALTPATH for %D, or MC_FILE
> for %f, etc.  That might help to solve the problem with non-working
> substitution in the quick cd command.  The question is what is the return
> on the investment in the simplification of code, as better is the enemy
> of good.
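For illustration only, the quoted environment-variable idea might look like
the sketch below.  MC_FILE and MC_ALTPATH are the hypothetical names proposed
in the quote (standing in for %f and %D), and view_action is a made-up
example action, not anything mc ships:

```shell
# mc would export its substitution variables before running the action
# script.  MC_FILE / MC_ALTPATH are the hypothetical names from the
# quote, replacing the %f and %D macros.
export MC_FILE="chapter1.tar.gz"
export MC_ALTPATH="/home/user/backup"

# The action script then reads ordinary environment variables instead
# of depending on mc's own %-substitution.
view_action() {
    echo "Viewing $MC_FILE (other panel: $MC_ALTPATH)"
}

view_action
# prints: Viewing chapter1.tar.gz (other panel: /home/user/backup)
```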

Substitution of variables is done by mc.  That alone doesn't make it a
separate language.  We could use environment variables and pipes to
communicate with the script, but I don't see any strong reason for that.
Besides, we could lose some features, such as prompts.  It's inherently
unsafe to use dialog code from more than one process running on the same
terminal, so we'd need something similar to the background code to pass
messages to the foreground process.

Substitution has its limitations, but they should be easier to overcome.
We don't have PATH search and alternate actions in mc (like "use catdoc if
present in PATH, else use word2x, else use strings").  Also, we don't have
string continuation, so the script is sometimes hard to read, let alone
change for a casual user.

If we are going to reuse code in the scripts, there should be a "common"
section that gets copied to every script.  It's also missing.  PATH search
could go there instead of the C code.
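A rough sketch of what such a "common" section might contain, under my own
assumptions: first_in_path is a hypothetical helper name, and catdoc, word2x
and strings are the alternates mentioned above:

```shell
# Hypothetical "common" section, copied into every generated script.
# first_in_path prints the first of its arguments found in PATH.
first_in_path() {
    for cmd in "$@"; do
        if command -v "$cmd" >/dev/null 2>&1; then
            echo "$cmd"
            return 0
        fi
    done
    return 1
}

# Alternate actions: use catdoc if present in PATH, else word2x,
# else fall back to strings.
if viewer=$(first_in_path catdoc word2x strings); then
    echo "Would view with: $viewer"
fi
```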

My biggest concern is that we allow users to modify the script locally,
but we change the default script as well.  There is no mechanism for
merging the user's changes when mc is upgraded.  Ideally, only changed parts
should survive.  But since mc.ext is used as a template, it's saved as
~/.mc/bindings in its entirety.

All these problems are mostly unrelated to the language.  Maybe Perl
scripts could be shorter, offer code reuse and simplify PATH search.
Maybe S-Lang scripts could be parsed in the parent process.  But we still
need a protocol for talking to the script and we need a mechanism to
separate local customizations from the distributed default settings.

> >If you want mc.ext and the menu file to be sets of scripts that are
> >parsed by mc, then I can say that they are already written in bash.
> >
> I agree about bash.  Both look like sets of scripts, and as I mentioned
> above it might make sense to store them in a way similar to startup
> scripts.  To achieve the effect of generating the menu that is used in
> mc, you need a special option for each script (like "status"), and
> when invoked with this option the script needs to produce a menu entry
> if its conditions are met, and nothing if not.
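The quoted scheme might be sketched as follows.  The "status" option name and
the menu_entry function are my assumptions for illustration, not anything mc
implements, and MC_FILE stands in for the current selection as in the earlier
environment-variable idea:

```shell
# One hypothetical menu script.  Invoked with the special "status"
# option it prints its menu entry only if its condition holds;
# invoked with "run" it performs the actual work.
menu_entry() {
    # Condition: offer the entry only for directories.
    if [ -d "$MC_FILE" ]; then
        echo "Compress the current directory"
    fi
}

case "${1:-}" in
    status) menu_entry ;;                            # mc builds the menu from this
    run)    tar czf "$MC_FILE.tar.gz" "$MC_FILE" ;;  # the action itself
esac
```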

Then you'd have to split each entry into two scripts - one that decides which
menu items to show, and another that does the work when the menu item is
selected.  Separating the conditions from the code won't make it easier to
edit the menu.

That's why I want to see the code.  Maybe you see how to do it nicely, but
I don't.

> >>The other solution might be exposing an API and introducing a FAR-style
> >>plug-in mechanism, although plug-ins have their own problems, and having
> >>two dozen or more of them installed leads to "overcrowding of the
> >>kitchen".  But technically this is as simple as introducing a plug-in
> >>menu in addition to the user menu, or an option to call a plug-in from
> >>the user menu.
> >
> >This would lock the API at the time when significant reorganization of
> >code is underway.
> >
> True, but my point is that it's easier than adding a scripting language,
> because you can partially reuse ideas from the FAR and Total Commander
> APIs.  And that might suggest some directions for reorganization of the
> codebase.

I'm not looking for new ideas at this point.  I want to fix what is
already present.

> >There are so many issues here, let me comment on them separately.
> >
> >Pure C.  Yes, I appreciate your insight.  Even slightly "impure" C,
> >like C with gcc extensions, could be helpful.  The Linux kernel is an
> >example.  It's well structured thanks to macros, initializers of
> >structures, constructors and explicit exports.
> >
> The least common denominator prevails in applications.  The kernel is
> quite another story and a different animal altogether.  And the only
> realistic way to implement this idea of yours of a "better C" (which is
> right by itself) is to separate mc into a core and peripheral parts.
> Which again returns us to the scripting language and/or plug-in issue.

Compiler-specific tricks are out of the question for now.  Code separation
and improvement is going on.  We have fewer interdependencies with VFS now
than in 4.6.0.  glib also helps with things like lists.  VFS file handles
are in a list now.

> >But in fact, the code is screaming for a language with OOP support.  A
> >lot of incorrect interdependencies would have been avoided if the
> >compiler were enforcing some rules.  It's especially important for
> >collaborative projects.
> >
> C++ would probably harm as much as it might help.  This is like the
> quote from "The Twelve Chairs": "all the money was eaten by the
> 'apparatus'".

I don't know.  I believe many bad hacks we have in the code now would not
have appeared if the project had been using C++.  Our init_* and done_*
functions would become constructors, and it would be harder to put
irrelevant code there when it belongs elsewhere.

The scary vfs_add_noncurrent_stamps() would become VfsGcList::Add with the
timeout argument, as opposed to vfs_add_current_stamps(), which would also
be VfsGcList::Add, but without the timeout argument.  Even if the only
comment in that function were "Dijkstra hates me", it would still be easier
to understand what the function does.

We are modeling C++ in many places.  We have classes and limited
inheritance in VFS.  We have 4 types of panels with some common traits
that could have been subclasses of a single parent, which in turn would be
a subclass of Widget.

The only free software project using C++ in which I participated was
IceWM.  It's not bug-free, but it's easier to understand.  The code I
wanted to fix could be found in minutes.  The bug turned out to be a
feature that couldn't be turned off.  Not the worst kind of bug.

I'm not aware of any project that suffered because it was using C++
instead of C.  I don't think C++ could be an obstacle with modern GCC and
fast computers.  The situation could have been different in 1995, when mc
was started.
Of course, if I was rewriting mc from scratch, I would go with something
more radical, like Ruby or Lisp.  But it's completely unrealistic to
convert the existing code to something so radically different.

> >I know programmers that only work in GUI.  There was the punchcard
> >generation, the command line generation, the text UI generation (TUI), and
> >now we have the GUI generation of programmers.  GNU Midnight Commander
> >serves only the text UI generation of users, and so it's not appealing to
> >the young users and programmers.  It's a niche project.
> >
> IMHO not true.  Good young programmers know and use the command line,
> sometimes even better than the "old guard".  For example, the
> productivity of a skilled admin using mc is such that people with pure
> GUI skills cannot compete.  And since sysadmin is a mass specialty, mc is
> in no way a niche project, and it's *very* appealing to young sysadmins
> and programmers.  Some of my students became expert users of mc or FAR
> in a year or so.

Good to know.  That's very different from my experience.

> The second consideration is that it might represent a new generation of
> command line interface for bash (or another unix shell), and one can
> think of mc as a bash extension.  Bash is not going away any time soon no
> matter what interface you prefer.  That's why I consider your idea of
> converting mc.ext and the menu file into libraries of bash scripts to be
> a very good idea.

It's your idea.  I didn't say that.  I actually need to think about it.
If we keep separate scripts, it would partly solve the upgrade problem I
mentioned before.

> >There are 3 large free office suites in development, so programmer
> >resources are not scarce.  Those who started with Microsoft Word are
> >young and have enough time.  Those who started with Norton Commander
> >have serious jobs and in many cases families.  We are an endangered
> >species - the text UI programmers with enough time for free software
> >that one won't put on a resume like the Linux kernel, Apache or
> >cryptography.  But let's not generalize.
> >
> You might be overestimating Open Office stuff. It's all Sun's money.

Maybe.  I'm judging by the opinions of people who don't participate in its
development.
> I think that open source has more chances at the command line level,
> where things are simpler and programs can be smaller.  IMHO the best
> chance to compete at the GUI level is to adopt a scripting language with
> Tk or Qt or a similar library, and that's not an easy game from the point
> of view of speed.

I think it's better to have a choice.  Sometimes speed is essential.

> But for the problems with the low-level compiled-language approach in GUI
> applications, just look at GNOME.
> All this "attempt to catch up with and then leave behind Microsoft" is
> IMHO partially a temporary aberration of an overenthusiastic crowd (or,
> if you are in the IPO game, a new "make money fast" game) that volunteer
> developers may pay for with their health, family life, etc.

I cannot judge the code of GNOME.  Maybe some code is of low quality.  I
don't know.  What was left in the mc codebase was pretty bad, but those
were the first steps, and I cannot judge the project by them.  I know that
gtk is quite good inside, even if it's hard to bootstrap.  GIMP 1.3.x is
excellent, but I'm just a user, and not a very demanding one.

> As the complexity grows, the advantage of proprietary development grows
> too, and IMHO Microsoft or IBM or Sun or another big software company
> will always have the upper hand in Office-style applications, just
> because they can mobilize far bigger resources and impose the stricter
> discipline that is absolutely necessary for such a tremendous task.
> I am convinced that simplicity is the only advantage of open source, and
> thus command line applications deserve a prominent place among open
> source tools.  This advantage will never go away.

I disagree.  Simplicity is a plus, but not the only one.  There are other
advantages strongly correlated with the openness of the development process
and the motivation of the developers.  It's possible to beat the big guys
by making sound design choices and by not being bound by backward
compatibility, third-party code, and hard-to-replace employees with low
skills and motivation.

Although this is not about free software, "Beating the Averages" by Paul
Graham demonstrates all my points:

> >OK, I'm afraid I cannot spend any more time on the discussion unless
> >you show me something more specific.
> >
> Agreed.  Thank you for discussing these issues with me.  Good luck!

I hope my answers have been useful for you.

Pavel Roskin
