Nim Programming Language 0.11.0 released (nim-lang.org)
162 points by def- on April 30, 2015 | hide | past | favorite | 128 comments


It's a little thing, and not uncontroversial, but I'm excited to see another language besides Python adopt significant whitespace. In general I wish more languages provided a way to enforce code layout.

After using Python professionally (and managing a team of developers) for a few years, I've found whitespace is a great way to keep things readable as more people touch code.

Whitespace isn't the only way to do it, of course. Go's "gofmt" is a good example of a different way to enforce code layout. [1]

[1] https://blog.golang.org/go-fmt-your-code


I was always strictly against whitespace in Python because it can lead to all kinds of weird errors. But Nim (and also Haskell) has static types, so most wrong indentations are caught by the compiler. That's really nice to have.


I've been a Python dev since 2008 and I can count on one finger the times this has actually been an issue. I'd love to have you enumerate the "all kinds of weird errors" for me. Not saying you're wrong, mind you, just want to understand.


I've been at it since 2000. There were a few times in that first afternoon while doing the tutorial... March 2000, I think. But then I configured my editor to use spaces and subtly show tabs, and that was the last time, yeah.

People that complain about whitespace haven't used Python much, it's a dead giveaway.

And yet, the benefits are enormous, I've probably skipped typing a hundred thousand redundant braces since then, and ended up with more readable code.


I have to agree - the only whitespace-related errors I've ever come across are "IndentationErrors" - which are pretty easy to resolve.


I remember a bug caused by that: after a merge, a statement ended up outside its if block.


Ok this sounds legit. Thank you. But if you think about this carefully it means that two people wrote the exact same expression in different scopes, which seems really, really odd, no? I'm trying to contrive this in my mind and I'm having difficulty (my fault, obvs).

If this happens again it deserves its own blog post describing the situation carefully!


That's scary... this is why I'm unhappy with merge tools that don't understand the underlying language


I wonder if it was an automatic merge or whether someone just didn't eyeball the code properly?


Tabs are considered the same as 8 spaces, so if you ever have a single moment of lax enforcement of "only tabs" or "only spaces" you can get bizarre results.
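For what it's worth, Python 3 refuses to guess in exactly this situation rather than producing bizarre results. A minimal sketch (CPython 3 behaviour) compiling a block that mixes a tab-indented line with an eight-space-indented one:

```python
# Python 3 raises TabError when indentation compares differently
# depending on whether a tab counts as 1 column or 8, rather than
# silently picking one interpretation.
src = "if True:\n\tx = 1\n        y = 2\n"  # tab-indented line, then 8 spaces

try:
    compile(src, "<example>", "exec")
    outcome = "compiled"
except TabError:
    outcome = "TabError"

print(outcome)  # → TabError
```

Python 2, by default, would have happily treated the tab as 8 spaces here, which is exactly the lax-enforcement trap described above.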


Jumping on this wagon. Have been using and teaching Python for more than 10 years and have never seen any weirdness. Python actually warns when tabs and spaces are mixed in blocks.


I've always felt the argument against whitespace due to "indenting 2 spaces instead of 4 spaces" or something silly is like trying to argue "Braces are bad because you might forget to close a brace!"

IDEs handle both scenarios. My IDE largely remembers how far in I should be indented, and if I type a brace it closes it for me.

If I program in a plain text editor like Notepad, I still don't mess up my indenting any more than I forget to close a brace or parenthesis. It's a silly argument formed from speculation and "but it's possible!" rather than reality.


>> it can lead to all kinds of weird errors

Quite the opposite. Whitespace for me is a godsend - all code is formatted the same way and I rarely if ever have to deal with whitespace related errors. In fact I don't even know what whitespace related errors are - you'd have to explain what you mean.

On the other hand, when programming in JavaScript I seem to spend all my time working out issues related to brackets and code structure, which is very, very annoying and silly. Can't computers work brackets out? Do we need to tell them what the code structure is?


Ken Arnold said it best in his "Style is Substance"

http://www.artima.com/weblogs/viewpost.jsp?thread=74230


The argument of whitespace vs braces is largely a religious one. Any argument for one case is equally valid for the other, and it usually just boils down to people trying to justify personal preference. eg:

    Python: white space is a great way to keep things readable as more people touch the code.

    Perl / PHP / etc: braced blocks are a great way to keep things readable if the formatting breaks.

    Python: People can get lost in braces.

    Perl et al: People can forget to tab.
etc

Personally I prefer braces, but I've worked with whitespace and verbose ALGOL-style blocks too. At the end of the day, all these arguments are just theoretical because most people manage just fine 99% of the time.


> The argument of white space or braces is largely a religious one.

True. That said, my main point is that standardizing code layout is very helpful, whether enforced by the compiler or by a preprocessor (like gofmt), and more languages should encourage it out of the box.


Quite a few languages do standardise style though (eg C# https://msdn.microsoft.com/en-us/library/ff926074.aspx). Just as quite a few (possibly most?) companies will have their own coding policies as well.

The problem is when individuals decide to go off piste and do their own thing. But then those same individuals could equally be likely to bypass gofmt when writing code (which is very easily done), or to ignore company policy about tab spaces et al when writing Python code.

That said, I've learned to love gofmt for the laziness it brings. Text editors that auto-close brackets and such can often be more of an intrusion than an asset, whereas gofmt only jumps in after the document is saved and the syntax is parsed for errors. So sometimes I find myself just throwing code down knowing that gofmt will tidy it up for me as I go. Which I'd be the first to admit is a pretty bad habit to get into hehehe


IDEs typically close braces not to help you type or to enforce format rules, but to provide better feedback during a language aware editing session. Compilers can do amazing things with error recovery for code being edited, but still have problems inferring missing braces.


You misunderstand me; I wasn't suggesting auto-closing of brackets was a method of enforcing format rules (that would be a dumb statement to make for the very reasons you highlighted).

I was just commenting on how non-intrusive gofmt is compared to the other tools that alter code while you're programming.


Yes, but they are solving different problems so aren't very comparable, even if they both work by modifying code. I just want to clear the air about why we IDE developers go this route.


The fact that they are both modifying code makes them comparable from an end user experience perspective, which was the comparison I clearly made.

I appreciate you wanting to be specific about their intended purpose but, frankly, you're stating the obvious. Anyone who's spent more than 5 minutes inside an IDE (or even basic programming-geared text editor) will already be well versed on the subject. You don't need to be an IDE developer to understand this (and to be honest, what developer hasn't written their own programming environment at some point in their lives anyway? :p )

In any case, I was just making a light-hearted passing comment; hence the "hehehe" at the end of it. It didn't really add anything to my original post and certainly wasn't meant to be picked apart in this level of detail.


You'd be surprised how many developers think brace completion or even code completion is trying to help them type. I've been in huge flame wars in the past where users simply misunderstood why a feature was there and what its benefits were (brace completion: better interactive feedback; code completion: API discovery).


The only case where I have found a brace-type syntax to be superior is when you do quick REPL stuff. With Ruby I frequently write long, compact reams of statements combining blocks via map, inject, each, etc., all on a single line, which plays nice with Readline history.


My point is that there isn't a "superior" syntax. It's just personal preference.


And my point is that there are pros and cons.


Of course there are. I even said this in my first post. But at the end of the day you can draw up a list of pros and cons about anything that essentially just boils down to personal preference. What I take issue with is when people start using those pros and cons as an argument for superiority when what they really mean is "I personally prefer..."


I prefer whitespace, though I've learned to live with both by minifying brace-only lines to 3-point fonts in my IDE, which might be a nice compromise.

The only problem with whitespace languages is multi-line lambdas.


I notice they have now included the Aporia IDE and Nimble package manager in the Windows installer. This seems like a good step toward getting newcomers up and running quickly, and getting more feedback from the community about the core tools.

It seems like many new languages are going with this route of tools supported by core-dev, such as package management. For instance Go ships its code formatting tool, which seems like a good idea. Supporting an IDE seems more questionable; for instance Python never seemed to have much success with IDLE, and IDE preference is very personal.

Anyway, as someone who primarily writes Python at work I find Nim to be very usable because the syntax and style is so familiar, and I really like most of the decisions they have made. I prefer Nim's static typing and compiled executables for redistribution, and they have introduced nice new concepts around concurrency and more advanced language features. Definitely looking forward to using this language more and more as it moves toward 1.0.


It's very exciting that Nim has generator syntax in a compiled, statically typed language. To me, recursive generators with "yield" and "yield from" expressions are the most natural way to express iterative algorithms.

The Ranges proposal for C++17 is great for consuming generic algorithms, but it's far behind generators for creating generic algorithms. Simple example: try to write an iterator for a tree that doesn't have parent pointers. With ranges or iterators, you must maintain a stack as a data member. With recursive generators, the variables on the stack are captured implicitly in the language. I only know a little about compilers, but it seems like a compiler with syntax tree knowledge of the entire program should be able to optimize generators into machine code that's just as good as iterators - i.e. not copying all the caller-save registers onto the stack every time a generator recurses.
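Sketched in Python (whose generators behave like the ones described): a pre-order walk of a tree with no parent pointers, where the recursive generator's call stack implicitly holds the state an iterator class would have to keep in an explicit stack member. The Node class here is just for illustration:

```python
class Node:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

def walk(node):
    # Pre-order traversal as a recursive generator: no explicit stack.
    # Each recursive frame is suspended and resumed by "yield from",
    # so the local variables of every level are captured implicitly.
    yield node.value
    for child in node.children:
        yield from walk(child)

tree = Node(1, [Node(2, [Node(4)]), Node(3)])
print(list(walk(tree)))  # → [1, 2, 4, 3]
```

The equivalent external iterator would need a data member holding the stack of not-yet-visited children, which is exactly the bookkeeping the comment above complains about.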

I think generator syntax is truly a killer feature for any kind of serious algorithmic work. C++11 made it a lot easier to write and use generic algorithms, but writing iterators is still a big task that reluctant C++ programmers avoid. IMO, fear (and actual difficulty) of writing iterators is the main reason most people don't enjoy C++. If C++ had shipped with generators from the beginning, I think the programming language landscape would be very different today.

Nim seems to have a small community and limited promotion, so I do not feel especially hopeful that it will compete with C++ at the same level as Rust. (I am not sure if Rust will upset C++ in any significant way either!) Perhaps a well curated set of Nim/Rust/C++ comparisons could convince the C++ standards committee or the Rust team to add generator syntax.


Indeed, ever since making the switch to external iteration, Rust has had a far-future desire for a `yield` construct to simplify the creation of iterators (inspired by C#). See https://github.com/rust-lang/rfcs/issues/388


Congratulations to the Nim team!

I did a quick comparison between the up and coming compiled languages (D, Rust, Nim, Go) and C++ a week or so back. My main aim was to assess the final statically linked binary size for a simple hello world program.

Here are the results on x64 Mint:

1. C++: 1.6 MB

2. Go: 1.9 MB

3. Rust: 550 KB with dependencies (I was not able to figure out how to pass a static option to rustc)

4. D: 710 KB, same as Rust

5. Nim: 970 KB, statically linked

Nim wins, by a large margin. I did not remove any of the runtime checks by the way. Removing them actually saves 100 KB+, depending on the program. Furthermore, the Nim program was actually a naive Fibonacci number calculator, not a simple hello world, meaning that Nim should have been at a disadvantage! Amazing stuff.


> (I was not able to figure out how to pass a static option to rustc)

We statically link by default, actually: https://gist.github.com/steveklabnik/e9e744d23a7a14edcd47

Everything but glibc. Experimental support for musl landed a few days ago.

You can take this _really_ far: http://mainisusuallyafunction.blogspot.com/2015/01/151-byte-...


Yeah, I saw that article. Damn cool stuff. That's why I'm confused about static linking in Rust. Am I doing something wrong?

http://i.imgur.com/gtlJoO0.png


That's the glibc stuff I mentioned above. You generally don't statically link glibc.

I don't have the setup to try out the musl stuff, but there's instructions here: https://github.com/rust-lang/rust/pull/24777


For C++ most of that is iostream.

   $ cat hello.cpp

   #include <cstdio>
   int main(void) { puts("Hello, World!"); }

   $ g++ -static hello.cpp
   $ du -h a.out
   857K	a.out
Compiling for size (-Os -fdata-sections -ffunction-sections hello.cpp -Wl,--gc-sections) gives 816K

With iostream I get 1.6M


I had to check, but apparently that'll work for utf8 on posix (windows not so much) with unicode literals and std=c++11 (or higher presumably). Nim assumes utf8. For c++:

  puts(u8"World is not spelled 'wørld'");


Interesting point. It makes sense to use stdio.h in applications where size is the priority.


Or where performance is a priority, in my experience. iostream is both slow and bloated.

... I don't like its API either (it's a horrible abuse of operator<<, and is the exact kind of thing you tell people not to do with operator overloading...)


Does stdio.h work well with std::string? Or do you mean dumping strings entirely and just working with char arrays?


I just meant iostream. std::strings work fine with cstdio via std::string::c_str() (e.g. printf("%s\n", some_string.c_str()) works fine).

That said, I tend to avoid std::string as well, due to other performance issues (you tend to end up with a lot of copies and heap allocations you don't really need). Then again, I work in game development and don't have to do much string processing; someone in another domain might find this more painful than I do.


That's cool, but with disk space and memory being as cheap as they are, is this still a big deal in anything but the smallest of embedded environments?


The smallest of embedded environments would love to have language options besides C and assembly, thank you very much. :-)


I agree that disk space has gotten a lot cheaper, but bandwidth speed hasn't increased tremendously. Binary sizes definitely are an issue.

And re:memory, it's pretty tight on most end-user machines. By default, most laptops only come with 4-8GB of RAM. And in the cloud, memory cost is perhaps even more expensive. On ec2, a 'large' instance usually has only between 4-8GB of RAM. The default small instances have only 2GB of RAM, which doesn't give much room left over for applications after you account for the OS and various utilities.

It can definitely add up.


Does anyone else find it obscene that 4 GB of RAM is considered inadequate in a laptop? And I'm not just saying this in response to your comment; that seems to be a common opinion. For example, in the 2012 novel _Off to Be the Wizard_ (which I found quite entertaining), the protagonist answers the question, "What on earth can a person do with 4 gigabytes of RAM?" with, "Upgrade it immediately."


Welcome to the "NodeWebkit" world. Where wanting to open a file bigger than 2MB is considered "unreasonable" and an app taking 200MB of RAM to show a few small files is normal.


There are two sides of this coin.

0) As programs are expected to do more or run faster, they are required to use more computational power and memory. As examples, this is especially true of 3D Rendering or Video Editing software.

1) Because of the price and abundance of hardware, many programmers have stopped caring about nearly all optimization unless it brings their program to a halt, requiring more computational power and memory to run many of these programs.

To many people $50-$60 for 8GB of 2x4GB DDR3 RAM isn't a lot of money. It's about as much as a date to the movies and cheaper than going out to dinner with the family.


Many people use laptops as their primary workstation these days.


I think this kind of thing demonstrates portability at a lower cost and better compile-time optimizations. You would think the other languages, especially C++ and Rust, would do better in the latter.


The real problem is that glibc has a hard time being statically linked. We just use the native system C library in Rust, which on Linux is usually glibc.

We could focus on musl support in order to get true static linking, but there are much higher priority things like 1.0 stability, optimizations, and compiler performance (though we'd take a patch of course). The fact is that pure static linking is rarely done anymore, at least on mainstream desktop/mobile/server systems, so it's not on anybody's critical path.

Dynamically linking to glibc does not really provide "better compile-time optimizations". LLVM already understands the libc functions that really benefit from inlining, like memcpy.


Optimizing for size often has an inverse impact on runtime, for instance if the compiler would have inlined heavily.

edit: tense clarification


It's small with a small program, but for something substantially larger it can certainly make a difference


> 4. D: 710 KB, same as Rust

> 5. Nim: 970 KB, statically linked

> Nim wins, by a large margin.

970 KB > 710 KB.


But 970 KB < 710 KB + larger dependencies. Both Rust and D have dependencies, as mentioned by the parent comment.

You can argue whether or not the dependencies should be included in an executable's size, but I think the comment is trying to show that Nim can create a complete program (with no external dependencies) in less space.


Didn't he mention that these binaries are statically linked? i.e. no external dependencies?


"I was not able to figure out how to pass a static option to rustc"


I mentioned that I couldn't get a fully static binary with Rust and D.


sounds like the D binary was not statically linked.


It was not. I couldn't find the correct compiler flag.


Congratulations to the developers for the impressive list of improvements and fixes! Nim is becoming my first choice of programming language in a large number of situations.

I hope to see version 1.0 soon. Keep up the good work!


Good to see the Nim project moving closer to a solid 1.0.

Now all we need is a cross-platform GUI library that uses native widgets on each platform, such as a set of bindings for wxWidgets or (better yet) a translation of SWT from Java to Nim, and we'd have an excellent foundation for self-contained, cross-platform desktop apps. Yes, I know desktop apps aren't trendy (at least on platforms other than Mac), but they're still important.


Does Nim still allow `fooBar()` and `foo_bar()` to refer to the same function? It makes grep-powered refactoring unnecessarily hard.


Yes, Nim still uses partial case insensitivity: http://nim-lang.org/manual.html#identifier-equality

I agree that it's a problem with grep; there's a nimgrep tool that you could use instead, but I don't.

The nice part about it is that you can use a consistent naming convention even when using external libraries.

My idea to solve the problems this causes without removing it would be a gofmt like tool as described here: https://github.com/Araq/Nim/wiki/GSoC-2015-Ideas#nimfmt-auto...
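The rule is simple enough to model. A rough Python sketch of the normalization the manual describes (first character compared case-sensitively, the rest case- and underscore-insensitively); the helper name is mine, not anything from Nim:

```python
def nim_normalize(ident: str) -> str:
    # Approximates Nim's "partial case insensitivity": keep the first
    # character as-is, then drop underscores and lowercase the rest.
    return ident[0] + ident[1:].replace("_", "").lower()

# fooBar and foo_bar collapse to the same identifier...
print(nim_normalize("fooBar") == nim_normalize("foo_bar"))  # → True
# ...but a different first character stays distinct.
print(nim_normalize("fooBar") == nim_normalize("FooBar"))   # → False
```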


Is there any way to disable this behavior? I've been curious about Nim, but this makes me cringe pretty hard. I get the intention, but this is exactly the kind of magic that I dislike.


> The nice part about it is that you can use a consistent naming convention even when using external libraries.

I think a slightly nicer (in the sense that it would catch more typos) way to do it might be to specify the naming convention at the top of the source file, and have the compiler enforce that naming convention within the file.

Maybe also have a way to set it in the build system, so it could enforce it for the whole project. That way your project is consistent, but the libraries you rely on don't have to match your conventions.


I was about to say the same thing (about enforcing the convention directly). But there's a problem. Let's say I write a function called `HTTPHeaderScan()`. Someone else might have used `HttpHeaderScan()`. Or worse: `HtTpHeAdErScAn()`. How do you enforce one over the other? If you said "everything must be CamelCase," how do you know that a given name is wrong?


Assuming you have a consistent naming scheme for your language I'd assume this would be mostly used to call libraries written in other languages


Oh god, this is so horrible. I've never seen this in the many languages I've used, and consider all the programmers happily using those languages without this "feature". Please kill this feature now while the language is still relatively young.


Your comment made me wonder if this feature was conceived as a red herring just to keep people from going on about significant whitespace. :-)

If I ever design a language I might have to remember that.


Ha! Great idea. Significant whitespace is a good thing. "partial case insensitivity"? KIWF! Or at least let it be overridden via configuration flags.


I haven't used Go or gofmt, but I don't quite understand the appeal. gofmt doesn't go both ways, right? You can't write Go in one style, gofmt it to share, and then un-gofmt it back to your preferred style. Seems like it would still be easiest to just get comfortable writing and reading in the style gofmt produces. At that point, gofmt is just for catching little mistakes, which, maybe, the compiler could just do for you. I see Python and Nim's significant whitespace as a step in that direction. Why not have the compiler/interpreter enforce a certain number of spaces for indentation? Why not enforce a certain naming style?


>Seems like it would still be easiest to just get comfortable writing and reading in the style gofmt produces.

That's the entire point of gofmt. Like many of Go's design choices, it's there to subtly (or not so subtly) encourage doing things in one consistent way.


Looks like it's still there in the manual:

http://nim-lang.org/0.11.0/manual.html#lexical-analysis-iden...

This just killed my excitement for this language. It's really surprising to me that they would choose to enforce indentation Python-style, but would then allow this kind of ambiguity in naming.

grep and emacs isearch-forward are such great tools for quick code searching and this will break them. I guess text search (and replace) tools could grow nim-mode options, maybe. I don't know, can someone convince me this is a good idea?


Suppose you have MyVariable and my_variable in your code. Which is worse: them being the same variable or different variables? It's a pretty bad code smell either way.

In my opinion it should be an error or at least a warning.

I'm not a big fan of Nim's behaviour, but I don't think it's any worse than what other languages do.


"Which is worse: them being the same variable or different variables?"

Them being the same is worse. I agree that both are pretty darn bad, but it's also pretty clear to me which is worse.


> Suppose you have MyVariable and my_variable in your code. Which is worse: them being the same variable or different variables?

Having them be the same, by far.

Having names with certain similarity be prohibited in the same scope is a bit excessively controlling but sensible. Allowing them but treating them as equivalent is just plain bad. If I named them differently then either: (1) I made a mistake, or (2) I intended them to be different.


Yeah, I think it would make more sense to treat them as the same, but throw an error if they are spelled differently.


> Suppose you have MyVariable and my_variable in your code. Which is worse: them being the same variable or different variables?

That's the same as saying "`a` and `A` should be the same variable", i.e. complete case insensitivity.

> It's a pretty bad code smell either way.

Very much so. We shouldn't encourage it.


Huh? This is terrible.

I use same names with different styles to denote scope. I will be hosed then.


This is my least favorite "feature" of Nim as well and almost killed my interest in it. But it hasn't come up in practice yet for me.


> I don't know, can someone convince me this is a good idea?

That would be a pretty hard sell. Any convenience argument falls flat to me.


> grep and emacs isearch-forward are such great tools for quick code searching and this will break them.

If you're using Emacs to begin with... well there is little reason to complain about the default behaviour of functions and keybindings.


Sure, but I don't use emacs exclusively (as often as that joke is leveled at emacs users, nobody really does that).


You must not use grep with regexes very often.


Are you serious? Quick, write me a grep regexp that will catch all equivalently valid forms of a nim_Identifier (or nimidentifier or nImID_entif_iEr, but not NimIdentifier).


Not really, sorry. I had forgotten that the first letter is compared case-sensitively; I remembered from Nimrod that it was all insensitive. But I still think that for nearly all real-world code, instead of using nimgrep you can be equally well served by a case-insensitive search on (nimidentifier|nim_identifier), even if it's technically possible to have other variants or a mismatch on the capital N, which you can grep out as a second step anyway.
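You could also generate the pattern mechanically. A hypothetical Python helper (my own, not part of nimgrep) that expands an identifier into a regex matching every form Nim would treat as equal:

```python
import re

def nim_variant_pattern(ident):
    # First character matched literally (case-sensitive); each remaining
    # character matched case-insensitively, with an optional underscore
    # allowed in front of it.
    head, rest = ident[0], ident[1:].replace("_", "")
    parts = [re.escape(head)]
    for ch in rest:
        if ch.isalpha():
            parts.append("_?[" + ch.lower() + ch.upper() + "]")
        else:
            parts.append("_?" + re.escape(ch))
    return "".join(parts)

pat = re.compile(nim_variant_pattern("nimIdentifier"))
print(bool(pat.fullmatch("nim_identifier")))   # → True
print(bool(pat.fullmatch("nImID_entif_iEr")))  # → True
print(bool(pat.fullmatch("NimIdentifier")))    # → False
```

Wrap the result in word boundaries (\b) and feed it to grep -E and you get a passable poor man's nimgrep.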


This feature makes very little sense to me. What's the justification for this?


Developer convenience. Some people like Hungarian notation, some like underscores. I consider it quite convenient as well. There is probably a need for a source-code normalization tool so that everyone can use their favorite notation across the whole Nim ecosystem.


I wouldn't consider the loss of identifier recognition a "convenience". This would be a major concern for my adopting the language.

You also have to read others' code, as well...



This doesn't make any sense. When I make a mistake in case, Rust just doesn't let me compile.


You get this problem when you have two identifiers (variables, functions, constants, etc.) that differ only in case and have the same type signature. Depending on whether you use one or the other form, you may get completely different behavior.


The convention is that upper case is types, lower case is variables. So you'd have to try hard to make that mistake.


That has nothing to do with types vs. variables.

Try having a variable signon and a variable signOn. Or two functions with the same type signature, one called signon() and one signOn() (for example, imported from two different modules).


By default, the Rust compiler warns you when you use uppercase characters in variable and function identifiers. The community overwhelmingly adheres to this style. Rust is also extremely strongly typed, so not only would you have to be willfully disabling the style warnings, you would have to manage to give them the same types, and even then, given the nature of Rust code, it's rather likely that you'd still get a compilation error from using a variable in a way that wasn't expected. You also can't "accidentally" introduce new variables in Rust, as you can in Python via typos.


Yes, enforcing case strictly via style guidelines that prevent mixing of upper- and lowercase is another way of avoiding that problem (underscores can still trip you up in a similar fashion, of course, e.g. signon vs. sign_on). I'm still not sure how the discussion got sidetracked into discussing Rust, though.


> I'm still not sure how the discussion got sidetracked into discussing Rust, though.

Yes, let's generalize: reading the blog and the comments, this "case sensitivity ruins productivity" problem seems to only come up in scripting languages.


You'd have to declare both of them in the current module. That's why I don't do glob imports.

I would see

    use foo::signon;
    use bar::signOn;
clearly, if I have to import this in my own file I should be VERY careful about those two functions

or rather, I'd import from foo, but I'd use the namespaced version from bar

so `signon` would be `foo::signon`, but `bar::signOn` would be written out fully to avoid this kind of clash


I'm really not sure what the point of referencing Rust is. This was about programming languages in general, and the post I was referring to was about C#. Also, importing is an example, another example would be accidentally declaring a second variable in the same module (should not happen, but people make errors), or a number of other situations where two identifiers that are identical except for case end up in the same scope.


I'm referencing Rust because I'm working on a Rust project right now. I don't even know C#.

Here's how you declare a binding:

    let signOn;
    let signon;
I seriously hope you don't declare both of these in the same scope


Safety! Having "my_cat" and "myCat" as different variables in the same space sometimes ends up with using the wrong variable, especially when using tab completion.

The feature is not meant to encourage developers to use "my_cat" and "myCat" randomly within the same file.


Safety might prohibit different identifiers with a certain degree of similarity, but it wouldn't treat different-but-similar identifiers as if they were the same.

Treating them as the same creates more danger -- the danger that the programmer intended them to be different, but the language treated them the same -- rather than mitigating danger.


"The feature is not meant to encourage developers to use 'my_cat' and 'myCat' randomly within the same file."

I'm sure it's not, but it isn't doing a thing to discourage that. We stopped using case insensitive file systems a long time ago, didn't we?


...actually we didn't.


> We stopped using case insensitive file systems a long time ago, didn't we?

Windows didn't.

But even Windows doesn't use an underscore-insensitive file system.


I don't like the idea of a language encouraging sloppy coding.


This is not different than having foo_bar and foo_Bar as two different variables.


Well, yes, it is.

And it's worse, especially when the variables aren't "foo_bar" and "foo_Bar", but, say, "cycle_often" and "cycle_of_ten".


> "cycle_often" and "cycle_of_ten".

Such an error is caught by the compiler.

  const cycle_often = 10
  var cycle_of_ten : float

  ambig.nim:2:5: Error: redefinition of 'cycle_of_ten'


I'd rather have them be different identifiers but with a compiler warning if they're found in the same file.


Sometimes the best name for a variable is a variation of the associated class/function name.


Why not just recommend one way of naming things and use it in all standard APIs? If someone deviates in his custom library, that's his problem. I think the below are pretty much standard in many languages now.

    THIS_IS_A_CONSTANT
    SomeClass
    someVariable
    someMethod()


I hadn't seen this language before, but it looks neat. Coming from a Ruby background, it's very interesting to see Nim's pragmas and templates, as they look rather like type-checked and compiled cousins of metaprogramming methods.


I concur. This is the first time I'm seeing it too. I come from a C++ background, with recent experience in Python and JavaScript. I'm rarely impressed by new languages, but for some reason, Nim feels like what I wish Python was.


Indeed. But Nim is much more expressive than Python. A very nice feature of Nim is its clean templates and macros. For instance, they let you do Perl-style regex matching as if it were native syntax:

  if line =~ rx"http[s]://(.+)":
    var url = matches[0]
    ...
where rx is defined as:

  # partial regular expressions as in Perl
  template rx * (arg: expr): expr =
    re(r".*?" & arg & ".*")
Another template example:

  template repeat * (body: stmt): stmt {.immediate.} =
    block:
      while (true):
        body

  template until * (cond: expr): stmt {.immediate.} =
    block:
      if cond:
        break

  # Sample:
  #
  #  var i=0
  #  repeat:
  #    echo i
  #    i += 1
  #    until i==7
Nice, isn't it?


> starting with version 1.0, we will not be introducing any more breaking changes to Nim.

I wonder whether they will stay committed to that. Pretty much every "living" language I know of that lacks strong industry support (and thus pressure not to break working code) introduces breaking changes all the time; see D, for example.

Nim could distinguish itself from the crowd with such a commitment to stability. However, I expect overwhelming pressure from the enthusiast crowd, which currently utterly dominates among Nim users, to introduce breaking changes even post-1.0. And there is little pressure from the other direction.


There will be plenty of pressure in the other direction, because of the fact that they announced it.

It's a selling point for the language - a green light for companies to use the language in production environments, and when they update things they don't want their existing software to break.


Maybe off-topic, but why exactly do we have hard and soft real-time? I think this distinction is terrible: either you have deadlines or you don't. E.g. games that lag are missing deadlines, and the result is terrible.

Currently we distinguish between hard real-time, soft real-time, and no real-time, but probably we should just have hard real-time everywhere, and if we do not care, we just give big numbers. Even if you are not hard real-time, you are probably expecting results before the heat death of the universe. So maybe it would be a nice idea NOT to abstract time away in programming languages, but to give the programmer the possibility to annotate timing constraints in, e.g., function definitions. Such information could be crucial to the GC.

Maybe we could build such a mechanism into a fancy type theory?


A network service is a good example of a soft real-time system. Under high load, it is preferable to delay the responses a bit instead of starting to drop requests.


Yeah, but isn't that just hard real-time with bigger deadlines?


No. Say we choose 100ms as the (soft) deadline, since some study showed users consider a response time anywhere within 100ms to be "instant". Under light load, all requests are served within 100ms. Under heavy load, response time of some requests can exceed 100ms, but the scheduling algorithm will (likely) try to minimize the quality loss.

If we increase the deadline to be 200ms, then even under light load, the response time of some requests can exceed 100ms. That's not what you want.


But in a hypothetical real-time programming environment you could switch strategies at runtime (which of course limits your heuristics, as the switching itself also has to be hard real-time; that is probably the hardest part). E.g. you could have different code paths depending on "load" (which is admittedly a vague term) and switch between different GC implementations on the fly.

What I mean is that you always have hard real-time in the end. E.g. you could give your network service a hard real-time limit of 100ms. Under light load you serve all requests with ease; if load gets heavier you hit an uncommon situation (by definition, otherwise you would not be satisfied with soft real-time) and everything changes. You could do different things, e.g. in the case of a news page just serve plain text, drop the images, and at the same time switch to a different GC that only guarantees 500ms.


Erlang was made for soft real-time, so it's a real thing. According to Wikipedia:

https://en.wikipedia.org/wiki/Real-time_computing#Criteria_f...

    Hard – missing a deadline is a total system failure.

    Firm – infrequent deadline misses are tolerable, but may degrade the system's quality of service. The usefulness of a result is zero after its deadline.

    Soft – the usefulness of a result degrades after its deadline, thereby degrading the system's quality of service.
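Those three definitions can be read as different "usefulness" curves over response time. A toy sketch in Python (the exact shapes, e.g. the linear decay for "soft", are my own illustrative assumptions, not part of the Wikipedia definitions):

```python
def hard(t_ms: float, deadline_ms: float) -> float:
    # Hard: missing the deadline is a total system failure.
    if t_ms > deadline_ms:
        raise SystemError("hard deadline missed: total system failure")
    return 1.0

def firm(t_ms: float, deadline_ms: float) -> float:
    # Firm: a miss is tolerable, but the late result is worthless.
    return 1.0 if t_ms <= deadline_ms else 0.0

def soft(t_ms: float, deadline_ms: float) -> float:
    # Soft: usefulness degrades gradually after the deadline
    # (here: linearly, reaching zero at twice the deadline).
    if t_ms <= deadline_ms:
        return 1.0
    return min(1.0, max(0.0, 2.0 - t_ms / deadline_ms))

print(firm(120, 100))  # 0.0
print(soft(120, 100))  # 0.8
```

This also shows why "soft real-time is just hard real-time with bigger deadlines" doesn't quite hold: a soft system still wants every result as early as possible, it just doesn't fail outright on a miss.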


I really wish I had a use-case for this powerful static language, but alas I am no ninja, guru, zen-master, sensei or rocket-scientist who is working on a problem that really needs this level of power/performance.

Python will always be "good enough".


Maybe you don't need performance, but you might like the laziness: by being compiled and statically checked, Nim can catch many bugs automatically with the default tools.


As much as I love it, Python is far from "good enough" for writing a system library.


I want to build something like meteor that leverages nim's compile to JS and C.


> Negative indexing for slicing does not work anymore! Instead of a[0.. -1] you can use a[0.. ^1].

What's the justification for this change?


The following discussion on Github may answer your question https://github.com/Araq/Nim/issues/1979
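For what it's worth, the same kind of edge case that makes "negative index means from the end" error-prone shows up in Python slicing too; I assume this is the class of bug the issue is about (the actual rationale is in the linked discussion):

```python
def drop_last(xs, n):
    # Naive "drop the last n elements" using a negative slice index.
    return xs[:-n]

print(drop_last([1, 2, 3, 4], 1))  # [1, 2, 3]
print(drop_last([1, 2, 3, 4], 0))  # [] -- xs[:-0] is xs[:0], not the whole list
```

With Nim's `a[0 .. ^1]`, "count from the end" is syntactically distinct from an ordinary index, so a computed index of 0 can't silently flip its meaning.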



