Slice a string containing Unicode chars


I have a piece of text with characters of different byte lengths.

let text = "Hello привет"; 

I need to take a slice of the string given start (included) and end (excluded) character indices. I tried this

let slice = &text[start..end]; 

and got the following error

thread 'main' panicked at 'byte index 7 is not a char boundary; it is inside 'п' (bytes 6..8) of `Hello привет`' 

I suppose this happens because Cyrillic letters are multi-byte and the [..] notation slices by byte indices. What can I use if I want to slice by character indices, as I do in Python:

slice = text[start:end] ?

I know I can use the chars() iterator and manually walk through the desired substring, but is there a more concise way?

 


Possible solutions to codepoint slicing

I know I can use the chars() iterator and manually walk through the desired substring, but is there a more concise way?

If you know the exact byte indices, you can slice a string:

let text = "Hello привет";
println!("{}", &text[2..10]);

This prints "llo пр". So the problem is to find out the exact byte position. You can do that fairly easily with the char_indices() iterator (alternatively you could use chars() with char::len_utf8()):

let text = "Hello привет";
let end = text.char_indices().map(|(i, _)| i).nth(8).unwrap();
println!("{}", &text[2..end]);
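If you need this operation often, you can wrap the conversion in a small helper. This `char_slice` function is just a sketch (it is not part of the standard library): it maps character indices to byte offsets and then slices, still in O(n):

```rust
// Hypothetical helper: slice a &str by *character* indices
// (start inclusive, end exclusive). Runs in O(n).
fn char_slice(s: &str, start: usize, end: usize) -> &str {
    // Map a char index to its byte offset; s.len() if past the end.
    let byte_at = |char_idx| {
        s.char_indices()
            .map(|(i, _)| i)
            .nth(char_idx)
            .unwrap_or(s.len())
    };
    &s[byte_at(start)..byte_at(end)]
}

fn main() {
    let text = "Hello привет";
    println!("{}", char_slice(text, 2, 8)); // prints "llo пр"
}
```

Note that this still panics if `start > end` or if the indices are otherwise invalid, just like byte slicing would.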

As another alternative, you can first collect the string into Vec<char>. Then, indexing is simple, but to print it as a string, you have to collect it again or write your own function to do it.

let text = "Hello привет";
let text_vec = text.chars().collect::<Vec<_>>();
println!("{}", text_vec[2..8].iter().cloned().collect::<String>());

Why is this not easier?

As you can see, none of these solutions is all that great. This is intentional, for two reasons:

As str is simply a UTF-8 buffer, indexing by Unicode codepoints is an O(n) operation. Usually, people expect the [] operator to be an O(1) operation. Rust makes this runtime complexity explicit and doesn't try to hide it. In both solutions above you can clearly see that it's not O(1).

But the more important reason:

Unicode codepoints are generally not a useful unit

What Python does (and what you think you want) is not all that useful. It all comes down to the complexity of human language and thus the complexity of Unicode. Python slices Unicode codepoints, which is what a Rust char represents: a 32-bit value (21 bits would suffice, but it's rounded up to a power of two).

But what you actually want to slice is user-perceived characters. That, however, is a loosely defined term: different cultures and languages regard different things as "one character". The closest approximation is the "grapheme cluster", which can consist of one or more Unicode codepoints. Consider this Python 3 code:

>>> s = "Jürgen"
>>> s[0:2]
'Ju'

Surprising, right? This is because the string above is:

  • 0x004A LATIN CAPITAL LETTER J
  • 0x0075 LATIN SMALL LETTER U
  • 0x0308 COMBINING DIAERESIS
  • ...

This is an example of a combining character that is rendered as part of the previous character. Python slicing does the "wrong" thing here.
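The same pitfall is easy to reproduce in Rust, where codepoint iteration is at least explicit. A minimal sketch, with the decomposed string hard-coded:

```rust
fn main() {
    // "Jürgen" in decomposed form: J, u, U+0308 COMBINING DIAERESIS, r, g, e, n
    let s = "Ju\u{0308}rgen";
    // Seven codepoints, even though only six "characters" are perceived.
    assert_eq!(s.chars().count(), 7);
    // Taking the first two codepoints splits the u from its diaeresis,
    // just like the Python slice above.
    let first_two: String = s.chars().take(2).collect();
    assert_eq!(first_two, "Ju");
}
```

So slicing by codepoints in Rust would do exactly the same "wrong" thing.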

Another example:

>>> s = "ﬁre"
>>> s[0:2]
'ﬁr'

Also not what you'd expect. This time, fi is actually the ligature ﬁ (U+FB01), which is one codepoint.

There are far more examples where Unicode behaves in a surprising way. See the links at the bottom for more information and examples.

So if you want to work with international strings that should be able to work everywhere, don't do codepoint slicing! If you really need to semantically view the string as a series of characters, use grapheme clusters. To do that, the crate unicode-segmentation is very useful.


Further resources on this topic:
