Swift seems to be trying to deprecate the notion of a string being composed of an array of atomic characters, which makes sense for many uses, but there's an awful lot of programming that involves picking through data structures that are ASCII for all practical purposes, particularly with file I/O. The absence of a built-in language feature to specify a character literal seems like a gaping hole, i.e. there is no analog of the C/Java/etc.-esque:
String foo = "a"
char bar = 'a'
This is rather inconvenient, because even if you convert your strings into arrays of characters, you can't do things like:
let ch: unichar = arrayOfCharacters[n]
if ch >= 'a' && ch <= 'z' { ...whatever... }
One rather hacky workaround is to do something like this:
let LOWCASE_A = ("a" as NSString).characterAtIndex(0)
let LOWCASE_Z = ("z" as NSString).characterAtIndex(0)
if ch >= LOWCASE_A && ch <= LOWCASE_Z { ...whatever... }
This works, but obviously it's pretty ugly. Does anyone have a better way?
Swift Character: Character is a data type that represents a single-character string ("a", "@", "5", etc.). A variable of this type, such as the letter variable in the sketch below, can only store single-character data.
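The description above refers to a letter variable that isn't shown here; a minimal sketch of what it presumably looks like (the name letter is simply the one the text mentions):

let letter: Character = "a"
// let bad: Character = "ab"   // rejected: a Character can only hold a single character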
Note that a Swift Character can be composed of multiple Unicode code points yet still appear as a single character. This is called an Extended Grapheme Cluster.
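For example (a minimal sketch using the standard combining-mark case), a base letter followed by a combining accent is still one Character, and compares equal to its precomposed form:

// "e" followed by U+0301 COMBINING ACUTE ACCENT is one extended grapheme
// cluster, i.e. one Character, even though it is two Unicode code points.
let eAcute: Character = "e\u{301}"      // é
let precomposed: Character = "\u{E9}"   // é as a single code point
println(eAcute == precomposed)          // true: the two forms are canonically equivalent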
A common convention for expressing a character literal is to use a single quote ( ' ) for character literals, as contrasted by the use of a double quote ( " ) for string literals. For example, 'a' indicates the single character a while "a" indicates the string a of length 1.
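Swift itself does not adopt that convention: single quotes are not a valid literal syntax at all, which is exactly the gap described in the question. A quick illustration:

// let c = 'a'            // does not compile: Swift has no single-quoted literals
let c: Character = "a"    // double quotes serve both String and Character literals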
Characters can be created from Strings as long as those Strings are only made up of a single character. And, since Character implements ExtendedGraphemeClusterLiteralConvertible, Swift will do this for you automatically on assignment. So, to create a Character in Swift, you can simply do something like:
let ch: Character = "a"
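The same literal conversion applies anywhere a Character is expected, for example as a function argument (a minimal sketch; isVowel is just an illustrative name):

func isVowel(c: Character) -> Bool {
    // A Character can be compared directly against double-quoted literals.
    return c == "a" || c == "e" || c == "i" || c == "o" || c == "u"
}
println(isVowel("e"))   // true -- the literal "e" is treated as a Character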
Then, you can use the contains method of an IntervalType (generated with the Range operators) to check if a character is within the range you're looking for:
if ("a"..."z").contains(ch) { /* ... whatever ... */ }
Example:
let ch: Character = "m" if ("a"..."z").contains(ch) { println("yep") } else { println("nope") }
Outputs:
yep
Update: As @MartinR pointed out, the ordering of Swift characters is based on Unicode Normalization Form D, which is not in the same order as ASCII character codes. In your specific case, there are more characters between a and z than in straight ASCII (ä, for example). See @MartinR's answer here for more info.
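To see that difference in practice, a character like ä falls inside Swift's "a"..."z" interval even though it is not an ASCII letter (a minimal sketch):

let umlaut: Character = "ä"
// Characters are ordered by their Normalization Form D representation,
// so "a" < "ä" < "z" holds and the check below succeeds.
if ("a"..."z").contains(umlaut) {
    println("contained")   // this branch runs
}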
If you need to check if a character is in between two ASCII character codes, then you may need to do something like your original workaround. However, you'll also have to convert ch to a unichar and not a Character for it to work (see this question for more info on Character vs unichar):
let a_code = ("a" as NSString).characterAtIndex(0)
let z_code = ("z" as NSString).characterAtIndex(0)
let ch_code = (String(ch) as NSString).characterAtIndex(0)
if (a_code...z_code).contains(ch_code) {
    println("yep")
} else {
    println("nope")
}
Or, the even more verbose way without using NSString:
let startCharScalars = "a".unicodeScalars
let startCode = startCharScalars[startCharScalars.startIndex]
let endCharScalars = "z".unicodeScalars
let endCode = endCharScalars[endCharScalars.startIndex]
let chScalars = String(ch).unicodeScalars
let chCode = chScalars[chScalars.startIndex]
if (startCode...endCode).contains(chCode) {
    println("yep")
} else {
    println("nope")
}
Note: Both of those examples work only if the character contains a single code point but, as long as we're limited to ASCII, that shouldn't be a problem.
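As a side note, newer Swift versions (Swift 5 and later, where Character gained an asciiValue property) make this kind of ASCII check much shorter; a hedged sketch, assuming a modern toolchain:

// asciiValue returns the UInt8 code for single-scalar ASCII characters, nil otherwise.
let ch: Character = "m"
if let code = ch.asciiValue, (UInt8(ascii: "a")...UInt8(ascii: "z")).contains(code) {
    print("yep")
} else {
    print("nope")
}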