CS143 – 1.
1. Give a regular expression for each of the following languages over the alphabet Σ = {0, 1}:
(a) The set of all strings in which no two consecutive characters are the same.
(b) The set of all strings representing a binary number that is a power of 2. Allow for leading zeros, e.g., 001000.
(c) The set of all strings containing at least one occurrence of 1110 or 1011.
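Before writing regular expressions, it can help to pin each language down with a direct membership test. A minimal Python sketch (the function names are my own, not part of the assignment):

```python
def no_repeats(s):
    # (a) no two consecutive characters are equal
    return all(a != b for a, b in zip(s, s[1:]))

def power_of_two(s):
    # (b) binary numeral for a power of 2; leading zeros allowed
    if not s or set(s) - {"0", "1"}:
        return False
    n = int(s, 2)
    return n > 0 and n & (n - 1) == 0

def has_pattern(s):
    # (c) contains 1110 or 1011 as a substring
    return "1110" in s or "1011" in s
```

For part (b), note that a power of 2 in binary is exactly one 1 surrounded by zeros, which points directly at the shape of the regular expression.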
2. Draw the DFAs for each of the languages from Question 1.
3. Using the techniques covered in class, transform the following NFAs with ε-transitions over the given alphabet Σ into DFAs. Note that a DFA must have a transition defined for every state and symbol pair, whereas an NFA need not. You must take this fact into account in your transformations. Hint: Is there a subset of states the NFA transitions to when fed a symbol for which the set of current states has no explicit transition?
(a) Σ = {a, b, c} (NFA diagram not reproduced)
(b) Σ = {a, b, c} (NFA diagram not reproduced)
(c) Σ = {a, b} (NFA diagram not reproduced)
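The hint above is about the implicit dead state: when a subset of NFA states has no transition on some symbol, the DFA moves to the empty set ∅, a non-accepting trap state. A generic subset-construction sketch in Python (the NFA encoding is my own, not from the handout):

```python
from collections import deque

def eps_closure(states, eps):
    """All states reachable from `states` via zero or more ε-transitions."""
    stack, seen = list(states), set(states)
    while stack:
        q = stack.pop()
        for r in eps.get(q, ()):
            if r not in seen:
                seen.add(r)
                stack.append(r)
    return frozenset(seen)

def subset_construction(start, delta, eps, sigma):
    """delta: (state, symbol) -> set of states; eps: state -> set of states.

    Returns the DFA start state and a transition table over frozensets.
    """
    s0 = eps_closure({start}, eps)
    dfa, work = {}, deque([s0])
    while work:
        S = work.popleft()
        if S in dfa:
            continue
        dfa[S] = {}
        for a in sigma:
            move = set()
            for q in S:
                move |= set(delta.get((q, a), ()))
            T = eps_closure(move, eps)  # may be the empty (dead) set
            dfa[S][a] = T
            if T not in dfa:
                work.append(T)
    return s0, dfa
```

Once ∅ is reached it loops to itself on every symbol, which is exactly the transition the hint asks you to add.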
4. Let L be the language over Σ = {a, b, c} defined as follows: w is in L if at most one character has more than one occurrence in w.
Examples of strings in L: a, baaaaaac. Examples of strings not in L: ccbb, abcab.
Draw an NFA for L.
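The defining condition can be restated as a simple count over the distinct characters of w. A quick membership check (my own helper, just to make the condition concrete):

```python
from collections import Counter

def in_L(w):
    """True iff at most one distinct character occurs more than once in w."""
    counts = Counter(w)
    return sum(1 for c in counts.values() if c > 1) <= 1
```

Checking the given examples against this predicate is a useful sanity test before designing the NFA.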
5. Consider the following tokens and their associated regular expressions, given as a flex scanner specification:
%%
(01|10) printf("snake");
0(01)*1 printf("badger");
(1010*1|0101*0) printf("mushroom");
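These three patterns translate directly into Python's re syntax, so a small harness can show which patterns match a given string exactly (a sanity-check aid, not part of the assignment):

```python
import re

# The three flex patterns, unchanged; names match the printf strings.
patterns = {
    "snake":    r"(01|10)",
    "badger":   r"0(01)*1",
    "mushroom": r"(1010*1|0101*0)",
}

def matching_tokens(s):
    """Names of all patterns that match s in full."""
    return [name for name, pat in patterns.items() if re.fullmatch(pat, s)]
```

Note that some strings (such as "01") match more than one pattern, which is where flex's conflict-resolution rules come in.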
6. Recall from the lecture that, when using regular expressions to scan an input, we resolve conflicts by taking the largest possible match at any point. That is, if we have the following flex scanner specification:
%%
do { return T_Do; }
[A-Za-z_][A-Za-z0-9_]* { return T_Identifier; }
and we see the input string "dot", we will match the second rule and emit T_Identifier for the whole string, not T_Do.
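The maximal-munch strategy described above can be sketched directly: at each position, try every pattern, keep the longest match (breaking ties in favor of the earlier rule), emit it, and repeat. This is a simplified model of what a flex-generated scanner does, using Python's re for the patterns:

```python
import re

def maximal_munch(rules, text):
    """rules: list of (token_name, regex). Returns token names, or raises on failure."""
    tokens, i = [], 0
    while i < len(text):
        best = None
        for name, pat in rules:
            m = re.match(pat, text[i:])
            if m and (best is None or m.end() > best[1]):
                best = (name, m.end())  # longest match wins; earlier rule wins ties
        if best is None or best[1] == 0:
            raise ValueError(f"no match at position {i}")
        tokens.append(best[0])
        i += best[1]
    return tokens
```

With the do/identifier rules above, "dot" yields one T_Identifier token, while "do" alone yields T_Do because the tie between equal-length matches goes to the earlier rule.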
However, it is possible to have a set of regular expressions for which a particular string can be tokenized, but for which taking the largest possible match fails to break the input into tokens. Give an example of a set of regular expressions and an input string such that:
(a) the string can be broken into substrings, where each substring matches one of the regular expressions;
(b) our usual lexer algorithm, taking the largest match at every step, fails to break the string into pieces that each match one of the regular expressions.
Explain how the string can be tokenized and why taking the largest match does not work in this case.
