I'd always thought the wumaodang thing was more about discouraging speech by making it hard to wade through rivers of comment garbage than about actually influencing public opinion. And this seems to be the direction of wider modern thinking on the matter:
Why would anyone bother with such tactics, given how hard it might be for a bot—which has few contacts and no meaningful history of tweeting—to persuade humans? First, persuasion may not be the goal. Some bots exist only to make it harder to discover timely factual information about, say, some ongoing political protests. All that investment in bots may have paid off for the Kremlin: During the protests that followed the disputed parliamentary elections in December 2011, Twitter was brimming with fake accounts that sought to overwhelm the popular hashtags with useless information. One recent study claims that of 46,846 Twitter accounts that participated in discussing the disputed Russian elections, 25,860—more than half!—were bots, posting 440,793 tweets on the subject.
The author notes that various Tibet-related hashtags have been rendered unusable by being filled with crap. I suspect we're going to see increasing employment of similar techniques by various parties here as well, complicated by the fact that party loyalists tend to sound like bots on Twitter anyway.